Creates an edited or extended image given one or more source images and a prompt. This endpoint supports GPT Image models (gpt-image-1.5, gpt-image-1, gpt-image-1-mini, and chatgpt-image-latest) and dall-e-2.
Parameters
body ImageEditParams
Image param.Field[ImageEditParamsImageUnion]
The image(s) to edit. Must be a supported image file or an array of images.
For the GPT image models (gpt-image-1, gpt-image-1-mini, and gpt-image-1.5), each image should be a png, webp, or jpg file less than 50MB. You can provide up to 16 images.
chatgpt-image-latest follows the same input constraints as GPT image models.
For dall-e-2, you can only provide one image, and it should be a square png file less than 4MB.
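For instance, a minimal sketch of passing a single source image from disk, reusing the client constructed in the full example at the bottom of this page (the file path and prompt are illustrative):

// Open a source image; any io.Reader is accepted via the union's OfFile member.
// "source.png" is an illustrative path, not part of the API. Requires the "os" package.
f, err := os.Open("source.png")
if err != nil {
	panic(err)
}
defer f.Close()

resp, err := client.Images.Edit(context.TODO(), openai.ImageEditParams{
	Image:  openai.ImageEditParamsImageUnion{OfFile: f},
	Prompt: "Add falling snow to the scene",
})

To send several images to a GPT image model, use the union's array member instead of OfFile; its exact name is generated by the SDK.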
A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2, and 32000 characters for the GPT image models.
Allows you to set transparency for the background of the generated image(s).
This parameter is only supported for the GPT image models. Must be one of
transparent, opaque, or auto (default value). When auto is used, the
model will automatically determine the best background for the image.
If transparent, the output format needs to support transparency, so it
should be set to either png (default value) or webp.
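As a rough sketch, a transparent-background edit could be requested like this; the Background and OutputFormat field names and the enum constants are assumed from the SDK's usual naming and should be checked against the generated code:

resp, err := client.Images.Edit(context.TODO(), openai.ImageEditParams{
	Image:  openai.ImageEditParamsImageUnion{OfFile: f},
	Prompt: "Cut the subject out onto a transparent background",
	// Field and constant names below are assumptions, not verified signatures.
	Background:   openai.ImageEditParamsBackgroundTransparent,
	OutputFormat: openai.ImageEditParamsOutputFormatPNG, // png or webp is required for transparency
})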
Controls how much effort the model will exert to match the style and features, especially facial features, of the input images. This parameter is supported for gpt-image-1, gpt-image-1.5, and later models, and is unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where the image should be edited. If multiple images are provided, the mask will be applied to the first image. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.
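A hedged sketch of inpainting with a mask; the Mask field is assumed to accept an io.Reader, mirroring the image input, and srcFile stands for an already opened source image:

// Fully transparent pixels in the mask mark the region to edit.
// "mask.png" is an illustrative path, not part of the API.
maskFile, err := os.Open("mask.png")
if err != nil {
	panic(err)
}
defer maskFile.Close()

resp, err := client.Images.Edit(context.TODO(), openai.ImageEditParams{
	Image:  openai.ImageEditParamsImageUnion{OfFile: srcFile},
	Prompt: "Replace the masked area with a clear blue sky",
	Mask:   maskFile, // field name and io.Reader type are assumed here
})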
The model to use for image generation. Defaults to gpt-image-1.5.
The number of images to generate. Must be between 1 and 10.
The compression level (0-100%) for the generated images. This parameter
is only supported for the GPT image models with the webp or jpeg output
formats, and defaults to 100.
The format in which the generated images are returned. This parameter is
only supported for the GPT image models. Must be one of png, jpeg, or webp.
The default value is png.
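For example, jpeg output with a lower compression level might be requested as follows; OutputCompression is assumed to be an optional integer set via openai.Int, and the enum constant name is likewise an assumption:

resp, err := client.Images.Edit(context.TODO(), openai.ImageEditParams{
	Image:  openai.ImageEditParamsImageUnion{OfFile: srcFile},
	Prompt: "Increase the contrast and warmth",
	// Names below are assumed from the SDK's conventions, not verified.
	OutputFormat:      openai.ImageEditParamsOutputFormatJPEG,
	OutputCompression: openai.Int(80), // 0-100, defaults to 100
})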
The number of partial images to generate. This parameter is used for streaming responses that return partial images. Value must be between 0 and 3. When set to 0, the response will be a single image sent in one streaming event.
Note that the final image may be sent before the full number of partial images are generated if the full image is generated more quickly.
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. This parameter is only supported for dall-e-2 (default is url for dall-e-2), as GPT image models always return base64-encoded images.
The size of the generated images. Must be one of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default value) for the GPT image models, and one of 256x256, 512x512, or 1024x1024 for dall-e-2.
A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
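Putting the dall-e-2-specific options together, a sketch could look like the following; the Model, N, Size, ResponseFormat, and User field names and constants are assumptions based on the SDK's naming conventions, and squarePNG stands for an opened square png under 4MB:

resp, err := client.Images.Edit(context.TODO(), openai.ImageEditParams{
	Image:  openai.ImageEditParamsImageUnion{OfFile: squarePNG},
	Prompt: "Add a sailboat on the horizon",
	// Names below are assumptions, not verified signatures.
	Model:          openai.ImageModelDallE2,
	N:              openai.Int(2),
	Size:           openai.ImageEditParamsSize512x512,
	ResponseFormat: openai.ImageEditParamsResponseFormatURL, // URLs expire 60 minutes after generation
	User:           openai.String("user-1234"),
})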
Returns
type ImagesResponse struct{…}
The response from the image generation endpoint.
Background ImagesResponseBackground (optional)
The background parameter used for the image generation. Either transparent or opaque.
The list of generated images.
The base64-encoded JSON of the generated image. Returned by default for the GPT image models, and only present if response_format is set to b64_json for dall-e-2 and dall-e-3.
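Because the GPT image models return base64-encoded data, a common follow-up is decoding the first item and writing it to disk. The sketch below assumes resp is an ImagesResponse from a prior Edit call and that the data list and b64_json key map to Data and B64JSON in the Go struct; it uses the "encoding/base64" and "os" packages:

// Decode the first generated image and save it; the output path is illustrative.
if len(resp.Data) > 0 {
	raw, err := base64.StdEncoding.DecodeString(resp.Data[0].B64JSON) // field name assumed from b64_json
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("edited.png", raw, 0o644); err != nil {
		panic(err)
	}
}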
OutputFormat ImagesResponseOutputFormat (optional)
The output format of the image generation. Either png, webp, or jpeg.
Quality ImagesResponseQuality (optional)
The quality of the image generated. Either low, medium, or high.
Size ImagesResponseSize (optional)
The size of the image generated. Either 1024x1024, 1024x1536, or 1536x1024.
type ImageEditStreamEventUnion interface{…}
An event emitted during image editing streaming: either a partial image event or the completed event.
type ImageEditPartialImageEvent struct{…}
Emitted when a partial image is available during image editing streaming.
Background ImageEditPartialImageEventBackground
The background setting for the requested edited image.
OutputFormat ImageEditPartialImageEventOutputFormat
The output format for the requested edited image.
Quality ImageEditPartialImageEventQuality
The quality setting for the requested edited image.
type ImageEditCompletedEvent struct{…}
Emitted when image editing has completed and the final image is available.
Quality ImageEditCompletedEventQuality
The quality setting for the edited image.
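A rough sketch of consuming these events is below. The EditStreaming method, the Next/Current/Err iteration pattern, the AsAny variant accessor, and the PartialImages and PartialImageIndex names are all assumptions modeled on the SDK's other streaming helpers, not confirmed signatures:

stream := client.Images.EditStreaming(context.TODO(), openai.ImageEditParams{
	Image:         openai.ImageEditParamsImageUnion{OfFile: srcFile},
	Prompt:        "Turn the sketch into a watercolor painting",
	PartialImages: openai.Int(2), // up to 3 partial frames before the final image
})
for stream.Next() {
	// AsAny (assumed) yields the concrete event variant behind the union.
	switch event := stream.Current().AsAny().(type) {
	case openai.ImageEditPartialImageEvent:
		fmt.Println("partial image", event.PartialImageIndex)
	case openai.ImageEditCompletedEvent:
		fmt.Println("editing complete, quality:", event.Quality)
	}
}
if err := stream.Err(); err != nil {
	panic(err)
}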
Create image edit
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	client := openai.NewClient(
		option.WithAPIKey("My API Key"), // or rely on the OPENAI_API_KEY environment variable
	)

	// The in-memory buffer below only illustrates that any io.Reader is accepted;
	// in practice, pass a real png, webp, or jpg file (e.g. from os.Open).
	imagesResponse, err := client.Images.Edit(context.TODO(), openai.ImageEditParams{
		Image: openai.ImageEditParamsImageUnion{
			OfFile: io.Reader(bytes.NewBuffer([]byte("Example data"))),
		},
		Prompt: "A cute baby sea otter wearing a beret",
	})
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("%+v\n", imagesResponse)
}
{
"created": 0,
"background": "transparent",
"data": [
{
"b64_json": "b64_json",
"revised_prompt": "revised_prompt",
"url": "url"
}
],
"output_format": "png",
"quality": "low",
"size": "1024x1024",
"usage": {
"input_tokens": 0,
"input_tokens_details": {
"image_tokens": 0,
"text_tokens": 0
},
"output_tokens": 0,
"total_tokens": 0,
"output_tokens_details": {
"image_tokens": 0,
"text_tokens": 0
}
}
}