Creates an image given a prompt. Learn more.
Parameters
body ImageGenerateParams
A text description of the desired image(s). The maximum length is 32000 characters for the GPT image models, 1000 characters for dall-e-2 and 4000 characters for dall-e-3.
Allows setting transparency for the background of the generated image(s).
This parameter is only supported for the GPT image models. Must be one of
transparent, opaque or auto (default value). When auto is used, the
model will automatically determine the best background for the image.
If transparent, the output format needs to support transparency, so it
should be set to either png (default value) or webp.
The model to use for image generation. One of dall-e-2, dall-e-3, or a GPT image model (gpt-image-1, gpt-image-1-mini, gpt-image-1.5). Defaults to dall-e-2 unless a parameter specific to the GPT image models is used.
type ImageModel string
Control the content-moderation level for images generated by the GPT image models. Must be either low for less restrictive filtering or auto (default value).
The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported.
The compression level (0-100%) for the generated images. This parameter is only supported for the GPT image models with the webp or jpeg output formats, and defaults to 100.
The format in which the generated images are returned. This parameter is only supported for the GPT image models. Must be one of png, jpeg, or webp.
The number of partial images to generate. This parameter is used for streaming responses that return partial images. Value must be between 0 and 3. When set to 0, the response will be a single image sent in one streaming event.
Note that the final image may be sent before the full number of partial images is generated if the full image is generated more quickly.
The quality of the image that will be generated.
auto (default value) will automatically select the best quality for the given model.
high, medium and low are supported for the GPT image models.
hd and standard are supported for dall-e-3.
standard is the only option for dall-e-2.
The format in which generated images with dall-e-2 and dall-e-3 are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. This parameter isn't supported for the GPT image models, which always return base64-encoded images.
The size of the generated images. Must be one of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default value) for the GPT image models, one of 256x256, 512x512, or 1024x1024 for dall-e-2, and one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3.
The style of the generated images. This parameter is only supported for dall-e-3. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images.
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.
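The sketch below puts several of the GPT-image-specific parameters together in one request (model, background, output format, quality, size, n). It is a minimal sketch, not a definitive listing: the enum constant names are assumptions based on the SDK's usual naming and may differ between openai-go versions, while the underlying string values ("transparent", "png", "high", "1024x1024") match the API values documented above.

package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	client := openai.NewClient(
		option.WithAPIKey("My API Key"),
	)

	// Request a single transparent PNG from a GPT image model. The enum
	// constant names below are assumptions based on the SDK's usual naming.
	resp, err := client.Images.Generate(context.TODO(), openai.ImageGenerateParams{
		Model:      openai.ImageModelGPTImage1,
		Prompt:     "A cute baby sea otter, isolated on a transparent background",
		Background: openai.ImageGenerateParamsBackgroundTransparent,
		// Transparency requires an output format that supports it: png (default) or webp.
		OutputFormat: openai.ImageGenerateParamsOutputFormatPNG,
		Quality:      openai.ImageGenerateParamsQualityHigh,
		Size:         openai.ImageGenerateParamsSize1024x1024,
		N:            openai.Int(1),
	})
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("generated %d image(s)\n", len(resp.Data))
}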
Returns
type ImagesResponse struct{…}
The response from the image generation endpoint.
The Unix timestamp (in seconds) of when the image was created.
Background ImagesResponseBackground optional
The background parameter used for the image generation. Either transparent or opaque.
The list of generated images.
The base64-encoded JSON of the generated image. Returned by default for the GPT image models, and only present if response_format is set to b64_json for dall-e-2 and dall-e-3.
For dall-e-3 only, the revised prompt that was used to generate the image.
When using dall-e-2 or dall-e-3, the URL of the generated image if response_format is set to url (default value). Unsupported for the GPT image models.
OutputFormat ImagesResponseOutputFormat optional
The output format of the image generation. Either png, webp, or jpeg.
Quality ImagesResponseQuality optional
The quality of the image generated. Either low, medium, or high.
Size ImagesResponseSize optional
The size of the image generated. Either 1024x1024, 1024x1536, or 1536x1024.
Usage ImagesResponseUsage optional
For the GPT image models only, the token usage information for the image generation.
The number of tokens (images and text) in the input prompt.
InputTokensDetails ImagesResponseUsageInputTokensDetails
Detailed information about the input tokens for the image generation.
The number of image tokens in the input prompt.
The number of text tokens in the input prompt.
The number of output tokens generated by the model.
The total number of tokens (images and text) used for the image generation.
OutputTokensDetails ImagesResponseUsageOutputTokensDetails optional
The output token details for the image generation.
The number of image output tokens generated by the model.
The number of text output tokens generated by the model.
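Because the GPT image models always return base64-encoded data, a typical follow-up to Images.Generate is decoding data[0].b64_json, writing it to disk, and, if you track costs, logging the usage breakdown above. The snippet below is a sketch only: it continues from the Create image example at the bottom of this page, assumes the Go field names B64JSON, URL, and the Usage fields mirror the JSON field names shown here, and needs "encoding/base64" and "os" added to that example's import block.

	// Decode and save the first image (GPT image models populate b64_json).
	img := imagesResponse.Data[0]
	raw, err := base64.StdEncoding.DecodeString(img.B64JSON)
	if err != nil {
		panic(err.Error())
	}
	if err := os.WriteFile("otter.png", raw, 0o644); err != nil {
		panic(err.Error())
	}
	// With dall-e-2 / dall-e-3 and the default url response_format, read
	// img.URL instead; the link is only valid for 60 minutes.

	// Log the token accounting documented above (GPT image models only).
	u := imagesResponse.Usage
	fmt.Printf("input %d (image %d, text %d), output %d, total %d\n",
		u.InputTokens,
		u.InputTokensDetails.ImageTokens,
		u.InputTokensDetails.TextTokens,
		u.OutputTokens,
		u.TotalTokens,
	)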
type ImageGenStreamEventUnion interface{…}
A union of the events emitted while streaming an image generation: either ImageGenPartialImageEvent or ImageGenCompletedEvent.
type ImageGenPartialImageEvent struct{…}
Emitted when a partial image is available during image generation streaming.
Base64-encoded partial image data, suitable for rendering as an image.
Background ImageGenPartialImageEventBackground
The background setting for the requested image.
The Unix timestamp when the event was created.
OutputFormat ImageGenPartialImageEventOutputFormat
The output format for the requested image.
0-based index for the partial image (streaming).
Quality ImageGenPartialImageEventQuality
The quality setting for the requested image.
Size ImageGenPartialImageEventSize
The size of the requested image.
The type of the event. Always image_generation.partial_image.
type ImageGenCompletedEvent struct{…}
Emitted when image generation has completed and the final image is available.
Base64-encoded image data, suitable for rendering as an image.
Background ImageGenCompletedEventBackground
The background setting for the generated image.
The Unix timestamp when the event was created.
OutputFormat ImageGenCompletedEventOutputFormat
The output format for the generated image.
Quality ImageGenCompletedEventQuality
The quality setting for the generated image.
Size ImageGenCompletedEventSize
The size of the generated image.
The type of the event. Always image_generation.completed.
Usage ImageGenCompletedEventUsage
For the GPT image models only, the token usage information for the image generation.
The number of tokens (images and text) in the input prompt.
InputTokensDetails ImageGenCompletedEventUsageInputTokensDetails
Detailed information about the input tokens for the image generation.
The number of image tokens in the input prompt.
The number of text tokens in the input prompt.
The number of image tokens in the output image.
The total number of tokens (images and text) used for the image generation.
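When partial_images is set above zero, the generation can be consumed as a stream of the events documented above. The sketch below assumes a client constructed as in the Create image example below, that the streaming variant of the call is exposed as Images.GenerateStreaming with the usual Next/Current/Err stream interface, and that the union exposes its concrete variant via AsAny(); those names follow the SDK's conventions but are not confirmed by this page, so treat them and the PartialImageIndex field name as assumptions.

	// Sketch: stream up to 2 partial frames, then the final image.
	// GenerateStreaming, AsAny(), and PartialImageIndex are assumed names.
	stream := client.Images.GenerateStreaming(context.TODO(), openai.ImageGenerateParams{
		Model:         openai.ImageModelGPTImage1,
		Prompt:        "A cute baby sea otter",
		PartialImages: openai.Int(2), // 0-3; 0 sends only the final image
	})
	for stream.Next() {
		switch ev := stream.Current().AsAny().(type) {
		case openai.ImageGenPartialImageEvent:
			fmt.Printf("partial image %d: %d bytes of base64\n",
				ev.PartialImageIndex, len(ev.B64JSON))
		case openai.ImageGenCompletedEvent:
			fmt.Printf("done: %s %s, %d output tokens\n",
				ev.Size, ev.Quality, ev.Usage.OutputTokens)
		}
	}
	if err := stream.Err(); err != nil {
		panic(err.Error())
	}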
Create image
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	// Construct a client; without an explicit key option, the SDK reads
	// the OPENAI_API_KEY environment variable.
	client := openai.NewClient(
		option.WithAPIKey("My API Key"),
	)

	// Generate a single image from a text prompt using the default model.
	imagesResponse, err := client.Images.Generate(context.TODO(), openai.ImageGenerateParams{
		Prompt: "A cute baby sea otter",
	})
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("%+v\n", imagesResponse)
}
{
"created": 0,
"background": "transparent",
"data": [
{
"b64_json": "b64_json",
"revised_prompt": "revised_prompt",
"url": "url"
}
],
"output_format": "png",
"quality": "low",
"size": "1024x1024",
"usage": {
"input_tokens": 0,
"input_tokens_details": {
"image_tokens": 0,
"text_tokens": 0
},
"output_tokens": 0,
"total_tokens": 0,
"output_tokens_details": {
"image_tokens": 0,
"text_tokens": 0
}
}
}