Images

Create image
client.images.generate(body: ImageGenerateParams, options?: RequestOptions): ImagesResponse | Stream<ImageGenStreamEvent>
POST /images/generations
Create image edit
client.images.edit(body: ImageEditParams, options?: RequestOptions): ImagesResponse | Stream<ImageEditStreamEvent>
POST /images/edits
Create image variation
client.images.createVariation(body: ImageCreateVariationParams, options?: RequestOptions): ImagesResponse
POST /images/variations
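A minimal sketch of calling the generation endpoint and saving the result. It assumes `client` is an initialized client from the `openai` npm package (e.g. `const client = new OpenAI()`); the model name and prompt are illustrative:

```typescript
import { writeFileSync } from "node:fs";

// Assumption: `client` is an initialized OpenAI client from the `openai`
// npm package; only the shape we use here is declared.
declare const client: {
  images: {
    generate(body: Record<string, unknown>): Promise<{ data?: { b64_json?: string }[] }>;
  };
};

// Decode a base64 payload into a binary buffer suitable for writing to disk.
function b64ToBuffer(b64: string): Buffer {
  return Buffer.from(b64, "base64");
}

async function generateAndSave(): Promise<void> {
  const res = await client.images.generate({
    model: "gpt-image-1",
    prompt: "A watercolor fox in a forest",
    size: "1024x1024",
  });
  // GPT image models return b64_json by default (see the Image type below).
  const b64 = res.data?.[0]?.b64_json;
  if (b64) writeFileSync("fox.png", b64ToBuffer(b64));
}
```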
Models
Image { b64_json, revised_prompt, url }

Represents the content or the URL of an image generated by the OpenAI API.

b64_json?: string

The base64-encoded image data. Returned by default for the GPT image models, and only present if response_format is set to b64_json for dall-e-2 and dall-e-3.

revised_prompt?: string

For dall-e-3 only, the revised prompt that was used to generate the image.

url?: string

When using dall-e-2 or dall-e-3, the URL of the generated image if response_format is set to url (default value). Unsupported for the GPT image models.

ImageEditCompletedEvent { b64_json, background, created_at, 5 more }

Emitted when image editing has completed and the final image is available.

b64_json: string

Base64-encoded final edited image data, suitable for rendering as an image.

background: "transparent" | "opaque" | "auto"

The background setting for the edited image.

Accepts one of the following:
"transparent"
"opaque"
"auto"
created_at: number

The Unix timestamp when the event was created.

output_format: "png" | "webp" | "jpeg"

The output format for the edited image.

Accepts one of the following:
"png"
"webp"
"jpeg"
quality: "low" | "medium" | "high" | "auto"

The quality setting for the edited image.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the edited image.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
type: "image_edit.completed"

The type of the event. Always image_edit.completed.

usage: Usage { input_tokens, input_tokens_details, output_tokens, total_tokens }

For the GPT image models only, the token usage information for the image generation.

input_tokens: number

The number of tokens (images and text) in the input prompt.

input_tokens_details: InputTokensDetails { image_tokens, text_tokens }

The input tokens detailed information for the image generation.

image_tokens: number

The number of image tokens in the input prompt.

text_tokens: number

The number of text tokens in the input prompt.

output_tokens: number

The number of image tokens in the output image.

total_tokens: number

The total number of tokens (images and text) used for the image generation.

ImageEditPartialImageEvent { b64_json, background, created_at, 5 more }

Emitted when a partial image is available during image editing streaming.

b64_json: string

Base64-encoded partial image data, suitable for rendering as an image.

background: "transparent" | "opaque" | "auto"

The background setting for the requested edited image.

Accepts one of the following:
"transparent"
"opaque"
"auto"
created_at: number

The Unix timestamp when the event was created.

output_format: "png" | "webp" | "jpeg"

The output format for the requested edited image.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_image_index: number

0-based index for the partial image (streaming).

quality: "low" | "medium" | "high" | "auto"

The quality setting for the requested edited image.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the requested edited image.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
type: "image_edit.partial_image"

The type of the event. Always image_edit.partial_image.

ImageEditStreamEvent = ImageEditPartialImageEvent { b64_json, background, created_at, 5 more } | ImageEditCompletedEvent { b64_json, background, created_at, 5 more }

A streaming event emitted during image editing: either a partial image or the final edited image.

Accepts one of the following:
ImageEditPartialImageEvent { b64_json, background, created_at, 5 more }
ImageEditCompletedEvent { b64_json, background, created_at, 5 more }

Both variants are documented above.
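The two edit events form a discriminated union on the `type` field. A sketch of narrowing on that discriminant, using minimal local copies of the event shapes (the real types ship with the `openai` package; only the fields used here are declared):

```typescript
// Illustrative local subsets of the documented event shapes.
type ImageEditPartialImageEvent = {
  type: "image_edit.partial_image";
  b64_json: string;
  partial_image_index: number;
};
type ImageEditCompletedEvent = {
  type: "image_edit.completed";
  b64_json: string;
};
type ImageEditStreamEvent = ImageEditPartialImageEvent | ImageEditCompletedEvent;

// Switching on `type` narrows the union, so each branch sees only
// the fields of its own variant.
function describeEvent(event: ImageEditStreamEvent): string {
  switch (event.type) {
    case "image_edit.partial_image":
      return `partial image #${event.partial_image_index}`;
    case "image_edit.completed":
      return "final image ready";
  }
}
```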

ImageGenCompletedEvent { b64_json, background, created_at, 5 more }

Emitted when image generation has completed and the final image is available.

b64_json: string

Base64-encoded image data, suitable for rendering as an image.

background: "transparent" | "opaque" | "auto"

The background setting for the generated image.

Accepts one of the following:
"transparent"
"opaque"
"auto"
created_at: number

The Unix timestamp when the event was created.

output_format: "png" | "webp" | "jpeg"

The output format for the generated image.

Accepts one of the following:
"png"
"webp"
"jpeg"
quality: "low" | "medium" | "high" | "auto"

The quality setting for the generated image.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the generated image.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
type: "image_generation.completed"

The type of the event. Always image_generation.completed.

usage: Usage { input_tokens, input_tokens_details, output_tokens, total_tokens }

For the GPT image models only, the token usage information for the image generation.

input_tokens: number

The number of tokens (images and text) in the input prompt.

input_tokens_details: InputTokensDetails { image_tokens, text_tokens }

The input tokens detailed information for the image generation.

image_tokens: number

The number of image tokens in the input prompt.

text_tokens: number

The number of text tokens in the input prompt.

output_tokens: number

The number of image tokens in the output image.

total_tokens: number

The total number of tokens (images and text) used for the image generation.

ImageGenPartialImageEvent { b64_json, background, created_at, 5 more }

Emitted when a partial image is available during image generation streaming.

b64_json: string

Base64-encoded partial image data, suitable for rendering as an image.

background: "transparent" | "opaque" | "auto"

The background setting for the requested image.

Accepts one of the following:
"transparent"
"opaque"
"auto"
created_at: number

The Unix timestamp when the event was created.

output_format: "png" | "webp" | "jpeg"

The output format for the requested image.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_image_index: number

0-based index for the partial image (streaming).

quality: "low" | "medium" | "high" | "auto"

The quality setting for the requested image.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the requested image.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
type: "image_generation.partial_image"

The type of the event. Always image_generation.partial_image.

ImageGenStreamEvent = ImageGenPartialImageEvent { b64_json, background, created_at, 5 more } | ImageGenCompletedEvent { b64_json, background, created_at, 5 more }

A streaming event emitted during image generation: either a partial image or the final generated image.

Accepts one of the following:
ImageGenPartialImageEvent { b64_json, background, created_at, 5 more }
ImageGenCompletedEvent { b64_json, background, created_at, 5 more }

Both variants are documented above.
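A sketch of consuming the generation stream: partial-image events arrive first (indexed by `partial_image_index`), followed by a single completed event carrying the final image. The `collectFrames` helper below works on an already-buffered event list; the live loop assumes `client` is an initialized client from the `openai` package and that `stream: true` yields an async iterable of these events:

```typescript
// Illustrative local subset of the documented event shapes.
type ImageGenStreamEventLike =
  | { type: "image_generation.partial_image"; b64_json: string; partial_image_index: number }
  | { type: "image_generation.completed"; b64_json: string };

// Separate partial frames from the final image in a buffered event sequence.
function collectFrames(events: ImageGenStreamEventLike[]): { partials: string[]; final?: string } {
  const partials: string[] = [];
  let final: string | undefined;
  for (const event of events) {
    if (event.type === "image_generation.partial_image") partials.push(event.b64_json);
    else final = event.b64_json;
  }
  return { partials, final };
}

// Assumption: `client` is an initialized OpenAI client; only the shape used
// here is declared.
declare const client: {
  images: { generate(body: object): Promise<AsyncIterable<ImageGenStreamEventLike>> };
};

async function streamGeneration(): Promise<string | undefined> {
  const stream = await client.images.generate({
    model: "gpt-image-1",
    prompt: "A city skyline at dusk",
    stream: true,
  });
  let finalB64: string | undefined;
  for await (const event of stream) {
    if (event.type === "image_generation.partial_image") {
      // Preview or render the intermediate frame here.
    } else {
      finalB64 = event.b64_json;
    }
  }
  return finalB64;
}
```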

ImageModel = "gpt-image-1.5" | "dall-e-2" | "dall-e-3" | 2 more
Accepts one of the following:
"gpt-image-1.5"
"dall-e-2"
"dall-e-3"
"gpt-image-1"
"gpt-image-1-mini"
ImagesResponse { created, background, data, 4 more }

The response from the image generation endpoint.

created: number

The Unix timestamp (in seconds) of when the image was created.

background?: "transparent" | "opaque"

The background parameter used for the image generation. Either transparent or opaque.

Accepts one of the following:
"transparent"
"opaque"
data?: Array<Image { b64_json, revised_prompt, url } >

The list of generated images.

b64_json?: string

The base64-encoded image data. Returned by default for the GPT image models, and only present if response_format is set to b64_json for dall-e-2 and dall-e-3.

revised_prompt?: string

For dall-e-3 only, the revised prompt that was used to generate the image.

url?: string

When using dall-e-2 or dall-e-3, the URL of the generated image if response_format is set to url (default value). Unsupported for the GPT image models.

output_format?: "png" | "webp" | "jpeg"

The output format of the image generation. Either png, webp, or jpeg.

Accepts one of the following:
"png"
"webp"
"jpeg"
quality?: "low" | "medium" | "high"

The quality of the image generated. Either low, medium, or high.

Accepts one of the following:
"low"
"medium"
"high"
size?: "1024x1024" | "1024x1536" | "1536x1024"

The size of the image generated. Either 1024x1024, 1024x1536, or 1536x1024.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
usage?: Usage { input_tokens, input_tokens_details, output_tokens, 2 more }

For gpt-image-1 only, the token usage information for the image generation.

input_tokens: number

The number of tokens (images and text) in the input prompt.

input_tokens_details: InputTokensDetails { image_tokens, text_tokens }

The input tokens detailed information for the image generation.

image_tokens: number

The number of image tokens in the input prompt.

text_tokens: number

The number of text tokens in the input prompt.

output_tokens: number

The number of output tokens generated by the model.

total_tokens: number

The total number of tokens (images and text) used for the image generation.

output_tokens_details?: OutputTokensDetails { image_tokens, text_tokens }

The output token details for the image generation.

image_tokens: number

The number of image output tokens generated by the model.

text_tokens: number

The number of text output tokens generated by the model.
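Reading the field descriptions together, the detail counts should add up: the input image and text tokens sum to input_tokens, and input plus output tokens sum to total_tokens. A small consistency check under that assumption (the additive relationships are inferred from the descriptions, not stated explicitly by the API; only the fields used are declared):

```typescript
// Illustrative local subset of the documented Usage shape.
type UsageLike = {
  input_tokens: number;
  output_tokens: number;
  total_tokens: number;
  input_tokens_details: { image_tokens: number; text_tokens: number };
};

// Verify the documented breakdowns are internally consistent:
// image + text input tokens == input_tokens, and
// input + output tokens == total_tokens.
function usageIsConsistent(u: UsageLike): boolean {
  const { image_tokens, text_tokens } = u.input_tokens_details;
  return (
    image_tokens + text_tokens === u.input_tokens &&
    u.input_tokens + u.output_tokens === u.total_tokens
  );
}
```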