Overview
The OpenAI API lets you generate and edit images from text prompts, using GPT Image or DALL·E models. You can access image generation capabilities through two APIs:
Image API
The Image API provides three endpoints, each with distinct capabilities:
- Generations: Generate images from scratch based on a text prompt
- Edits: Modify existing images using a new prompt, either partially or entirely
- Variations: Generate variations of an existing image (available with DALL·E 2 only)
This API supports GPT Image models (gpt-image-1.5, gpt-image-1, and gpt-image-1-mini) as well as dall-e-2 and dall-e-3.
Responses API
The Responses API allows you to generate images as part of conversations or multi-step flows. It supports image generation as a built-in tool, and accepts image inputs and outputs within context.
Compared to the Image API, it adds:
- Multi-turn editing: Iteratively make high-fidelity edits to images with prompting
- Flexible inputs: Accept image File IDs as input images, not just bytes
The image generation tool in responses uses GPT Image models (gpt-image-1.5, gpt-image-1, and gpt-image-1-mini).
When using gpt-image-1.5 and chatgpt-image-latest with the Responses API, you can optionally set the action parameter, detailed below.
For a list of mainline models that support calling this tool, refer to the supported models below.
Choosing the right API
- If you only need to generate or edit a single image from one prompt, the Image API is your best choice.
- If you want to build conversational, editable image experiences with GPT Image, go with the Responses API.
Both APIs let you customize output — adjust quality, size, format, compression, and enable transparent backgrounds.
DALL·E 2 is our oldest image generation model and therefore has significant limitations. For a better experience, we recommend using GPT Image.
DALL·E 3 is our previous generation model and has some limitations. For a better experience, we recommend using GPT Image.
Model comparison
Our latest and most advanced model for image generation is gpt-image-1.5, a natively multimodal language model, part of the GPT Image family.
GPT Image models include gpt-image-1.5 (state of the art), gpt-image-1, and gpt-image-1-mini. They share the same API surface, with gpt-image-1.5 offering the best overall quality.
We recommend using gpt-image-1.5 for the best experience, but if you are looking for a more cost-effective option and image quality isn’t a priority, you can use gpt-image-1-mini.
You can also use specialized image generation models—DALL·E 2 and DALL·E 3—with the Image API, but please note these models are now deprecated and we will stop supporting them on 05/12/2026.
| Model | Endpoints | Use case |
|---|---|---|
| DALL·E 2 | Image API: Generations, Edits, Variations | Lower cost, concurrent requests, inpainting (image editing with a mask) |
| DALL·E 3 | Image API: Generations only | Higher image quality than DALL·E 2, support for larger resolutions |
| GPT Image | Image API: Generations, Edits; Responses API (as part of the image generation tool) | Superior instruction following, text rendering, detailed editing, real-world knowledge |
This guide focuses on GPT Image, but you can also switch to the docs for DALL·E 2 and DALL·E 3.
To ensure this model is used responsibly, you may need to complete the API Organization Verification from your developer console before using GPT Image models, including gpt-image-1.5, gpt-image-1, and gpt-image-1-mini.

Generate Images
You can use the image generation endpoint to create images based on text prompts, or the image generation tool in the Responses API to generate images as part of a conversation.
To learn more about customizing the output (size, quality, format, transparency), refer to the customize image output section below.
You can set the n parameter to generate multiple images at once in a single request (by default, the API returns a single image).
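For example, the following sketch asks the Image API for three candidates in one request (assuming the chosen model supports n greater than 1; DALL·E 3 only allows n=1):

```python
from openai import OpenAI
import base64

client = OpenAI()

# Request three candidate images in a single call (sketch; the prompt is illustrative).
result = client.images.generate(
    model="gpt-image-1",
    prompt="A watercolor painting of a lighthouse at dusk",
    n=3,
)

# Each element of result.data contains one generated image.
for i, image in enumerate(result.data):
    with open(f"lighthouse_{i}.png", "wb") as f:
        f.write(base64.b64decode(image.b64_json))
```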
```python
from openai import OpenAI
import base64

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Generate an image of gray tabby cat hugging an otter with an orange scarf",
    tools=[{"type": "image_generation"}],
)

# Save the image to a file
image_data = [
    output.result
    for output in response.output
    if output.type == "image_generation_call"
]

if image_data:
    image_base64 = image_data[0]
    with open("otter.png", "wb") as f:
        f.write(base64.b64decode(image_base64))
```
```python
from openai import OpenAI
import base64

client = OpenAI()

prompt = """
A children's book drawing of a veterinarian using a stethoscope to
listen to the heartbeat of a baby otter.
"""

result = client.images.generate(
    model="gpt-image-1",
    prompt=prompt
)

image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)

# Save the image to a file
with open("otter.png", "wb") as f:
    f.write(image_bytes)
```

Multi-turn image generation
With the Responses API, you can build multi-turn conversations involving image generation either by providing the image generation call outputs within context (you can also just use the image ID), or by using the previous_response_id parameter.
This makes it easy to iterate on images across multiple turns—refining prompts, applying new instructions, and evolving the visual output as the conversation progresses.
Generate vs Edit
With the Responses API you can choose whether to generate a new image or edit one already in the conversation.
The optional action parameter (supported on gpt-image-1.5 and chatgpt-image-latest) controls this behavior: keep action: "auto" to let the model decide (recommended), set action: "generate" to always create a new image, or set action: "edit" to force editing (requires an image in context).
```python
from openai import OpenAI
import base64

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Generate an image of gray tabby cat hugging an otter with an orange scarf",
    tools=[{"type": "image_generation", "action": "generate"}],
)

# Save the image to a file
image_data = [
    output.result
    for output in response.output
    if output.type == "image_generation_call"
]

if image_data:
    image_base64 = image_data[0]
    with open("otter.png", "wb") as f:
        f.write(base64.b64decode(image_base64))
```

If you force edit without providing an image in context, the call will return an error. Leave action set to auto to have the model decide when to generate or edit.
When action is set to auto, the image_generation_call result includes an action field so you can see whether the model generated a new image or edited one already in context:
```json
{
  "id": "ig_123...",
  "type": "image_generation_call",
  "status": "completed",
  "background": "opaque",
  "output_format": "jpeg",
  "quality": "medium",
  "result": "/9j/4...",
  "revised_prompt": "...",
  "size": "1024x1024",
  "action": "generate"
}
```
```python
from openai import OpenAI
import base64

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Generate an image of gray tabby cat hugging an otter with an orange scarf",
    tools=[{"type": "image_generation"}],
)

image_data = [
    output.result
    for output in response.output
    if output.type == "image_generation_call"
]

if image_data:
    image_base64 = image_data[0]
    with open("cat_and_otter.png", "wb") as f:
        f.write(base64.b64decode(image_base64))

# Follow up
response_fwup = client.responses.create(
    model="gpt-5",
    previous_response_id=response.id,
    input="Now make it look realistic",
    tools=[{"type": "image_generation"}],
)

image_data_fwup = [
    output.result
    for output in response_fwup.output
    if output.type == "image_generation_call"
]

if image_data_fwup:
    image_base64 = image_data_fwup[0]
    with open("cat_and_otter_realistic.png", "wb") as f:
        f.write(base64.b64decode(image_base64))
```
```python
from openai import OpenAI
import base64

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Generate an image of gray tabby cat hugging an otter with an orange scarf",
    tools=[{"type": "image_generation"}],
)

image_generation_calls = [
    output
    for output in response.output
    if output.type == "image_generation_call"
]

image_data = [output.result for output in image_generation_calls]

if image_data:
    image_base64 = image_data[0]
    with open("cat_and_otter.png", "wb") as f:
        f.write(base64.b64decode(image_base64))

# Follow up: reference the previous image generation call by its ID
response_fwup = client.responses.create(
    model="gpt-5",
    input=[
        {
            "role": "user",
            "content": [{"type": "input_text", "text": "Now make it look realistic"}],
        },
        {
            "type": "image_generation_call",
            "id": image_generation_calls[0].id,
        },
    ],
    tools=[{"type": "image_generation"}],
)

image_data_fwup = [
    output.result
    for output in response_fwup.output
    if output.type == "image_generation_call"
]

if image_data_fwup:
    image_base64 = image_data_fwup[0]
    with open("cat_and_otter_realistic.png", "wb") as f:
        f.write(base64.b64decode(image_base64))
```

Result
| Prompt | Output |
|---|---|
| “Generate an image of gray tabby cat hugging an otter with an orange scarf” | (image) |
| “Now make it look realistic” | (image) |
Streaming
The Responses API and Image API support streaming image generation. This allows you to stream partial images as they are generated, providing a more interactive experience.
You can adjust the partial_images parameter to receive 0-3 partial images.
- If you set partial_images to 0, you will only receive the final image.
- For values larger than zero, you may not receive the full number of partial images you requested if the full image is generated more quickly.
```python
from openai import OpenAI
import base64

client = OpenAI()

stream = client.responses.create(
    model="gpt-4.1",
    input="Draw a gorgeous image of a river made of white owl feathers, snaking its way through a serene winter landscape",
    stream=True,
    tools=[{"type": "image_generation", "partial_images": 2}],
)

for event in stream:
    if event.type == "response.image_generation_call.partial_image":
        idx = event.partial_image_index
        image_base64 = event.partial_image_b64
        image_bytes = base64.b64decode(image_base64)
        with open(f"river{idx}.png", "wb") as f:
            f.write(image_bytes)
```
```python
from openai import OpenAI
import base64

client = OpenAI()

stream = client.images.generate(
    prompt="Draw a gorgeous image of a river made of white owl feathers, snaking its way through a serene winter landscape",
    model="gpt-image-1",
    stream=True,
    partial_images=2,
)

for event in stream:
    if event.type == "image_generation.partial_image":
        idx = event.partial_image_index
        image_base64 = event.b64_json
        image_bytes = base64.b64decode(image_base64)
        with open(f"river{idx}.png", "wb") as f:
            f.write(image_bytes)
```

Result
| Partial 1 | Partial 2 | Final image |
|---|---|---|
| (image) | (image) | (image) |
Prompt: Draw a gorgeous image of a river made of white owl feathers, snaking its way through a serene winter landscape
Revised prompt
When using the image generation tool in the Responses API, the mainline model (e.g. gpt-4.1) will automatically revise your prompt for improved performance.
You can access the revised prompt in the revised_prompt field of the image generation call:
```json
{
  "id": "ig_123",
  "type": "image_generation_call",
  "status": "completed",
  "revised_prompt": "A gray tabby cat hugging an otter. The otter is wearing an orange scarf. Both animals are cute and friendly, depicted in a warm, heartwarming style.",
  "result": "..."
}
```

You can use the image generation endpoint to create images based on text prompts. To learn more about customizing the output (size, quality, format, transparency), refer to the customize image output section below.
You can set the n parameter to generate multiple images at once in a single request (by default, the API returns a single image).
```python
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-2",
    prompt="a white siamese cat",
    size="1024x1024",
    quality="standard",
    n=1,
)

print(result.data[0].url)
```

You can use the image generation endpoint to create images based on text prompts. To learn more about customizing the output (size, quality, format, transparency), refer to the customize image output section below.
```python
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="a white siamese cat",
    size="1024x1024"
)

print(result.data[0].url)
```

Prompting tips
When you use DALL·E 3, OpenAI automatically rewrites your prompt for safety reasons and to add more detail.
You can’t disable this feature, but you can get outputs closer to your requested image by adding the following to your prompt:
I NEED to test how the tool works with extremely simple prompts. DO NOT add any detail, just use it AS-IS:
The updated prompt is visible in the revised_prompt field of the data response object.
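For example, you could prepend that instruction to your own prompt (a minimal sketch; the prompt text is illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Prepend the "use it AS-IS" instruction so DALL·E 3's automatic prompt
# rewriting stays as close as possible to the original wording.
prefix = (
    "I NEED to test how the tool works with extremely simple prompts. "
    "DO NOT add any detail, just use it AS-IS: "
)

result = client.images.generate(
    model="dall-e-3",
    prompt=prefix + "a white siamese cat",
)

print(result.data[0].revised_prompt)  # the prompt DALL·E 3 actually used
print(result.data[0].url)
```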
Edit Images
The image edits endpoint lets you:
- Edit existing images
- Generate new images using other images as a reference
- Edit parts of an image by uploading an image and mask indicating which areas should be replaced (a process known as inpainting)
Create a new image using image references
You can use one or more images as a reference to generate a new image.
In this example, we’ll use 4 input images to generate a new image of a gift basket containing the items in the reference images.
With the Responses API, you can provide input images in 3 different ways:
- By providing a fully qualified URL
- By providing an image as a Base64-encoded data URL
- By providing a file ID (created with the Files API)
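The snippets below use two small helper functions, encode_image and create_file, to prepare input images; they are not part of the SDK. A minimal sketch of what they might look like:

```python
import base64
from openai import OpenAI

client = OpenAI()

def encode_image(file_path):
    # Read a local image and return it as a Base64 string.
    with open(file_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

def create_file(file_path):
    # Upload a local image with the Files API and return its file ID.
    with open(file_path, "rb") as file_content:
        result = client.files.create(
            file=file_content,
            purpose="vision",
        )
    return result.id
```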
```python
from openai import OpenAI
import base64

client = OpenAI()

prompt = """Generate a photorealistic image of a gift basket on a white background
labeled 'Relax & Unwind' with a ribbon and handwriting-like font,
containing all the items in the reference pictures."""

# Prepare the reference images (encode_image / create_file helpers shown above)
base64_image1 = encode_image("body-lotion.png")
base64_image2 = encode_image("soap.png")
file_id1 = create_file("body-lotion.png")
file_id2 = create_file("incense-kit.png")

response = client.responses.create(
    model="gpt-4.1",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {
                    "type": "input_image",
                    "image_url": f"data:image/jpeg;base64,{base64_image1}",
                },
                {
                    "type": "input_image",
                    "image_url": f"data:image/jpeg;base64,{base64_image2}",
                },
                {
                    "type": "input_image",
                    "file_id": file_id1,
                },
                {
                    "type": "input_image",
                    "file_id": file_id2,
                },
            ],
        }
    ],
    tools=[{"type": "image_generation"}],
)

image_generation_calls = [
    output
    for output in response.output
    if output.type == "image_generation_call"
]

image_data = [output.result for output in image_generation_calls]

if image_data:
    image_base64 = image_data[0]
    with open("gift-basket.png", "wb") as f:
        f.write(base64.b64decode(image_base64))
else:
    print(response.output_text)
```
```python
import base64
from openai import OpenAI

client = OpenAI()

prompt = """
Generate a photorealistic image of a gift basket on a white background
labeled 'Relax & Unwind' with a ribbon and handwriting-like font,
containing all the items in the reference pictures.
"""

result = client.images.edit(
    model="gpt-image-1",
    image=[
        open("body-lotion.png", "rb"),
        open("bath-bomb.png", "rb"),
        open("incense-kit.png", "rb"),
        open("soap.png", "rb"),
    ],
    prompt=prompt
)

image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)

# Save the image to a file
with open("gift-basket.png", "wb") as f:
    f.write(image_bytes)
```

Edit an image using a mask (inpainting)
You can provide a mask to indicate which part of the image should be edited.
When using a mask with GPT Image, additional instructions are sent to the model to help guide the editing process accordingly.
Unlike with DALL·E 2, masking with GPT Image is entirely prompt-based. This means the model uses the mask as guidance, but may not follow its exact shape with complete precision.
If you provide multiple input images, the mask will be applied to the first image.
```python
from openai import OpenAI
import base64

client = OpenAI()

# Upload the image and mask with the Files API (create_file helper shown earlier)
fileId = create_file("sunlit_lounge.png")
maskId = create_file("mask.png")

response = client.responses.create(
    model="gpt-4o",
    input=[
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "generate an image of the same sunlit indoor lounge area with a pool but the pool should contain a flamingo",
                },
                {
                    "type": "input_image",
                    "file_id": fileId,
                },
            ],
        },
    ],
    tools=[
        {
            "type": "image_generation",
            "quality": "high",
            "input_image_mask": {
                "file_id": maskId,
            },
        },
    ],
)

image_data = [
    output.result
    for output in response.output
    if output.type == "image_generation_call"
]

if image_data:
    image_base64 = image_data[0]
    with open("lounge.png", "wb") as f:
        f.write(base64.b64decode(image_base64))
```
```python
from openai import OpenAI
import base64

client = OpenAI()

result = client.images.edit(
    model="gpt-image-1",
    image=open("sunlit_lounge.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="A sunlit indoor lounge area with a pool containing a flamingo"
)

image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)

# Save the image to a file
with open("composition.png", "wb") as f:
    f.write(image_bytes)
```

| Image | Mask | Output |
|---|---|---|
| (image) | (image) | (image) |
Prompt: a sunlit indoor lounge area with a pool containing a flamingo
Mask requirements
The image to edit and mask must be of the same format and size (less than 50MB in size).
The mask image must also contain an alpha channel. If you’re using an image editing tool to create the mask, make sure to save the mask with an alpha channel.
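If your mask exists only as a black-and-white image, you can convert it to a PNG with an alpha channel before uploading. A minimal sketch using Pillow, assuming white marks the region to replace (the file names are placeholders):

```python
from PIL import Image

# Convert a black-and-white mask into an RGBA PNG with an alpha channel.
# Assumption: white pixels mark the region to edit; they become fully
# transparent, everything else stays opaque. The mask must match the
# format and dimensions of the image you want to edit.
mask = Image.open("mask_bw.png").convert("L")
rgba = Image.new("RGBA", mask.size)
rgba.putdata([
    (0, 0, 0, 0) if value > 127 else (0, 0, 0, 255)
    for value in mask.getdata()
])
rgba.save("mask.png")
```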
The Image Edits endpoint lets you edit parts of an image by uploading an image and mask indicating which areas should be replaced. This process is also known as inpainting.
You can provide a mask to indicate where the image should be edited. The transparent areas of the mask will be replaced, while the filled areas will be left unchanged.
You should use the prompt to describe the full new image, not just the erased area.
```python
from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    model="dall-e-2",
    image=open("sunlit_lounge.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="A sunlit indoor lounge area with a pool containing a flamingo",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)
```

| Image | Mask | Output |
|---|---|---|
| (image) | (image) | (image) |
Prompt: a sunlit indoor lounge area with a pool containing a flamingo
Mask requirements
The mask must be a square PNG image and less than 4MB in size.
The mask image must also contain an alpha channel. If you’re using an image editing tool to create the mask, make sure to save the mask with an alpha channel.
The Image Edits endpoint is not available for DALL·E 3. If you would like to edit images, we recommend using our newest model, GPT Image.
Image Variations
Available for DALL·E 2 only, the image variations endpoint allows you to generate a variation of a given image.
```python
from openai import OpenAI

client = OpenAI()

result = client.images.create_variation(
    model="dall-e-2",
    image=open("corgi_and_cat_paw.png", "rb"),
    n=1,
    size="1024x1024"
)

print(result.data[0].url)
```

| Image | Output |
|---|---|
| (image) | (image) |
Similar to the edits endpoint, the input image must be a square PNG image less than 4MB in size.
Input fidelity
GPT Image models (gpt-image-1.5, gpt-image-1, and gpt-image-1-mini) support high input fidelity, which allows you to better preserve details from the input images in the output.
This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image.
You can provide multiple input images, and all of them will be preserved with high fidelity. Keep in mind that with gpt-image-1 or gpt-image-1-mini, the first image is preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image.
If you are using gpt-image-1.5, the first 5 input images will be preserved with higher fidelity.
To enable high input fidelity, set the input_fidelity parameter to high. The default value is low.
```python
from openai import OpenAI
import base64

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "Add the logo to the woman's top, as if stamped into the fabric."},
                {
                    "type": "input_image",
                    "image_url": "https://cdn.openai.com/API/docs/images/woman_futuristic.jpg",
                },
                {
                    "type": "input_image",
                    "image_url": "https://cdn.openai.com/API/docs/images/brain_logo.png",
                },
            ],
        }
    ],
    tools=[{"type": "image_generation", "input_fidelity": "high", "action": "edit"}],
)

# Extract the edited image
image_data = [
    output.result
    for output in response.output
    if output.type == "image_generation_call"
]

if image_data:
    image_base64 = image_data[0]
    with open("woman_with_logo.png", "wb") as f:
        f.write(base64.b64decode(image_base64))
```
```python
from openai import OpenAI
import base64

client = OpenAI()

result = client.images.edit(
    model="gpt-image-1",
    image=[open("woman.jpg", "rb"), open("logo.png", "rb")],
    prompt="Add the logo to the woman's top, as if stamped into the fabric.",
    input_fidelity="high"
)

image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)

# Save the image to a file
with open("woman_with_logo.png", "wb") as f:
    f.write(image_bytes)
```

| Input 1 | Input 2 | Output |
|---|---|---|
| (image) | (image) | (image) |
Prompt: Add the logo to the woman’s top, as if stamped into the fabric.
Keep in mind that when using high input fidelity, more image input tokens will be used per request. To understand the cost implications, refer to our vision costs section.
Customize Image Output
You can configure the following output options:
- Size: Image dimensions (e.g., 1024x1024, 1024x1536)
- Quality: Rendering quality (e.g. low, medium, high)
- Format: File output format
- Compression: Compression level (0-100%) for JPEG and WebP formats
- Background: Transparent or opaque

size, quality, and background support the auto option, where the model will automatically select the best option based on the prompt.
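For example, with the Image API you can set these options directly on the request. A minimal sketch (the prompt and values are illustrative):

```python
from openai import OpenAI
import base64

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1",
    prompt="A minimalist poster of a mountain range at sunrise",
    size="1024x1536",       # portrait
    quality="high",
    output_format="jpeg",
    output_compression=80,  # 0-100, only for jpeg and webp
    background="opaque",
)

with open("poster.jpg", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```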
For DALL·E 2:
- Size: Image dimensions (e.g., 256x256, 1024x1024)
- Quality: Rendering quality (standard only)
- Format: url (default), b64_json

For DALL·E 3:
- Size: Image dimensions (e.g., 1024x1024, 1792x1024)
- Quality: Rendering quality (standard or hd)
- Format: url (default), b64_json
Size and quality options
Square images with standard quality are the fastest to generate. The default size is 1024x1024 pixels.
GPT Image:
| Available sizes | - 1024x1024 (square) - 1536x1024 (landscape) - 1024x1536 (portrait) - auto (default) |
| Quality options | - low - medium - high - auto (default) |

DALL·E 2:
| Available sizes | - 256x256 - 512x512 - 1024x1024 (default) |
| Quality options | - standard (default) |

DALL·E 3:
| Available sizes | - 1024x1024 (default) - 1792x1024 - 1024x1792 |
| Quality options | - standard (default) - hd |
Output format
The Image API returns base64-encoded image data.
The default format is png, but you can also request jpeg or webp.
If using jpeg or webp, you can also specify the output_compression parameter to control the compression level (0-100%). For example, output_compression=50 will compress the image by 50%.
Using jpeg is faster than png, so you should prioritize this format if latency is a concern.
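If you are generating images through the Responses API instead, the image generation tool can be configured similarly. A sketch, assuming the tool accepts the same output_format and output_compression options as the Image API:

```python
from openai import OpenAI
import base64

client = OpenAI()

# Sketch: request a compressed JPEG from the image generation tool.
response = client.responses.create(
    model="gpt-5",
    input="Generate an image of a red panda napping on a tree branch",
    tools=[
        {
            "type": "image_generation",
            "output_format": "jpeg",
            "output_compression": 50,  # 0-100, only for jpeg and webp
        }
    ],
)

image_data = [
    output.result
    for output in response.output
    if output.type == "image_generation_call"
]

if image_data:
    with open("red_panda.jpg", "wb") as f:
        f.write(base64.b64decode(image_data[0]))
```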
The default Image API output when using DALL·E 2 is a url pointing to the hosted image.
You can also request the response_format as b64_json for a base64-encoded image.
The default Image API output when using DALL·E 3 is a url pointing to the hosted image.
You can also request the response_format as b64_json for a base64-encoded image.
Transparency
GPT Image models (gpt-image-1.5, gpt-image-1, and gpt-image-1-mini) support transparent backgrounds.
To enable transparency, set the background parameter to transparent.
It is only supported with the png and webp output formats.
Transparency works best when setting the quality to medium or high.
```python
from openai import OpenAI
import base64

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Draw a 2D pixel art style sprite sheet of a tabby gray cat",
    tools=[
        {
            "type": "image_generation",
            "background": "transparent",
            "quality": "high",
        }
    ],
)

image_data = [
    output.result
    for output in response.output
    if output.type == "image_generation_call"
]

if image_data:
    image_base64 = image_data[0]
    with open("sprite.png", "wb") as f:
        f.write(base64.b64decode(image_base64))
```
```python
from openai import OpenAI
import base64

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1",
    prompt="Draw a 2D pixel art style sprite sheet of a tabby gray cat",
    size="1024x1024",
    background="transparent",
    quality="high",
)

image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)

# Save the image to a file
with open("sprite.png", "wb") as f:
    f.write(image_bytes)
```

Limitations
GPT Image models (gpt-image-1.5, gpt-image-1, and gpt-image-1-mini) are powerful and versatile image generation models, but they still have some limitations to be aware of:
- Latency: Complex prompts may take up to 2 minutes to process.
- Text Rendering: Although significantly improved over the DALL·E series, the model can still struggle with precise text placement and clarity.
- Consistency: While capable of producing consistent imagery, the model may occasionally struggle to maintain visual consistency for recurring characters or brand elements across multiple generations.
- Composition Control: Despite improved instruction following, the model may have difficulty placing elements precisely in structured or layout-sensitive compositions.
Content Moderation
All prompts and generated images are filtered in accordance with our content policy.
For image generation using GPT Image models (gpt-image-1.5, gpt-image-1, and gpt-image-1-mini), you can control moderation strictness with the moderation parameter. This parameter supports two values:
- auto (default): Standard filtering that seeks to limit creating certain categories of potentially age-inappropriate content.
- low: Less restrictive filtering.
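For example, with the Image API (a minimal sketch; the moderation parameter applies to GPT Image models only, and the prompt is illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Request less restrictive content filtering for this generation.
result = client.images.generate(
    model="gpt-image-1",
    prompt="A dramatic oil painting of a stormy sea battle",
    moderation="low",
)
```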
Supported models
When using image generation in the Responses API, most modern models starting with gpt-4o and newer should support the image generation tool. Check the model detail page for your model to confirm if your desired model can use the image generation tool.
DALL·E 2 is our first image generation model and therefore has significant limitations:
- Text Rendering: The model struggles with rendering legible text.
- Instruction Following: The model has trouble following instructions.
- Realism: The model is not able to generate realistic images.
For a better experience, we recommend using GPT Image for image generation.
DALL·E 3 is an improvement over DALL·E 2 but still has some limitations:
- Text Rendering: The model struggles with rendering legible text.
- Instruction Following: The model has trouble following precise instructions.
- Photorealism: The model is not able to generate highly photorealistic images.
For a better experience, we recommend using GPT Image for image generation.
Cost and latency
This model generates images by first producing specialized image tokens. Both latency and eventual cost are proportional to the number of tokens required to render an image—larger image sizes and higher quality settings result in more tokens.
The number of tokens generated depends on image dimensions and quality:
| Quality | Square (1024×1024) | Portrait (1024×1536) | Landscape (1536×1024) |
|---|---|---|---|
| Low | 272 tokens | 408 tokens | 400 tokens |
| Medium | 1056 tokens | 1584 tokens | 1568 tokens |
| High | 4160 tokens | 6240 tokens | 6208 tokens |
Note that you will also need to account for input tokens: text tokens for the prompt and image tokens for the input images if editing images. If you are using high input fidelity, the number of input tokens will be higher.
Refer to our pricing page for more information about price per text and image tokens.
So the final cost is the sum of:
- input text tokens
- input image tokens if using the edits endpoint
- image output tokens
Partial images cost
If you want to stream image generation using the partial_images parameter, each partial image will incur an additional 100 image output tokens.
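As a rough worked example of how the pieces add up (the per-token prices below are hypothetical placeholders; always check the pricing page for current rates):

```python
# Token count for one high-quality square (1024x1024) image, from the table above.
output_image_tokens = 4160

# Streaming with partial_images=2 adds 100 output tokens per partial image.
partial_image_tokens = 2 * 100

total_output_tokens = output_image_tokens + partial_image_tokens  # 4360

# Hypothetical placeholder prices (USD per token); substitute real rates from the pricing page.
price_per_output_image_token = 0.00004
price_per_input_text_token = 0.000005
input_text_tokens = 50  # e.g. a short prompt

estimated_cost = (
    total_output_tokens * price_per_output_image_token
    + input_text_tokens * price_per_input_text_token
)
print(f"Estimated cost: ${estimated_cost:.4f}")
```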
Cost for DALL·E 2 is fixed and can be calculated per generated image, depending on the size.
You can find the pricing details on the pricing page.
Cost for DALL·E 3 is fixed and can be calculated per generated image, depending on the size and image quality.
You can find the pricing details on the pricing page.



































