
Get a model response

client.responses.retrieve(responseID: string, query?: ResponseRetrieveParams, options?: RequestOptions): Response { id, created_at, error, 29 more } | Stream<ResponseStreamEvent>
GET /responses/{response_id}

Retrieves a model response with the given ID.
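Retrieval is a single call in the TypeScript SDK. The sketch below assumes only the fields documented on this page (id, created_at, error); getResponse and the structural ResponsesClient type are illustrative names, not part of the SDK.

```typescript
// Minimal structural view of the client, matching the signature above.
type ResponsesClient = {
  responses: {
    retrieve(responseID: string): Promise<{
      id: string;
      created_at: number;
      error: { code: string; message: string } | null;
    }>;
  };
};

// Retrieve a response and surface API-level errors as exceptions.
async function getResponse(client: ResponsesClient, responseID: string) {
  const response = await client.responses.retrieve(responseID);
  if (response.error !== null) {
    throw new Error(`${response.error.code}: ${response.error.message}`);
  }
  return response;
}

// With the real SDK this would be:
//   import OpenAI from "openai";
//   const response = await getResponse(new OpenAI(), "resp_abc123");
```

Checking the error field on retrieval keeps failed generations from being treated as normal responses.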

Parameters
responseID: string
ResponseRetrieveParams = ResponseRetrieveParamsNonStreaming { stream } | ResponseRetrieveParamsStreaming { stream }
ResponseRetrieveParamsBase { include, include_obfuscation, starting_after, stream }
include?: Array<ResponseIncludable>

Additional fields to include in the response. See the include parameter for Response creation above for more information.

Accepts one of the following:
"file_search_call.results"
"web_search_call.results"
"web_search_call.action.sources"
"message.input_image.image_url"
"computer_call_output.output.image_url"
"code_interpreter_call.outputs"
"reasoning.encrypted_content"
"message.output_text.logprobs"
include_obfuscation?: boolean

When true, stream obfuscation will be enabled. Stream obfuscation adds random characters to an obfuscation field on streaming delta events to normalize payload sizes as a mitigation to certain side-channel attacks. These obfuscation fields are included by default, but add a small amount of overhead to the data stream. You can set include_obfuscation to false to optimize for bandwidth if you trust the network links between your application and the OpenAI API.

starting_after?: number

The sequence number of the event after which to start streaming.

stream?: false

If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below for more information.

ResponseRetrieveParamsNonStreaming extends ResponseRetrieveParamsBase { include, include_obfuscation, starting_after, stream } { stream }
stream?: false

If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below for more information.

ResponseRetrieveParamsStreaming extends ResponseRetrieveParamsBase { include, include_obfuscation, starting_after, stream } { stream }
stream: true

If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below for more information.
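The streaming parameters combine naturally when resuming a dropped stream. The call shape below follows the parameters documented above; lastSeen is a hypothetical counter your consumer keeps, and the terminal event names are assumptions about the streaming event set, not guaranteed by this page.

```typescript
// Sketch of resuming a streamed retrieval (the real call requires the
// openai package and a live connection, so it is left as a comment):
//
//   const stream = await client.responses.retrieve("resp_abc123", {
//     stream: true,
//     starting_after: lastSeen,    // resume after this sequence number
//     include_obfuscation: false,  // trusted network link: save bandwidth
//   });
//   for await (const event of stream) {
//     if (isTerminalEvent(event.type)) break;
//   }

// A small predicate keeps the consumer loop readable. The event type
// strings are assumed names for the stream's terminal events.
function isTerminalEvent(type: string): boolean {
  return type === "response.completed" || type === "response.failed";
}
```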

Returns
Response { id, created_at, error, 29 more }
id: string

Unique identifier for this Response.

created_at: number

Unix timestamp (in seconds) of when this Response was created.

error: ResponseError { code, message } | null

An error object returned when the model fails to generate a Response.

code: "server_error" | "rate_limit_exceeded" | "invalid_prompt" | 15 more

The error code for the response.

Accepts one of the following:
"server_error"
"rate_limit_exceeded"
"invalid_prompt"
"vector_store_timeout"
"invalid_image"
"invalid_image_format"
"invalid_base64_image"
"invalid_image_url"
"image_too_large"
"image_too_small"
"image_parse_error"
"image_content_policy_violation"
"invalid_image_mode"
"image_file_too_large"
"unsupported_image_media_type"
"empty_image_file"
"failed_to_download_image"
"image_file_not_found"
message: string

A human-readable description of the error.

incomplete_details: IncompleteDetails | null

Details about why the response is incomplete.

reason?: "max_output_tokens" | "content_filter"

The reason why the response is incomplete.

Accepts one of the following:
"max_output_tokens"
"content_filter"
instructions: string | Array<ResponseInputItem> | null

A system (or developer) message inserted into the model's context.

When using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.

Accepts one of the following:
string
EasyInputMessage { content, role, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: string | ResponseInputMessageContentList

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
string
ResponseInputMessageContentList = Array<ResponseInputContent>

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "assistant" | "system" | "developer"

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type?: "message"

The type of the message input. Always message.

Message { content, role, status, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role.

content: ResponseInputMessageContentList = Array<ResponseInputContent>

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "system" | "developer"

The role of the message input. One of user, system, or developer.

Accepts one of the following:
"user"
"system"
"developer"
status?: "in_progress" | "completed" | "incomplete"

The status of item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type?: "message"

The type of the message input. Always set to message.

ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>
token: string
bytes: Array<number>
logprob: number
top_logprobs: Array<TopLogprob>
token: string
bytes: Array<number>
logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
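The documented limits on attributes (at most 16 key-value pairs, keys up to 64 characters, string values up to 512 characters) can be checked client-side before writing metadata. validateAttributes is an illustrative helper, not an SDK function.

```typescript
// Return a list of problems; an empty list means the attributes satisfy
// the documented constraints (16 pairs max, keys <= 64 chars, string
// values <= 512 chars; numbers and booleans are unconstrained).
function validateAttributes(
  attrs: Record<string, string | number | boolean>,
): string[] {
  const problems: string[] = [];
  const entries = Object.entries(attrs);
  if (entries.length > 16) {
    problems.push(`too many pairs: ${entries.length} (maximum 16)`);
  }
  for (const [key, value] of entries) {
    if (key.length > 64) problems.push(`key longer than 64 chars: ${key}`);
    if (typeof value === "string" && value.length > 512) {
      problems.push(`string value longer than 512 chars at key: ${key}`);
    }
  }
  return problems;
}
```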
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

An action to be performed by the computer use tool, such as a click, drag, or keypress.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.

ComputerCallOutput { call_id, output, type, 3 more }

The output of a computer tool call.

call_id: string

The ID of the computer tool call that produced the output.

maxLength: 64
minLength: 1
output: ResponseComputerToolCallOutputScreenshot { type, file_id, image_url }

A computer screenshot image used with the computer use tool.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

file_id?: string

The identifier of an uploaded file that contains the screenshot.

image_url?: string

The URL of the screenshot image.

type: "computer_call_output"

The type of the computer tool call output. Always computer_call_output.

id?: string | null

The ID of the computer tool call output.

acknowledged_safety_checks?: Array<AcknowledgedSafetyCheck> | null

The safety checks reported by the API that have been acknowledged by the developer.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the computer call output. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
FunctionCallOutput { call_id, output, type, 2 more }

The output of a function tool call.

call_id: string

The unique ID of the function tool call generated by the model.

maxLength: 64
minLength: 1
output: string | ResponseFunctionCallOutputItemList

Text, image, or file output of the function tool call.

Accepts one of the following:
string
ResponseFunctionCallOutputItemList = Array<ResponseFunctionCallOutputItem>

An array of content outputs (text, image, file) for the function tool call.

Accepts one of the following:
ResponseInputTextContent { text, type }

A text input to the model.

text: string

The text input to the model.

maxLength: 10485760
type: "input_text"

The type of the input item. Always input_text.

ResponseInputImageContent { type, detail, file_id, image_url }

An image input to the model. Learn about image inputs.

type: "input_image"

The type of the input item. Always input_image.

detail?: "low" | "high" | "auto" | null

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

maxLength: 20971520
ResponseInputFileContent { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string | null

The base64-encoded data of the file to be sent to the model.

maxLength: 33554432
file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string | null

The URL of the file to be sent to the model.

filename?: string | null

The name of the file to be sent to the model.

type: "function_call_output"

The type of the function tool call output. Always function_call_output.

id?: string | null

The unique ID of the function tool call output. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItemParam { encrypted_content, type, id }

A compaction item generated by the v1/responses/compact API.

encrypted_content: string

The encrypted content of the compaction summary.

maxLength: 10485760
type: "compaction"

The type of the item. Always compaction.

id?: string | null

The ID of the compaction item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
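Since outputs is a nullable array mixing logs and images, consumers usually flatten the log entries into one string. A sketch over the two output shapes documented above; collectLogs is an illustrative helper.

```typescript
type CodeInterpreterOutput =
  | { type: "logs"; logs: string }
  | { type: "image"; url: string };

// Concatenate all log outputs; outputs can be null when none are
// available, per the field description above.
function collectLogs(outputs: CodeInterpreterOutput[] | null): string {
  if (outputs === null) return "";
  return outputs
    .filter((o): o is { type: "logs"; logs: string } => o.type === "logs")
    .map((o) => o.logs)
    .join("\n");
}
```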

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

LocalShellCallOutput { id, output, type, status }

The output of a local shell tool call.

id: string

The unique ID of the local shell tool call generated by the model.

output: string

A JSON string of the output of the local shell tool call.

type: "local_shell_call_output"

The type of the local shell tool call output. Always local_shell_call_output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCall { action, call_id, type, 2 more }

A tool representing a request to execute one or more shell commands.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

Ordered shell commands for the execution environment to run.

max_output_length?: number | null

Maximum number of UTF-8 characters to capture from combined stdout and stderr output.

timeout_ms?: number | null

Maximum wall-clock time in milliseconds to allow the shell commands to run.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
type: "shell_call"

The type of the item. Always shell_call.

id?: string | null

The unique ID of the shell tool call. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCallOutput { call_id, output, type, 3 more }

The streamed output items emitted by a shell tool call.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
output: Array<ResponseFunctionShellCallOutputContent { outcome, stderr, stdout } >

Captured chunks of stdout and stderr output, along with their associated outcomes.

outcome: Timeout { type } | Exit { exit_code, type }

The exit or timeout outcome associated with this shell call.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

The exit code returned by the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

Captured stderr output for the shell call.

maxLength: 10485760
stdout: string

Captured stdout output for the shell call.

maxLength: 10485760
type: "shell_call_output"

The type of the item. Always shell_call_output.

id?: string | null

The unique ID of the shell tool call output. Populated when this item is returned via API.

max_output_length?: number | null

The maximum number of UTF-8 characters captured for this shell call's combined output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call output.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ApplyPatchCall { call_id, operation, status, 2 more }

A tool call representing a request to create, delete, or update files using diff patches.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

The specific create, delete, or update instruction for the apply_patch tool call.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction for creating a new file via the apply_patch tool.

diff: string

Unified diff content to apply when creating the file.

maxLength: 10485760
path: string

Path of the file to create relative to the workspace root.

minLength: 1
type: "create_file"

The operation type. Always create_file.

DeleteFile { path, type }

Instruction for deleting an existing file via the apply_patch tool.

path: string

Path of the file to delete relative to the workspace root.

minLength: 1
type: "delete_file"

The operation type. Always delete_file.

UpdateFile { diff, path, type }

Instruction for updating an existing file via the apply_patch tool.

diff: string

Unified diff content to apply to the existing file.

maxLength: 10485760
path: string

Path of the file to update relative to the workspace root.

minLength: 1
type: "update_file"

The operation type. Always update_file.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

id?: string | null

The unique ID of the apply patch tool call. Populated when this item is returned via API.

ApplyPatchCallOutput { call_id, status, type, 2 more }

The streamed output emitted by an apply patch tool call.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

id?: string | null

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

output?: string | null

Optional human-readable log text from the apply patch tool (e.g., patch results or errors).

maxLength: 10485760
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

McpApprovalResponse { approval_request_id, approve, type, 2 more }

A response to an MCP approval request.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

id?: string | null

The unique ID of the approval response.

reason?: string | null

Optional reason for the decision.

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
ResponseCustomToolCallOutput { call_id, output, type, id }

The output of a custom tool call from your code, being sent back to the model.

call_id: string

The call ID, used to map this custom tool call output to a custom tool call.

output: string | Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >

The output from the custom tool call generated by your code. Can be a string or a list of output content.

Accepts one of the following:
string
Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

type: "custom_tool_call_output"

The type of the custom tool call output. Always custom_tool_call_output.

id?: string

The unique ID of the custom tool call output in the OpenAI platform.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

ItemReference { id, type }

An internal identifier for an item to reference.

id: string

The ID of the item to reference.

type?: "item_reference" | null

The type of item to reference. Always item_reference.

metadata: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
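The limits above (at most 16 pairs, 64-character keys, 512-character values) can be checked client-side before sending a request. A hypothetical validation helper, not part of the SDK:

```typescript
// Check a metadata object against the documented limits:
// at most 16 pairs, keys <= 64 chars, values <= 512 chars.
function isValidMetadata(metadata: Record<string, string>): boolean {
  const entries = Object.entries(metadata);
  if (entries.length > 16) return false;
  return entries.every(([key, value]) => key.length <= 64 && value.length <= 512);
}

console.log(isValidMetadata({ user_id: "u_42" })); // prints: true
console.log(isValidMetadata({ ["k".repeat(65)]: "v" })); // prints: false
```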

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

Accepts one of the following:
(string & {})
ChatModel = "gpt-5.2" | "gpt-5.2-2025-12-11" | "gpt-5.2-chat-latest" | 69 more
Accepts one of the following:
"gpt-5.2"
"gpt-5.2-2025-12-11"
"gpt-5.2-chat-latest"
"gpt-5.2-pro"
"gpt-5.2-pro-2025-12-11"
"gpt-5.1"
"gpt-5.1-2025-11-13"
"gpt-5.1-codex"
"gpt-5.1-mini"
"gpt-5.1-chat-latest"
"gpt-5"
"gpt-5-mini"
"gpt-5-nano"
"gpt-5-2025-08-07"
"gpt-5-mini-2025-08-07"
"gpt-5-nano-2025-08-07"
"gpt-5-chat-latest"
"gpt-4.1"
"gpt-4.1-mini"
"gpt-4.1-nano"
"gpt-4.1-2025-04-14"
"gpt-4.1-mini-2025-04-14"
"gpt-4.1-nano-2025-04-14"
"o4-mini"
"o4-mini-2025-04-16"
"o3"
"o3-2025-04-16"
"o3-mini"
"o3-mini-2025-01-31"
"o1"
"o1-2024-12-17"
"o1-preview"
"o1-preview-2024-09-12"
"o1-mini"
"o1-mini-2024-09-12"
"gpt-4o"
"gpt-4o-2024-11-20"
"gpt-4o-2024-08-06"
"gpt-4o-2024-05-13"
"gpt-4o-audio-preview"
"gpt-4o-audio-preview-2024-10-01"
"gpt-4o-audio-preview-2024-12-17"
"gpt-4o-audio-preview-2025-06-03"
"gpt-4o-mini-audio-preview"
"gpt-4o-mini-audio-preview-2024-12-17"
"gpt-4o-search-preview"
"gpt-4o-mini-search-preview"
"gpt-4o-search-preview-2025-03-11"
"gpt-4o-mini-search-preview-2025-03-11"
"chatgpt-4o-latest"
"codex-mini-latest"
"gpt-4o-mini"
"gpt-4o-mini-2024-07-18"
"gpt-4-turbo"
"gpt-4-turbo-2024-04-09"
"gpt-4-0125-preview"
"gpt-4-turbo-preview"
"gpt-4-1106-preview"
"gpt-4-vision-preview"
"gpt-4"
"gpt-4-0314"
"gpt-4-0613"
"gpt-4-32k"
"gpt-4-32k-0314"
"gpt-4-32k-0613"
"gpt-3.5-turbo"
"gpt-3.5-turbo-16k"
"gpt-3.5-turbo-0301"
"gpt-3.5-turbo-0613"
"gpt-3.5-turbo-1106"
"gpt-3.5-turbo-0125"
"gpt-3.5-turbo-16k-0613"
"o1-pro" | "o1-pro-2025-03-19" | "o3-pro" | 11 more
"o1-pro"
"o1-pro-2025-03-19"
"o3-pro"
"o3-pro-2025-06-10"
"o3-deep-research"
"o3-deep-research-2025-06-26"
"o4-mini-deep-research"
"o4-mini-deep-research-2025-06-26"
"computer-use-preview"
"computer-use-preview-2025-03-11"
"gpt-5-codex"
"gpt-5-pro"
"gpt-5-pro-2025-10-06"
"gpt-5.1-codex-max"
object: "response"

The object type of this resource - always set to response.

output: Array<ResponseOutputItem>

An array of content items generated by the model.

  • The length and order of items in the output array is dependent on the model's response.
  • Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
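As the bullets above note, the first output item is not guaranteed to be the assistant message. A sketch of collecting text without that assumption, with item shapes trimmed to only the fields used here:

```typescript
// Collect all output_text content from a response's output array,
// rather than assuming output[0] is the assistant message.
// Item shape trimmed to the fields this helper reads.
type OutputItem = {
  type: string;
  content?: Array<{ type: string; text?: string }>;
};

function collectOutputText(output: OutputItem[]): string {
  let text = "";
  for (const item of output) {
    if (item.type !== "message") continue; // skip reasoning, tool calls, etc.
    for (const part of item.content ?? []) {
      if (part.type === "output_text" && part.text) text += part.text;
    }
  }
  return text;
}

const sample: OutputItem[] = [
  { type: "reasoning" },
  { type: "message", content: [{ type: "output_text", text: "Hi!" }] },
];
console.log(collectOutputText(sample)); // prints: Hi!
```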
Accepts one of the following:
ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>
token: string
bytes: Array<number>
logprob: number
top_logprobs: Array<TopLogprob>
token: string
bytes: Array<number>
logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to perform.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.

ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItem { id, encrypted_content, type, created_by }

A compaction item generated by the v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by?: string

The identifier of the actor that created the item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

ResponseFunctionShellToolCall { id, action, call_id, 3 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>
max_output_length: number | null

Optional maximum number of characters to return from each command.

timeout_ms: number | null

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseFunctionShellToolCallOutput { id, call_id, max_output_length, 4 more }

The output of a shell tool call that was emitted.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number | null

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: Array<Output>

An array of shell call output contents.

outcome: Timeout { type } | Exit { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by?: string

The identifier of the actor that created the item.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by?: string

The identifier of the actor that created the item.

ResponseApplyPatchToolCall { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseApplyPatchToolCallOutput { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by?: string

The ID of the entity that created this tool call output.

output?: string | null

Optional textual output returned by the apply patch tool.

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

parallel_tool_calls: boolean

Whether to allow the model to run tool calls in parallel.

temperature: number | null

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

minimum0
maximum2
tool_choice: ToolChoiceOptions | ToolChoiceAllowed { mode, tools, type } | ToolChoiceTypes { type } | 5 more

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.

Accepts one of the following:
ToolChoiceOptions = "none" | "auto" | "required"

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

Accepts one of the following:
"none"
"auto"
"required"
ToolChoiceAllowed { mode, tools, type }

Constrains the tools available to the model to a pre-defined set.

mode: "auto" | "required"

Constrains the tools available to the model to a pre-defined set.

auto allows the model to pick from among the allowed tools and generate a message.

required requires the model to call one or more of the allowed tools.

Accepts one of the following:
"auto"
"required"
tools: Array<Record<string, unknown>>

A list of tool definitions that the model should be allowed to call.

For the Responses API, the list of tool definitions might look like:

[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]
type: "allowed_tools"

Allowed tool configuration type. Always allowed_tools.
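Putting the fields above together, a `tool_choice` value that restricts the model to a pre-defined set might look like the following sketch; the tool names and server label are placeholders:

```typescript
// An allowed_tools tool_choice restricting the model to two tools.
// "get_weather" and "deepwiki" are placeholder names.
const toolChoice = {
  type: "allowed_tools" as const,
  mode: "auto" as const, // use "required" to force the model to call one
  tools: [
    { type: "function", name: "get_weather" },
    { type: "mcp", server_label: "deepwiki" },
  ],
};

console.log(toolChoice.tools.length); // prints: 2
```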

ToolChoiceTypes { type }

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.

type: "file_search" | "web_search_preview" | "computer_use_preview" | 3 more

The type of hosted tool the model should use. Learn more about built-in tools.

Allowed values are:

  • file_search
  • web_search_preview
  • computer_use_preview
  • code_interpreter
  • image_generation
Accepts one of the following:
"file_search"
"web_search_preview"
"computer_use_preview"
"web_search_preview_2025_03_11"
"image_generation"
"code_interpreter"
ToolChoiceFunction { name, type }

Use this option to force the model to call a specific function.

name: string

The name of the function to call.

type: "function"

For function calling, the type is always function.

ToolChoiceMcp { server_label, type, name }

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: string

The label of the MCP server to use.

type: "mcp"

For MCP tools, the type is always mcp.

name?: string | null

The name of the tool to call on the server.

ToolChoiceCustom { name, type }

Use this option to force the model to call a specific custom tool.

name: string

The name of the custom tool to call.

type: "custom"

For custom tool calling, the type is always custom.

ToolChoiceApplyPatch { type }

Forces the model to call the apply_patch tool when executing a tool call.

type: "apply_patch"

The tool to call. Always apply_patch.

ToolChoiceShell { type }

Forces the model to call the shell tool when a tool call is required.

type: "shell"

The tool to call. Always shell.

tools: Array<Tool>

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

We support the following categories of tools:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
  • MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Accepts one of the following:
FunctionTool { name, parameters, strict, 2 more }

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: string

The name of the function to call.

parameters: Record<string, unknown> | null

A JSON schema object describing the parameters of the function.

strict: boolean | null

Whether to enforce strict parameter validation. Default true.

type: "function"

The type of the function tool. Always function.

description?: string | null

A description of the function. Used by the model to determine whether or not to call the function.
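A sketch of a complete function tool definition using the fields above; the tool name and schema are illustrative. Note that with `strict: true`, the parameters schema must set `additionalProperties: false` and list every property as required:

```typescript
// A function tool definition with a JSON-schema parameters object.
// "get_weather" and its schema are placeholder examples.
const weatherTool = {
  type: "function" as const,
  name: "get_weather",
  description: "Get the current weather for a city.",
  strict: true, // strict mode: schema below must be fully constrained
  parameters: {
    type: "object",
    properties: {
      city: { type: "string", description: "City name, e.g. Paris" },
      unit: { type: "string", enum: ["celsius", "fahrenheit"] },
    },
    required: ["city", "unit"],
    additionalProperties: false,
  },
};

console.log(weatherTool.parameters.required.join(",")); // prints: city,unit
```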

FileSearchTool { type, vector_store_ids, filters, 2 more }

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: "file_search"

The type of the file search tool. Always file_search.

vector_store_ids: Array<string>

The IDs of the vector stores to search.

filters?: ComparisonFilter { key, type, value } | CompoundFilter { filters, type } | null

A filter to apply.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 5 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
"in"
"nin"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
CompoundFilter { filters, type }

Combine multiple filters using and or or.

filters: Array<ComparisonFilter { key, type, value } | unknown>

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 5 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
"in"
"nin"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
unknown
type: "and" | "or"

Type of operation: and or or.

Accepts one of the following:
"and"
"or"
max_num_results?: number

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options?: RankingOptions { hybrid_search, ranker, score_threshold }

Ranking options for search.

ranker?: "auto" | "default-2024-11-15"

The ranker to use for the file search.

Accepts one of the following:
"auto"
"default-2024-11-15"
score_threshold?: number

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.
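
Putting the file_search pieces together, a sketch of a tool config combining a CompoundFilter with ranking options; the vector store ID and attribute keys (`region`, `year`) are hypothetical.

```typescript
// file_search tool: search one vector store, keep at most 10 results that
// match BOTH attribute filters, and drop low-scoring matches.
const fileSearchTool = {
  type: "file_search" as const,
  vector_store_ids: ["vs_example123"],
  max_num_results: 10, // must be between 1 and 50 inclusive
  filters: {
    type: "and" as const, // CompoundFilter combining two ComparisonFilters
    filters: [
      { key: "region", type: "eq" as const, value: "emea" },
      { key: "year", type: "gte" as const, value: 2023 },
    ],
  },
  ranking_options: {
    ranker: "auto" as const,
    score_threshold: 0.5, // closer to 1 returns only the most relevant results
  },
};
```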

ComputerTool { display_height, display_width, environment, type }

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: number

The height of the computer display.

display_width: number

The width of the computer display.

environment: "windows" | "mac" | "linux" | 2 more

The type of computer environment to control.

Accepts one of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: "computer_use_preview"

The type of the computer use tool. Always computer_use_preview.

WebSearchTool { type, filters, search_context_size, user_location }

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: "web_search" | "web_search_2025_08_26"

The type of the web search tool. One of web_search or web_search_2025_08_26.

Accepts one of the following:
"web_search"
"web_search_2025_08_26"
filters?: Filters | null

Filters for the search.

allowed_domains?: Array<string> | null

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The approximate location of the user.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

type?: "approximate"

The type of location approximation. Always approximate.
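
A sketch of a full WebSearchTool config using the fields above; the domain and location values are illustrative.

```typescript
// web_search tool: restrict sources to one domain (subdomains included)
// and bias results toward an approximate user location.
const webSearchTool = {
  type: "web_search" as const,
  search_context_size: "medium" as const, // the default
  filters: {
    allowed_domains: ["pubmed.ncbi.nlm.nih.gov"],
  },
  user_location: {
    type: "approximate" as const,
    city: "San Francisco",
    country: "US", // two-letter ISO country code
    region: "California",
    timezone: "America/Los_Angeles", // IANA timezone
  },
};
```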

Mcp { server_label, type, allowed_tools, 6 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

Accepts one of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

Accepts one of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object to specify which tools always require approval.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object to specify which tools never require approval.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.
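
A sketch of an MCP tool config tying these fields together; the server label, URL, and tool name are hypothetical.

```typescript
// MCP tool: connect to a remote server by URL, restrict the model to
// read-only tools, and skip the approval flow for one trusted tool.
const mcpTool = {
  type: "mcp" as const,
  server_label: "docs_server",
  server_url: "https://example.com/mcp", // one of server_url or connector_id is required
  allowed_tools: { read_only: true }, // only tools annotated with readOnlyHint
  require_approval: {
    never: { tool_names: ["search_docs"] }, // this tool never requires approval
  },
};
```

To use a service connector instead, replace `server_url` with a `connector_id` such as `"connector_googledrive"` and supply an OAuth token via `authorization`.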

CodeInterpreter { container, type }

A tool that runs Python code to help generate a response to a prompt.

container: string | CodeInterpreterToolAuto { type, file_ids, memory_limit }

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

Accepts one of the following:
string
CodeInterpreterToolAuto { type, file_ids, memory_limit }

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: "auto"

Always auto.

file_ids?: Array<string>

An optional list of uploaded files to make available to your code.

memory_limit?: "1g" | "4g" | "16g" | "64g" | null

The memory limit for the code interpreter container.

Accepts one of the following:
"1g"
"4g"
"16g"
"64g"
type: "code_interpreter"

The type of the code interpreter tool. Always code_interpreter.
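
A sketch of a code_interpreter tool using the object form of `container`; the file ID is hypothetical.

```typescript
// code_interpreter tool: auto-managed container, one uploaded file made
// available to the code, with an explicit memory limit.
const codeInterpreterTool = {
  type: "code_interpreter" as const,
  container: {
    type: "auto" as const,
    file_ids: ["file-abc123"], // uploaded files available to the code
    memory_limit: "4g" as const, // one of "1g" | "4g" | "16g" | "64g"
  },
};
```

Passing a plain container ID string (e.g. from a previous response) in place of the object is the other accepted form.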

ImageGeneration { type, action, background, 9 more }

A tool that generates images using the GPT image models.

type: "image_generation"

The type of the image generation tool. Always image_generation.

action?: "generate" | "edit" | "auto"

Whether to generate a new image or edit an existing image. Default: auto.

Accepts one of the following:
"generate"
"edit"
"auto"
background?: "transparent" | "opaque" | "auto"

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

Accepts one of the following:
"transparent"
"opaque"
"auto"
input_fidelity?: "high" | "low" | null

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is supported for gpt-image-1, gpt-image-1.5, and later models; it is not supported for gpt-image-1-mini. Supports high and low. Defaults to low.

Accepts one of the following:
"high"
"low"
input_image_mask?: InputImageMask { file_id, image_url }

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id?: string

File ID for the mask image.

image_url?: string

Base64-encoded mask image.

model?: (string & {}) | "gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
(string & {})
"gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation?: "auto" | "low"

Moderation level for the generated image. Default: auto.

Accepts one of the following:
"auto"
"low"
output_compression?: number

Compression level for the output image. Default: 100.

minimum0
maximum100
output_format?: "png" | "webp" | "jpeg"

The output format of the generated image. One of png, webp, or jpeg. Default: png.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_images?: number

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum0
maximum3
quality?: "low" | "medium" | "high" | "auto"

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size?: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
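
A sketch of an image_generation tool config exercising the output-related fields above; the chosen values are illustrative.

```typescript
// image_generation tool: explicit model, size, and output settings.
const imageTool = {
  type: "image_generation" as const,
  model: "gpt-image-1",
  size: "1024x1024" as const,
  quality: "high" as const,
  output_format: "webp" as const,
  output_compression: 80, // 0-100; default 100
  partial_images: 2, // stream up to 2 partial frames (0-3; 0 is the default)
};
```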
LocalShell { type }

A tool that allows the model to execute shell commands in a local environment.

type: "local_shell"

The type of the local shell tool. Always local_shell.

FunctionShellTool { type }

A tool that allows the model to execute shell commands.

type: "shell"

The type of the shell tool. Always shell.

CustomTool { name, type, description, format }

A custom tool that processes input using a specified format. Learn more about custom tools.

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

description?: string

Optional description of the custom tool, used to provide more context.

format?: Text { type } | Grammar { definition, syntax, type }

The input format for the custom tool. Default is unconstrained text.

Accepts one of the following:
Text { type }

Unconstrained free-form text.

type: "text"

Unconstrained text format. Always text.

Grammar { definition, syntax, type }

A grammar defined by the user.

definition: string

The grammar definition.

syntax: "lark" | "regex"

The syntax of the grammar definition. One of lark or regex.

Accepts one of the following:
"lark"
"regex"
type: "grammar"

Grammar format. Always grammar.
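
A sketch of a custom tool whose input is constrained by a regex grammar; the tool name and pattern are hypothetical.

```typescript
// Custom tool: the model's input to this tool must match the regex,
// here an ISO-style date such as 2025-01-31.
const customTool = {
  type: "custom" as const,
  name: "parse_date",
  description: "Extract a date in YYYY-MM-DD form.",
  format: {
    type: "grammar" as const,
    syntax: "regex" as const, // one of "lark" | "regex"
    definition: String.raw`\d{4}-\d{2}-\d{2}`,
  },
};
```

Omitting `format` (or using `{ type: "text" }`) leaves the tool input as unconstrained free-form text.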

WebSearchPreviewTool { type, search_context_size, user_location }

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: "web_search_preview" | "web_search_preview_2025_03_11"

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

Accepts one of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The user's location.

type: "approximate"

The type of location approximation. Always approximate.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

ApplyPatchTool { type }

Allows the assistant to create, delete, or update files using unified diffs.

type: "apply_patch"

The type of the tool. Always apply_patch.

top_p: number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

minimum0
maximum1
background?: boolean | null

Whether to run the model response in the background. Learn more.

completed_at?: number | null

Unix timestamp (in seconds) of when this Response was completed. Only present when the status is completed.

conversation?: Conversation | null

The conversation that this response belonged to. Input items and output items from this response were automatically added to this conversation.

id: string

The unique ID of the conversation that this response was associated with.

max_output_tokens?: number | null

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls?: number | null

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

previous_response_id?: string | null

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.

prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

id: string

The unique identifier of the prompt template to use.

variables?: Record<string, string | ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } > | null

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

Accepts one of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

version?: string | null

Optional version of the prompt template.
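
A sketch of a prompt template reference with mixed variable types; the template ID, variable names, and file ID are hypothetical.

```typescript
// Reference a stored prompt template and substitute its variables.
// Values can be plain strings or Response input types such as images.
const prompt = {
  id: "pmpt_abc123", // identifier of the stored prompt template
  version: "2", // optional; omit to use the latest version
  variables: {
    customer_name: "Ada", // plain string substitution
    screenshot: {
      type: "input_image" as const, // a Response input type as a variable
      detail: "auto" as const,
      file_id: "file-xyz789",
    },
  },
};
```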

prompt_cache_key?: string

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.

prompt_cache_retention?: "in-memory" | "24h" | null

The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.

Accepts one of the following:
"in-memory"
"24h"
reasoning?: Reasoning { effort, generate_summary, summary } | null

gpt-5 and o-series models only

Configuration options for reasoning models.

effort?: ReasoningEffort | null

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
Deprecatedgenerate_summary?: "auto" | "concise" | "detailed" | null

Deprecated: use summary instead.

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

Accepts one of the following:
"auto"
"concise"
"detailed"
summary?: "auto" | "concise" | "detailed" | null

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

Accepts one of the following:
"auto"
"concise"
"detailed"
safety_identifier?: string

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.

service_tier?: "auto" | "default" | "flex" | 2 more | null

Specifies the processing type used for serving the request.

  • If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
  • If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
  • If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
  • When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.

Accepts one of the following:
"auto"
"default"
"flex"
"scale"
"priority"

status?: "completed" | "failed" | "in_progress" | 3 more

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.

Accepts one of the following:
"completed"
"failed"
"in_progress"
"cancelled"
"queued"
"incomplete"
text?: ResponseTextConfig { format, verbosity }

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:

format?: ResponseFormatText { type } | ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more } | ResponseFormatJSONObject { type }

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
ResponseFormatText { type }

Default response format. Used to generate text responses.

type: "text"

The type of response format being defined. Always text.

ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more }

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

name: string

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Record<string, unknown>

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

type: "json_schema"

The type of response format being defined. Always json_schema.

description?: string

A description of what the response format is for, used by the model to determine how to respond in the format.

strict?: boolean | null

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

ResponseFormatJSONObject { type }

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: "json_object"

The type of response format being defined. Always json_object.
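
A sketch of a `text` config enabling Structured Outputs via json_schema; the format name and schema are illustrative.

```typescript
// text.format with json_schema: the model's output must validate against
// the supplied schema when strict is true.
const textConfig = {
  format: {
    type: "json_schema" as const,
    name: "calendar_event", // a-z, A-Z, 0-9, underscores, dashes; max 64 chars
    strict: true, // enforce exact schema adherence
    schema: {
      type: "object",
      properties: {
        title: { type: "string" },
        date: { type: "string" },
      },
      required: ["title", "date"],
      additionalProperties: false, // required by strict Structured Outputs
    },
  },
  verbosity: "medium" as const,
};
```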

verbosity?: "low" | "medium" | "high" | null

Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.

Accepts one of the following:
"low"
"medium"
"high"
top_logprobs?: number | null

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

minimum0
maximum20
truncation?: "auto" | "disabled" | null

The truncation strategy to use for the model response.

  • auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
  • disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Accepts one of the following:
"auto"
"disabled"
usage?: ResponseUsage { input_tokens, input_tokens_details, output_tokens, 2 more }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

total_tokens: number

The total number of tokens used.
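
A sketch of the accounting implied by this object, assuming (as the field descriptions suggest) that total_tokens is the sum of input and output tokens and that reasoning tokens are counted within output_tokens; the numbers are illustrative.

```typescript
// A usage object with internally consistent token counts:
// 120 input + 300 output = 420 total; reasoning tokens are a subset of output.
const usage = {
  input_tokens: 120,
  input_tokens_details: { cached_tokens: 64 }, // tokens served from the prompt cache
  output_tokens: 300,
  output_tokens_details: { reasoning_tokens: 180 },
  total_tokens: 420,
};

const consistent =
  usage.total_tokens === usage.input_tokens + usage.output_tokens &&
  usage.output_tokens_details.reasoning_tokens <= usage.output_tokens &&
  usage.input_tokens_details.cached_tokens <= usage.input_tokens;
```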

Deprecateduser?: string

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.

ResponseStreamEvent = ResponseAudioDeltaEvent { delta, sequence_number, type } | ResponseAudioDoneEvent { sequence_number, type } | ResponseAudioTranscriptDeltaEvent { delta, sequence_number, type } | 50 more

An event emitted while a model response is streamed to the client.

Accepts one of the following:
ResponseAudioDeltaEvent { delta, sequence_number, type }

Emitted when there is a partial audio response.

delta: string

A chunk of Base64 encoded response audio bytes.

sequence_number: number

A sequence number for this chunk of the stream response.

type: "response.audio.delta"

The type of the event. Always response.audio.delta.

ResponseAudioDoneEvent { sequence_number, type }

Emitted when the audio response is complete.

sequence_number: number

The sequence number of the delta.

type: "response.audio.done"

The type of the event. Always response.audio.done.

ResponseAudioTranscriptDeltaEvent { delta, sequence_number, type }

Emitted when there is a partial transcript of audio.

delta: string

The partial transcript of the audio response.

sequence_number: number

The sequence number of this event.

type: "response.audio.transcript.delta"

The type of the event. Always response.audio.transcript.delta.

ResponseAudioTranscriptDoneEvent { sequence_number, type }

Emitted when the full audio transcript is completed.

sequence_number: number

The sequence number of this event.

type: "response.audio.transcript.done"

The type of the event. Always response.audio.transcript.done.

ResponseCodeInterpreterCallCodeDeltaEvent { delta, item_id, output_index, 2 more }

Emitted when a partial code snippet is streamed by the code interpreter.

delta: string

The partial code snippet being streamed by the code interpreter.

item_id: string

The unique identifier of the code interpreter tool call item.

output_index: number

The index of the output item in the response for which the code is being streamed.

sequence_number: number

The sequence number of this event, used to order streaming events.

type: "response.code_interpreter_call_code.delta"

The type of the event. Always response.code_interpreter_call_code.delta.

ResponseCodeInterpreterCallCodeDoneEvent { code, item_id, output_index, 2 more }

Emitted when the code snippet is finalized by the code interpreter.

code: string

The final code snippet output by the code interpreter.

item_id: string

The unique identifier of the code interpreter tool call item.

output_index: number

The index of the output item in the response for which the code is finalized.

sequence_number: number

The sequence number of this event, used to order streaming events.

type: "response.code_interpreter_call_code.done"

The type of the event. Always response.code_interpreter_call_code.done.

ResponseCodeInterpreterCallCompletedEvent { item_id, output_index, sequence_number, type }

Emitted when the code interpreter call is completed.

item_id: string

The unique identifier of the code interpreter tool call item.

output_index: number

The index of the output item in the response for which the code interpreter call is completed.

sequence_number: number

The sequence number of this event, used to order streaming events.

type: "response.code_interpreter_call.completed"

The type of the event. Always response.code_interpreter_call.completed.

ResponseCodeInterpreterCallInProgressEvent { item_id, output_index, sequence_number, type }

Emitted when a code interpreter call is in progress.

item_id: string

The unique identifier of the code interpreter tool call item.

output_index: number

The index of the output item in the response for which the code interpreter call is in progress.

sequence_number: number

The sequence number of this event, used to order streaming events.

type: "response.code_interpreter_call.in_progress"

The type of the event. Always response.code_interpreter_call.in_progress.

ResponseCodeInterpreterCallInterpretingEvent { item_id, output_index, sequence_number, type }

Emitted when the code interpreter is actively interpreting the code snippet.

item_id: string

The unique identifier of the code interpreter tool call item.

output_index: number

The index of the output item in the response for which the code interpreter is interpreting code.

sequence_number: number

The sequence number of this event, used to order streaming events.

type: "response.code_interpreter_call.interpreting"

The type of the event. Always response.code_interpreter_call.interpreting.

ResponseCompletedEvent { response, sequence_number, type }

Emitted when the model response is complete.

response: Response { id, created_at, error, 29 more }

Properties of the completed response.

id: string

Unique identifier for this Response.

created_at: number

Unix timestamp (in seconds) of when this Response was created.

error: ResponseError { code, message } | null

An error object returned when the model fails to generate a Response.

code: "server_error" | "rate_limit_exceeded" | "invalid_prompt" | 15 more

The error code for the response.

Accepts one of the following:
"server_error"
"rate_limit_exceeded"
"invalid_prompt"
"vector_store_timeout"
"invalid_image"
"invalid_image_format"
"invalid_base64_image"
"invalid_image_url"
"image_too_large"
"image_too_small"
"image_parse_error"
"image_content_policy_violation"
"invalid_image_mode"
"image_file_too_large"
"unsupported_image_media_type"
"empty_image_file"
"failed_to_download_image"
"image_file_not_found"
message: string

A human-readable description of the error.

incomplete_details: IncompleteDetails { reason } | null

Details about why the response is incomplete.

reason?: "max_output_tokens" | "content_filter"

The reason why the response is incomplete.

Accepts one of the following:
"max_output_tokens"
"content_filter"
instructions: string | Array<ResponseInputItem> | null

A system (or developer) message inserted into the model's context.

When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.

Accepts one of the following:
string
EasyInputMessage { content, role, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: string | ResponseInputMessageContentList

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
string
ResponseInputMessageContentList = Array<ResponseInputContent>

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.
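Since image_url accepts either a fully qualified URL or a base64 data URL, a local image can be inlined directly. A minimal sketch, assuming a PNG payload and the input_image shape described above:

```typescript
// Encode raw image bytes as a base64 data URL suitable for image_url.
function toImageDataUrl(bytes: Uint8Array, mimeType = "image/png"): string {
  const base64 = Buffer.from(bytes).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}

// Build an input_image content item per the schema above.
function makeImageInput(url: string, detail: "low" | "high" | "auto" = "auto") {
  return { type: "input_image" as const, detail, image_url: url };
}
```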

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "assistant" | "system" | "developer"

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type?: "message"

The type of the message input. Always message.

Message { content, role, status, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role.

content: ResponseInputMessageContentList = Array<ResponseInputContent>

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "system" | "developer"

The role of the message input. One of user, system, or developer.

Accepts one of the following:
"user"
"system"
"developer"
status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type?: "message"

The type of the message input. Always set to message.

ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.
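The start_index/end_index pair locates the citation inside the message text. A small sketch of consuming those indices, assuming end_index is treated as an exclusive slice bound (verify against real payloads):

```typescript
interface URLCitation {
  type: "url_citation";
  start_index: number;
  end_index: number;
  title: string;
  url: string;
}

// Extract the substring of the output text that the citation covers.
function citedSpan(text: string, c: URLCitation): string {
  return text.slice(c.start_index, c.end_index);
}
```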

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>
token: string
bytes: Array<number>
logprob: number
top_logprobs: Array<TopLogprob>
token: string
bytes: Array<number>
logprob: number
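The logprob values are natural logarithms, so exponentiating recovers a token's probability, and summing logprobs gives the joint log probability of a sequence. A quick illustration:

```typescript
// Convert a natural-log probability back to a probability in [0, 1].
function tokenProbability(logprob: number): number {
  return Math.exp(logprob);
}

// Joint log probability of a token sequence is the sum of per-token logprobs.
function sequenceLogprob(logprobs: Array<{ logprob: number }>): number {
  return logprobs.reduce((sum, t) => sum + t.logprob, 0);
}
```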
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

A click action.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.

ComputerCallOutput { call_id, output, type, 3 more }

The output of a computer tool call.

call_id: string

The ID of the computer tool call that produced the output.

maxLength: 64
minLength: 1
output: ResponseComputerToolCallOutputScreenshot { type, file_id, image_url }

A computer screenshot image used with the computer use tool.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

file_id?: string

The identifier of an uploaded file that contains the screenshot.

image_url?: string

The URL of the screenshot image.

type: "computer_call_output"

The type of the computer tool call output. Always computer_call_output.

id?: string | null

The ID of the computer tool call output.

acknowledged_safety_checks?: Array<AcknowledgedSafetyCheck> | null

The safety checks reported by the API that have been acknowledged by the developer.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
FunctionCallOutput { call_id, output, type, 2 more }

The output of a function tool call.

call_id: string

The unique ID of the function tool call generated by the model.

maxLength: 64
minLength: 1
output: string | ResponseFunctionCallOutputItemList

Text, image, or file output of the function tool call.

Accepts one of the following:
string
ResponseFunctionCallOutputItemList = Array<ResponseFunctionCallOutputItem>

An array of content outputs (text, image, file) for the function tool call.

Accepts one of the following:
ResponseInputTextContent { text, type }

A text input to the model.

text: string

The text input to the model.

maxLength: 10485760
type: "input_text"

The type of the input item. Always input_text.

ResponseInputImageContent { type, detail, file_id, image_url }

An image input to the model. Learn about image inputs.

type: "input_image"

The type of the input item. Always input_image.

detail?: "low" | "high" | "auto" | null

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

maxLength: 20971520
ResponseInputFileContent { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string | null

The base64-encoded data of the file to be sent to the model.

maxLength: 33554432
file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string | null

The URL of the file to be sent to the model.

filename?: string | null

The name of the file to be sent to the model.

type: "function_call_output"

The type of the function tool call output. Always function_call_output.

id?: string | null

The unique ID of the function tool call output. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItemParam { encrypted_content, type, id }

A compaction item generated by the v1/responses/compact API.

encrypted_content: string

The encrypted content of the compaction summary.

maxLength: 10485760
type: "compaction"

The type of the item. Always compaction.

id?: string | null

The ID of the compaction item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

LocalShellCallOutput { id, output, type, status }

The output of a local shell tool call.

id: string

The unique ID of the local shell tool call generated by the model.

output: string

A JSON string of the output of the local shell tool call.

type: "local_shell_call_output"

The type of the local shell tool call output. Always local_shell_call_output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCall { action, call_id, type, 2 more }

A tool representing a request to execute one or more shell commands.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

Ordered shell commands for the execution environment to run.

max_output_length?: number | null

Maximum number of UTF-8 characters to capture from combined stdout and stderr output.

timeout_ms?: number | null

Maximum wall-clock time in milliseconds to allow the shell commands to run.
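The max_output_length limit is counted in UTF-8 characters over the combined stdout and stderr stream. A sketch of applying such a cap locally; Array.from splits on code points so multi-byte characters are not cut in half:

```typescript
// Truncate combined stdout/stderr to a character budget.
function capOutput(stdout: string, stderr: string, maxLen: number): string {
  const combined = stdout + stderr;
  const chars = Array.from(combined);
  return chars.length <= maxLen ? combined : chars.slice(0, maxLen).join("");
}
```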

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
type: "shell_call"

The type of the item. Always shell_call.

id?: string | null

The unique ID of the shell tool call. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCallOutput { call_id, output, type, 3 more }

The streamed output items emitted by a shell tool call.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
output: Array<ResponseFunctionShellCallOutputContent { outcome, stderr, stdout } >

Captured chunks of stdout and stderr output, along with their associated outcomes.

outcome: Timeout { type } | Exit { exit_code, type }

The exit or timeout outcome associated with this shell call.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

The exit code returned by the shell process.

type: "exit"

The outcome type. Always exit.
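The outcome field is a discriminated union, so callers branch on its type tag. A small sketch of consuming the Timeout | Exit union above:

```typescript
type ShellOutcome =
  | { type: "timeout" }
  | { type: "exit"; exit_code: number };

// Summarize a shell call outcome by branching on the discriminator.
function describeOutcome(o: ShellOutcome): string {
  switch (o.type) {
    case "timeout":
      return "shell call timed out";
    case "exit":
      return o.exit_code === 0 ? "succeeded" : `failed with code ${o.exit_code}`;
  }
}
```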

stderr: string

Captured stderr output for the shell call.

maxLength: 10485760
stdout: string

Captured stdout output for the shell call.

maxLength: 10485760
type: "shell_call_output"

The type of the item. Always shell_call_output.

id?: string | null

The unique ID of the shell tool call output. Populated when this item is returned via API.

max_output_length?: number | null

The maximum number of UTF-8 characters captured for this shell call's combined output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call output.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ApplyPatchCall { call_id, operation, status, 2 more }

A tool call representing a request to create, delete, or update files using diff patches.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

The specific create, delete, or update instruction for the apply_patch tool call.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction for creating a new file via the apply_patch tool.

diff: string

Unified diff content to apply when creating the file.

maxLength: 10485760
path: string

Path of the file to create relative to the workspace root.

minLength: 1
type: "create_file"

The operation type. Always create_file.

DeleteFile { path, type }

Instruction for deleting an existing file via the apply_patch tool.

path: string

Path of the file to delete relative to the workspace root.

minLength: 1
type: "delete_file"

The operation type. Always delete_file.

UpdateFile { diff, path, type }

Instruction for updating an existing file via the apply_patch tool.

diff: string

Unified diff content to apply to the existing file.

maxLength: 10485760
path: string

Path of the file to update relative to the workspace root.

minLength: 1
type: "update_file"

The operation type. Always update_file.
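The operation field is likewise a three-way discriminated union. A sketch of dispatching on it; an actual implementation would hand the diff to a unified-diff engine, which is out of scope here:

```typescript
type PatchOp =
  | { type: "create_file"; path: string; diff: string }
  | { type: "delete_file"; path: string }
  | { type: "update_file"; path: string; diff: string };

// Produce a one-line summary of an apply_patch operation.
function describeOp(op: PatchOp): string {
  switch (op.type) {
    case "create_file":
      return `create ${op.path}`;
    case "delete_file":
      return `delete ${op.path}`;
    case "update_file":
      return `update ${op.path}`;
  }
}
```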

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

id?: string | null

The unique ID of the apply patch tool call. Populated when this item is returned via API.

ApplyPatchCallOutput { call_id, status, type, 2 more }

The streamed output emitted by an apply patch tool call.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

id?: string | null

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

output?: string | null

Optional human-readable log text from the apply patch tool (e.g., patch results or errors).

maxLength: 10485760
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

McpApprovalResponse { approval_request_id, approve, type, 2 more }

A response to an MCP approval request.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

id?: string | null

The unique ID of the approval response.

reason?: string | null

Optional reason for the decision.
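An mcp_approval_request is answered by sending an mcp_approval_response that echoes the request's ID. A minimal sketch following the fields above:

```typescript
// Build an mcp_approval_response item answering a given approval request.
function respondToApproval(requestId: string, approve: boolean, reason?: string) {
  return {
    type: "mcp_approval_response" as const,
    approval_request_id: requestId,
    approve,
    ...(reason ? { reason } : {}),
  };
}
```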

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
ResponseCustomToolCallOutput { call_id, output, type, id }

The output of a custom tool call from your code, being sent back to the model.

call_id: string

The call ID, used to map this custom tool call output to a custom tool call.

output: string | Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >

The output from the custom tool call generated by your code. Can be a string or a list of output content.

Accepts one of the following:
string
Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

type: "custom_tool_call_output"

The type of the custom tool call output. Always custom_tool_call_output.

id?: string

The unique ID of the custom tool call output in the OpenAI platform.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

ItemReference { id, type }

An internal identifier for an item to reference.

id: string

The ID of the item to reference.

type?: "item_reference" | null

The type of item to reference. Always item_reference.

metadata: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model: ResponsesModel

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

Accepts one of the following:
(string & {})
ChatModel = "gpt-5.2" | "gpt-5.2-2025-12-11" | "gpt-5.2-chat-latest" | 69 more
Accepts one of the following:
"gpt-5.2"
"gpt-5.2-2025-12-11"
"gpt-5.2-chat-latest"
"gpt-5.2-pro"
"gpt-5.2-pro-2025-12-11"
"gpt-5.1"
"gpt-5.1-2025-11-13"
"gpt-5.1-codex"
"gpt-5.1-mini"
"gpt-5.1-chat-latest"
"gpt-5"
"gpt-5-mini"
"gpt-5-nano"
"gpt-5-2025-08-07"
"gpt-5-mini-2025-08-07"
"gpt-5-nano-2025-08-07"
"gpt-5-chat-latest"
"gpt-4.1"
"gpt-4.1-mini"
"gpt-4.1-nano"
"gpt-4.1-2025-04-14"
"gpt-4.1-mini-2025-04-14"
"gpt-4.1-nano-2025-04-14"
"o4-mini"
"o4-mini-2025-04-16"
"o3"
"o3-2025-04-16"
"o3-mini"
"o3-mini-2025-01-31"
"o1"
"o1-2024-12-17"
"o1-preview"
"o1-preview-2024-09-12"
"o1-mini"
"o1-mini-2024-09-12"
"gpt-4o"
"gpt-4o-2024-11-20"
"gpt-4o-2024-08-06"
"gpt-4o-2024-05-13"
"gpt-4o-audio-preview"
"gpt-4o-audio-preview-2024-10-01"
"gpt-4o-audio-preview-2024-12-17"
"gpt-4o-audio-preview-2025-06-03"
"gpt-4o-mini-audio-preview"
"gpt-4o-mini-audio-preview-2024-12-17"
"gpt-4o-search-preview"
"gpt-4o-mini-search-preview"
"gpt-4o-search-preview-2025-03-11"
"gpt-4o-mini-search-preview-2025-03-11"
"chatgpt-4o-latest"
"codex-mini-latest"
"gpt-4o-mini"
"gpt-4o-mini-2024-07-18"
"gpt-4-turbo"
"gpt-4-turbo-2024-04-09"
"gpt-4-0125-preview"
"gpt-4-turbo-preview"
"gpt-4-1106-preview"
"gpt-4-vision-preview"
"gpt-4"
"gpt-4-0314"
"gpt-4-0613"
"gpt-4-32k"
"gpt-4-32k-0314"
"gpt-4-32k-0613"
"gpt-3.5-turbo"
"gpt-3.5-turbo-16k"
"gpt-3.5-turbo-0301"
"gpt-3.5-turbo-0613"
"gpt-3.5-turbo-1106"
"gpt-3.5-turbo-0125"
"gpt-3.5-turbo-16k-0613"
"o1-pro" | "o1-pro-2025-03-19" | "o3-pro" | 11 more
"o1-pro"
"o1-pro-2025-03-19"
"o3-pro"
"o3-pro-2025-06-10"
"o3-deep-research"
"o3-deep-research-2025-06-26"
"o4-mini-deep-research"
"o4-mini-deep-research-2025-06-26"
"computer-use-preview"
"computer-use-preview-2025-03-11"
"gpt-5-codex"
"gpt-5-pro"
"gpt-5-pro-2025-10-06"
"gpt-5.1-codex-max"
object: "response"

The object type of this resource - always set to response.

output: Array<ResponseOutputItem>

An array of content items generated by the model.

  • The length and order of items in the output array depend on the model's response.
  • Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
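The advice above can be implemented as a small helper that scans every output item instead of assuming the first one is the assistant message; a sketch using reduced stand-in types rather than the SDK's own:

```typescript
// Minimal stand-in shapes for the SDK's output item and content types.
type OutputContent =
  | { type: "output_text"; text: string }
  | { type: "refusal"; refusal: string };
type OutputItem = { type: string; content?: OutputContent[] };

// Concatenate all output_text parts across every assistant message,
// mirroring what the SDK's output_text convenience does.
export function collectOutputText(output: OutputItem[]): string {
  let text = "";
  for (const item of output) {
    if (item.type !== "message" || !item.content) continue;
    for (const part of item.content) {
      if (part.type === "output_text") text += part.text;
    }
  }
  return text;
}
```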
Accepts one of the following:
ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.
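Several of the citation variants above locate themselves in the message text via character indices; a sketch that recovers the annotated span for a URL citation (the `UrlCitationLite` shape is reduced to the fields used, and the end index is assumed to be exclusive for slicing, which should be verified against real payloads):

```typescript
interface UrlCitationLite {
  start_index: number; // index of the first character of the citation
  end_index: number;   // index of the last character of the citation
  url: string;
  title: string;
}

// Return the substring of `text` that the citation annotates.
// Assumes end_index works as an exclusive slice bound.
export function citedSpan(text: string, c: UrlCitationLite): string {
  return text.slice(c.start_index, c.end_index);
}
```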

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>
token: string
bytes: Array<number>
logprob: number
top_logprobs: Array<TopLogprob>
token: string
bytes: Array<number>
logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

An action to perform on the computer, such as a click, drag, or keypress.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.
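Because `action` is a discriminated union on `type`, an executor can switch over it exhaustively; a sketch covering a few variants (the returned trace strings stand in for a real automation layer):

```typescript
// A subset of the documented computer-use action variants.
type ComputerAction =
  | { type: "click"; button: "left" | "right" | "wheel" | "back" | "forward"; x: number; y: number }
  | { type: "scroll"; x: number; y: number; scroll_x: number; scroll_y: number }
  | { type: "wait" }
  | { type: "screenshot" };

// Return a human-readable trace of what would be performed; a real
// executor would drive a browser or OS automation library instead.
export function describeAction(action: ComputerAction): string {
  switch (action.type) {
    case "click":
      return `${action.button}-click at (${action.x}, ${action.y})`;
    case "scroll":
      return `scroll by (${action.scroll_x}, ${action.scroll_y}) at (${action.x}, ${action.y})`;
    case "wait":
      return "wait";
    case "screenshot":
      return "screenshot";
  }
}
```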

ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItem { id, encrypted_content, type, created_by }

A compaction item generated by the v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by?: string

The identifier of the actor that created the item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
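Since `outputs` mixes log and image entries (and may be null), consumers usually split it by `type`; a sketch:

```typescript
type InterpreterOutput =
  | { type: "logs"; logs: string }
  | { type: "image"; url: string };

// Separate code interpreter outputs into concatenated logs and image URLs.
export function splitOutputs(outputs: InterpreterOutput[] | null) {
  const logs: string[] = [];
  const images: string[] = [];
  for (const o of outputs ?? []) {
    if (o.type === "logs") logs.push(o.logs);
    else images.push(o.url);
  }
  return { logs: logs.join(""), images };
}
```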

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

ResponseFunctionShellToolCall { id, action, call_id, 3 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

The shell commands to run.
max_output_length: number | null

Optional maximum number of characters to return from each command.

timeout_ms: number | null

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseFunctionShellToolCallOutput { id, call_id, max_output_length, 4 more }

The output of a shell tool call that was emitted.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number | null

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: Array<Output>

An array of shell call output contents.

outcome: Timeout { type } | Exit { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by?: string

The identifier of the actor that created the item.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by?: string

The identifier of the actor that created the item.
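Each output chunk's `outcome` is either a timeout or an exit with a code; a sketch that classifies a chunk as success or failure:

```typescript
type ShellOutcome = { type: "timeout" } | { type: "exit"; exit_code: number };

interface ShellOutputChunk {
  outcome: ShellOutcome;
  stdout: string;
  stderr: string;
}

// Decide whether a shell output chunk represents success, and why not otherwise.
export function summarizeChunk(chunk: ShellOutputChunk): { ok: boolean; reason: string } {
  if (chunk.outcome.type === "timeout") {
    return { ok: false, reason: "command exceeded its time limit" };
  }
  const code = chunk.outcome.exit_code;
  return code === 0
    ? { ok: true, reason: "exit 0" }
    : { ok: false, reason: `exit ${code}: ${chunk.stderr}` };
}
```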

ResponseApplyPatchToolCall { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by?: string

The ID of the entity that created this tool call.
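Because `operation` is discriminated on `type`, an applier switches over the three cases; a sketch that uses an in-memory file map and stores the raw diff instead of actually parsing and applying it:

```typescript
type PatchOp =
  | { type: "create_file"; path: string; diff: string }
  | { type: "delete_file"; path: string }
  | { type: "update_file"; path: string; diff: string };

// Apply one operation to an in-memory file map. A real applier would
// parse and apply `diff`; here we just record it to show the control flow.
export function applyOp(files: Map<string, string>, op: PatchOp): void {
  switch (op.type) {
    case "create_file":
      files.set(op.path, op.diff);
      break;
    case "delete_file":
      files.delete(op.path);
      break;
    case "update_file":
      files.set(op.path, (files.get(op.path) ?? "") + op.diff);
      break;
  }
}
```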

ResponseApplyPatchToolCallOutput { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by?: string

The ID of the entity that created this tool call output.

output?: string | null

Optional textual output returned by the apply patch tool.

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.
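An approval request is answered by sending an `mcp_approval_response` input item that references the request's `id` in a follow-up call; a sketch of building that item (field names follow the documented approval flow, but treat the exact shape as an assumption to verify against the API reference):

```typescript
type McpApprovalRequest = {
  type: "mcp_approval_request";
  id: string;
  name: string;
  arguments: string;
  server_label: string;
};

// Build the input item that approves or rejects a pending MCP tool call.
// The request's id is echoed back as approval_request_id.
export function answerApproval(req: McpApprovalRequest, approve: boolean) {
  return {
    type: "mcp_approval_response" as const,
    approval_request_id: req.id,
    approve,
  };
}
```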

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

parallel_tool_calls: boolean

Whether to allow the model to run tool calls in parallel.

temperature: number | null

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

minimum: 0
maximum: 2
tool_choice: ToolChoiceOptions | ToolChoiceAllowed { mode, tools, type } | ToolChoiceTypes { type } | 5 more

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.

Accepts one of the following:
ToolChoiceOptions = "none" | "auto" | "required"

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

Accepts one of the following:
"none"
"auto"
"required"
ToolChoiceAllowed { mode, tools, type }

Constrains the tools available to the model to a pre-defined set.

mode: "auto" | "required"

Constrains the tools available to the model to a pre-defined set.

auto allows the model to pick from among the allowed tools and generate a message.

required requires the model to call one or more of the allowed tools.

Accepts one of the following:
"auto"
"required"
tools: Array<Record<string, unknown>>

A list of tool definitions that the model should be allowed to call.

For the Responses API, the list of tool definitions might look like:

[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]
type: "allowed_tools"

Allowed tool configuration type. Always allowed_tools.

ToolChoiceTypes { type }

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.

type: "file_search" | "web_search_preview" | "computer_use_preview" | 3 more

The type of hosted tool the model should use. Learn more about built-in tools.

Allowed values are:

  • file_search
  • web_search_preview
  • computer_use_preview
  • code_interpreter
  • image_generation
Accepts one of the following:
"file_search"
"web_search_preview"
"computer_use_preview"
"web_search_preview_2025_03_11"
"image_generation"
"code_interpreter"
ToolChoiceFunction { name, type }

Use this option to force the model to call a specific function.

name: string

The name of the function to call.

type: "function"

For function calling, the type is always function.

ToolChoiceMcp { server_label, type, name }

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: string

The label of the MCP server to use.

type: "mcp"

For MCP tools, the type is always mcp.

name?: string | null

The name of the tool to call on the server.

ToolChoiceCustom { name, type }

Use this option to force the model to call a specific custom tool.

name: string

The name of the custom tool to call.

type: "custom"

For custom tool calling, the type is always custom.

ToolChoiceApplyPatch { type }

Forces the model to call the apply_patch tool when executing a tool call.

type: "apply_patch"

The tool to call. Always apply_patch.

ToolChoiceShell { type }

Forces the model to call the shell tool when a tool call is required.

type: "shell"

The tool to call. Always shell.
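These variants are plain objects passed as the `tool_choice` request parameter; a sketch of two fragments (the `get_weather` function name is illustrative):

```typescript
// Force the model to call one specific function.
export const forceFunction = {
  tool_choice: { type: "function" as const, name: "get_weather" },
};

// Restrict the model to a pre-defined subset of tools, while still
// letting it decide whether to call one (mode: "auto").
export const allowedSubset = {
  tool_choice: {
    type: "allowed_tools" as const,
    mode: "auto" as const,
    tools: [
      { type: "function", name: "get_weather" },
      { type: "image_generation" },
    ],
  },
};
```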

tools: Array<Tool>

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

We support the following categories of tools:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
  • MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Accepts one of the following:
FunctionTool { name, parameters, strict, 2 more }

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: string

The name of the function to call.

parameters: Record<string, unknown> | null

A JSON schema object describing the parameters of the function.

strict: boolean | null

Whether to enforce strict parameter validation. Default true.

type: "function"

The type of the function tool. Always function.

description?: string | null

A description of the function. Used by the model to determine whether or not to call the function.

FileSearchTool { type, vector_store_ids, filters, 2 more }

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: "file_search"

The type of the file search tool. Always file_search.

vector_store_ids: Array<string>

The IDs of the vector stores to search.

filters?: ComparisonFilter { key, type, value } | CompoundFilter { filters, type } | null

A filter to apply.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 3 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
CompoundFilter { filters, type }

Combine multiple filters using and or or.

filters: Array<ComparisonFilter { key, type, value } | unknown>

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 3 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
unknown
type: "and" | "or"

Type of operation: and or or.

Accepts one of the following:
"and"
"or"
max_num_results?: number

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options?: RankingOptions { hybrid_search, ranker, score_threshold }

Ranking options for search.

ranker?: "auto" | "default-2024-11-15"

The ranker to use for the file search.

Accepts one of the following:
"auto"
"default-2024-11-15"
score_threshold?: number

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.

ComputerTool { display_height, display_width, environment, type }

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: number

The height of the computer display.

display_width: number

The width of the computer display.

environment: "windows" | "mac" | "linux" | 2 more

The type of computer environment to control.

Accepts one of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: "computer_use_preview"

The type of the computer use tool. Always computer_use_preview.

WebSearchTool { type, filters, search_context_size, user_location }

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: "web_search" | "web_search_2025_08_26"

The type of the web search tool. One of web_search or web_search_2025_08_26.

Accepts one of the following:
"web_search"
"web_search_2025_08_26"
filters?: Filters | null

Filters for the search.

allowed_domains?: Array<string> | null

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The approximate location of the user.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

type?: "approximate"

The type of location approximation. Always approximate.

Mcp { server_label, type, allowed_tools, 6 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

Accepts one of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

Accepts one of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object specifying which tools always require approval.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object specifying which tools never require approval.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.
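A sketch of an MCP tool entry using a custom server URL; the label, URL, and filter values are placeholders:

```typescript
// Illustrative MCP tool configuration using a custom server URL.
// The label and URL are placeholders.
const mcpTool = {
  type: "mcp" as const,
  server_label: "deepwiki",              // identifies this server in tool calls
  server_url: "https://example.com/mcp", // or provide connector_id instead
  allowed_tools: { read_only: true },    // only expose read-only tools
  require_approval: "never" as const,    // skip per-call approval prompts
};
```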

CodeInterpreter { container, type }

A tool that runs Python code to help generate a response to a prompt.

container: string | CodeInterpreterToolAuto { type, file_ids, memory_limit }

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

Accepts one of the following:
string
CodeInterpreterToolAuto { type, file_ids, memory_limit }

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: "auto"

Always auto.

file_ids?: Array<string>

An optional list of uploaded files to make available to your code.

memory_limit?: "1g" | "4g" | "16g" | "64g" | null

The memory limit for the code interpreter container.

Accepts one of the following:
"1g"
"4g"
"16g"
"64g"
type: "code_interpreter"

The type of the code interpreter tool. Always code_interpreter.
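A sketch of a code interpreter tool entry with an auto-managed container; the file ID is a placeholder:

```typescript
// Illustrative code interpreter tool with an auto-managed container.
// The file ID below is a placeholder, not a real uploaded file.
const codeInterpreterTool = {
  type: "code_interpreter" as const,
  container: {
    type: "auto" as const,
    file_ids: ["file-abc123"],   // placeholder uploaded-file ID
    memory_limit: "4g" as const, // "1g" | "4g" | "16g" | "64g"
  },
};
```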

ImageGeneration { type, action, background, 9 more }

A tool that generates images using the GPT image models.

type: "image_generation"

The type of the image generation tool. Always image_generation.

action?: "generate" | "edit" | "auto"

Whether to generate a new image or edit an existing image. Default: auto.

Accepts one of the following:
"generate"
"edit"
"auto"
background?: "transparent" | "opaque" | "auto"

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

Accepts one of the following:
"transparent"
"opaque"
"auto"
input_fidelity?: "high" | "low" | null

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.

Accepts one of the following:
"high"
"low"
input_image_mask?: InputImageMask { file_id, image_url }

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id?: string

File ID for the mask image.

image_url?: string

Base64-encoded mask image.

model?: (string & {}) | "gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
(string & {})
"gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation?: "auto" | "low"

Moderation level for the generated image. Default: auto.

Accepts one of the following:
"auto"
"low"
output_compression?: number

Compression level for the output image. Default: 100.

minimum: 0
maximum: 100
output_format?: "png" | "webp" | "jpeg"

The output format of the generated image. One of png, webp, or jpeg. Default: png.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_images?: number

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum: 0
maximum: 3
quality?: "low" | "medium" | "high" | "auto"

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size?: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
LocalShell { type }

A tool that allows the model to execute shell commands in a local environment.

type: "local_shell"

The type of the local shell tool. Always local_shell.

FunctionShellTool { type }

A tool that allows the model to execute shell commands.

type: "shell"

The type of the shell tool. Always shell.

CustomTool { name, type, description, format }

A custom tool that processes input using a specified format. Learn more about custom tools.

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

description?: string

Optional description of the custom tool, used to provide more context.

format?: Text { type } | Grammar { definition, syntax, type }

The input format for the custom tool. Default is unconstrained text.

Accepts one of the following:
Text { type }

Unconstrained free-form text.

type: "text"

Unconstrained text format. Always text.

Grammar { definition, syntax, type }

A grammar defined by the user.

definition: string

The grammar definition.

syntax: "lark" | "regex"

The syntax of the grammar definition. One of lark or regex.

Accepts one of the following:
"lark"
"regex"
type: "grammar"

Grammar format. Always grammar.
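A sketch of a custom tool constrained by a regex grammar; the tool name, description, and pattern are placeholders:

```typescript
// Illustrative custom tool constrained by a regex grammar. The name,
// description, and pattern are placeholders.
const customTool = {
  type: "custom" as const,
  name: "parse_date",
  description: "Extracts a date in YYYY-MM-DD form.",
  format: {
    type: "grammar" as const,
    syntax: "regex" as const,                       // "lark" | "regex"
    definition: String.raw`\d{4}-\d{2}-\d{2}`,      // the grammar definition
  },
};
```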

WebSearchPreviewTool { type, search_context_size, user_location }

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: "web_search_preview" | "web_search_preview_2025_03_11"

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

Accepts one of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The user's location.

type: "approximate"

The type of location approximation. Always approximate.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

ApplyPatchTool { type }

Allows the assistant to create, delete, or update files using unified diffs.

type: "apply_patch"

The type of the tool. Always apply_patch.

top_p: number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

minimum: 0
maximum: 1
background?: boolean | null

Whether to run the model response in the background. Learn more.

completed_at?: number | null

Unix timestamp (in seconds) of when this Response was completed. Only present when the status is completed.

conversation?: Conversation | null

The conversation that this response belonged to. Input items and output items from this response were automatically added to this conversation.

id: string

The unique ID of the conversation that this response was associated with.

max_output_tokens?: number | null

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls?: number | null

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

previous_response_id?: string | null

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.

prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

id: string

The unique identifier of the prompt template to use.

variables?: Record<string, string | ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } > | null

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

Accepts one of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

version?: string | null

Optional version of the prompt template.
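A sketch of a prompt-template reference combining the fields above; the template ID, version, and variable names are placeholders, and variables may mix plain strings with image or file inputs:

```typescript
// Illustrative prompt-template reference. The IDs and variable names
// are placeholders.
const prompt = {
  id: "pmpt_abc123", // placeholder template ID
  version: "2",      // optional pinned template version
  variables: {
    customer_name: "Ada",            // plain string substitution
    screenshot: {                    // image input substitution
      type: "input_image" as const,
      detail: "auto" as const,
      file_id: "file-xyz789",        // placeholder file ID
    },
  },
};
```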

prompt_cache_key?: string

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.

prompt_cache_retention?: "in-memory" | "24h" | null

The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.

Accepts one of the following:
"in-memory"
"24h"
reasoning?: Reasoning { effort, generate_summary, summary } | null

gpt-5 and o-series models only

Configuration options for reasoning models.

effort?: ReasoningEffort | null

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
generate_summary?: "auto" | "concise" | "detailed" | null (Deprecated)

Deprecated: use summary instead.

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

Accepts one of the following:
"auto"
"concise"
"detailed"
summary?: "auto" | "concise" | "detailed" | null

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

Accepts one of the following:
"auto"
"concise"
"detailed"
safety_identifier?: string

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The ID should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.

service_tier?: "auto" | "default" | "flex" | 2 more | null

Specifies the processing type used for serving the request.

  • If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
  • If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
  • If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
  • When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.

Accepts one of the following:
"auto"
"default"
"flex"
"scale"
"priority"

status?: "completed" | "failed" | "in_progress" | "cancelled" | "queued" | "incomplete"

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.

Accepts one of the following:
"completed"
"failed"
"in_progress"
"cancelled"
"queued"
"incomplete"
text?: ResponseTextConfig { format, verbosity }

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more about text generation.

format?: ResponseFormatText { type } | ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more } | ResponseFormatJSONObject { type }

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
ResponseFormatText { type }

Default response format. Used to generate text responses.

type: "text"

The type of response format being defined. Always text.

ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more }

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

name: string

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Record<string, unknown>

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

type: "json_schema"

The type of response format being defined. Always json_schema.

description?: string

A description of what the response format is for, used by the model to determine how to respond in the format.

strict?: boolean | null

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

ResponseFormatJSONObject { type }

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: "json_object"

The type of response format being defined. Always json_object.

verbosity?: "low" | "medium" | "high" | null

Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.

Accepts one of the following:
"low"
"medium"
"high"
top_logprobs?: number | null

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

minimum: 0
maximum: 20
truncation?: "auto" | "disabled" | null

The truncation strategy to use for the model response.

  • auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
  • disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Accepts one of the following:
"auto"
"disabled"
usage?: ResponseUsage { input_tokens, input_tokens_details, output_tokens, 2 more }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

total_tokens: number

The total number of tokens used.
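A sketch of a ResponseUsage payload with the two breakdowns that matter for cost accounting; the token counts are made-up example values:

```typescript
// Illustrative ResponseUsage payload (example values).
const usage = {
  input_tokens: 120,
  input_tokens_details: { cached_tokens: 80 },     // served from the prompt cache
  output_tokens: 45,
  output_tokens_details: { reasoning_tokens: 20 }, // hidden reasoning tokens
  total_tokens: 165,                               // input + output
};

// Uncached input tokens are billed at the full input rate.
const uncachedInput =
  usage.input_tokens - usage.input_tokens_details.cached_tokens; // 40

// Visible output tokens exclude reasoning tokens.
const visibleOutput =
  usage.output_tokens - usage.output_tokens_details.reasoning_tokens; // 25
```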

user?: string (Deprecated)

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.

sequence_number: number

The sequence number for this event.

type: "response.completed"

The type of the event. Always response.completed.

ResponseContentPartAddedEvent { content_index, item_id, output_index, 3 more }

Emitted when a new content part is added.

content_index: number

The index of the content part that was added.

item_id: string

The ID of the output item that the content part was added to.

output_index: number

The index of the output item that the content part was added to.

part: ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } | ReasoningText { text, type }

The content part that was added.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>

token: string

bytes: Array<number>

logprob: number

top_logprobs: Array<TopLogprob>

token: string

bytes: Array<number>

logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

ReasoningText { text, type }

Reasoning text from the model.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

sequence_number: number

The sequence number of this event.

type: "response.content_part.added"

The type of the event. Always response.content_part.added.

ResponseContentPartDoneEvent { content_index, item_id, output_index, 3 more }

Emitted when a content part is done.

content_index: number

The index of the content part that is done.

item_id: string

The ID of the output item that the content part was added to.

output_index: number

The index of the output item that the content part was added to.

part: ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } | ReasoningText { text, type }

The content part that is done.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>

token: string

bytes: Array<number>

logprob: number

top_logprobs: Array<TopLogprob>

token: string

bytes: Array<number>

logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

ReasoningText { text, type }

Reasoning text from the model.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

sequence_number: number

The sequence number of this event.

type: "response.content_part.done"

The type of the event. Always response.content_part.done.
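A minimal sketch of dispatching on the two content-part event types described above. The event objects here are hand-built stand-ins with only the shared identifying fields, not full stream events:

```typescript
// Minimal sketch of dispatching on streamed content-part events.
// These types carry only the identifying fields, not the full payload.
type ContentPartEvent =
  | { type: "response.content_part.added"; item_id: string; output_index: number; content_index: number }
  | { type: "response.content_part.done"; item_id: string; output_index: number; content_index: number };

function describe(event: ContentPartEvent): string {
  switch (event.type) {
    case "response.content_part.added":
      return `part ${event.content_index} added to item ${event.item_id}`;
    case "response.content_part.done":
      return `part ${event.content_index} of item ${event.item_id} done`;
  }
}

const msg = describe({
  type: "response.content_part.added",
  item_id: "msg_1",
  output_index: 0,
  content_index: 0,
}); // → "part 0 added to item msg_1"
```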

ResponseCreatedEvent { response, sequence_number, type }

An event that is emitted when a response is created.

response: Response { id, created_at, error, 29 more }

The response that was created.

id: string

Unique identifier for this Response.

created_at: number

Unix timestamp (in seconds) of when this Response was created.

error: ResponseError { code, message } | null

An error object returned when the model fails to generate a Response.

code: "server_error" | "rate_limit_exceeded" | "invalid_prompt" | 15 more

The error code for the response.

Accepts one of the following:
"server_error"
"rate_limit_exceeded"
"invalid_prompt"
"vector_store_timeout"
"invalid_image"
"invalid_image_format"
"invalid_base64_image"
"invalid_image_url"
"image_too_large"
"image_too_small"
"image_parse_error"
"image_content_policy_violation"
"invalid_image_mode"
"image_file_too_large"
"unsupported_image_media_type"
"empty_image_file"
"failed_to_download_image"
"image_file_not_found"
message: string

A human-readable description of the error.

incomplete_details: IncompleteDetails | null

Details about why the response is incomplete.

reason?: "max_output_tokens" | "content_filter"

The reason why the response is incomplete.

Accepts one of the following:
"max_output_tokens"
"content_filter"
instructions: string | Array<ResponseInputItem> | null

A system (or developer) message inserted into the model's context.

When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.

Accepts one of the following:
string
EasyInputMessage { content, role, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: string | ResponseInputMessageContentList

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
string
ResponseInputMessageContentList = Array<ResponseInputContent>

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "assistant" | "system" | "developer"

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type?: "message"

The type of the message input. Always message.

Message { content, role, status, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role.

content: ResponseInputMessageContentList

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "system" | "developer"

The role of the message input. One of user, system, or developer.

Accepts one of the following:
"user"
"system"
"developer"
status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type?: "message"

The type of the message input. Always set to message.
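
The fields above can be combined into a single message item carrying mixed content. The sketch below uses local type aliases as stand-ins for the documented shapes (they are not imported from the SDK), and `file-abc123` is a placeholder file ID.

```typescript
// Local stand-ins for the documented input content shapes (assumptions, not SDK imports).
type ResponseInputText = { type: "input_text"; text: string };
type ResponseInputImage = {
  type: "input_image";
  detail: "low" | "high" | "auto";
  file_id?: string | null;
  image_url?: string | null;
};
type ResponseInputFile = {
  type: "input_file";
  file_data?: string;
  file_id?: string | null;
  file_url?: string;
  filename?: string;
};

type MessageInput = {
  type?: "message";
  role: "user" | "system" | "developer";
  content: Array<ResponseInputText | ResponseInputImage | ResponseInputFile>;
};

// A user message combining text, an image (by URL), and a file (by ID).
const message: MessageInput = {
  type: "message",
  role: "user",
  content: [
    { type: "input_text", text: "Summarize the attached report." },
    { type: "input_image", detail: "auto", image_url: "https://example.com/chart.png" },
    { type: "input_file", file_id: "file-abc123" }, // hypothetical file ID
  ],
};

const contentTypes = message.content.map((part) => part.type);
```

Each content part is discriminated by its `type` field, so one `content` array can mix text, image, and file inputs freely.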

ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>
  token: string
  bytes: Array<number>
  logprob: number
  top_logprobs: Array<TopLogprob>
    token: string
    bytes: Array<number>
    logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to perform.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.
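
Because every action in the union above carries a literal `type` field, a TypeScript `switch` can narrow each case. This is a minimal sketch using local type aliases for a few of the documented action shapes; it is not the SDK's own types.

```typescript
// Local sketches of a few documented computer-use action shapes (assumptions).
type Click = {
  type: "click";
  button: "left" | "right" | "wheel" | "back" | "forward";
  x: number;
  y: number;
};
type Drag = { type: "drag"; path: Array<{ x: number; y: number }> };
type Keypress = { type: "keypress"; keys: string[] };
type Wait = { type: "wait" };
type ComputerAction = Click | Drag | Keypress | Wait;

// The `type` field is the discriminant, so each branch sees the narrowed shape.
function describe(action: ComputerAction): string {
  switch (action.type) {
    case "click":
      return `${action.button} click at (${action.x}, ${action.y})`;
    case "drag":
      return `drag through ${action.path.length} points`;
    case "keypress":
      return `press ${action.keys.join("+")}`;
    case "wait":
      return "wait";
  }
}

const drag: Drag = { type: "drag", path: [{ x: 100, y: 200 }, { x: 200, y: 300 }] };
const description = describe(drag);
```

The same narrowing pattern applies to the full nine-member action union; only a subset is sketched here.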

ComputerCallOutput { call_id, output, type, 3 more }

The output of a computer tool call.

call_id: string

The ID of the computer tool call that produced the output.

maxLength: 64
minLength: 1
output: ResponseComputerToolCallOutputScreenshot { type, file_id, image_url }

A computer screenshot image used with the computer use tool.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

file_id?: string

The identifier of an uploaded file that contains the screenshot.

image_url?: string

The URL of the screenshot image.

type: "computer_call_output"

The type of the computer tool call output. Always computer_call_output.

id?: string | null

The ID of the computer tool call output.

acknowledged_safety_checks?: Array<AcknowledgedSafetyCheck> | null

The safety checks reported by the API that have been acknowledged by the developer.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
FunctionCallOutput { call_id, output, type, 2 more }

The output of a function tool call.

call_id: string

The unique ID of the function tool call generated by the model.

maxLength: 64
minLength: 1
output: string | ResponseFunctionCallOutputItemList

Text, image, or file output of the function tool call.

Accepts one of the following:
string
ResponseFunctionCallOutputItemList = Array<ResponseFunctionCallOutputItem>

An array of content outputs (text, image, file) for the function tool call.

Accepts one of the following:
ResponseInputTextContent { text, type }

A text input to the model.

text: string

The text input to the model.

maxLength: 10485760
type: "input_text"

The type of the input item. Always input_text.

ResponseInputImageContent { type, detail, file_id, image_url }

An image input to the model. Learn about image inputs.

type: "input_image"

The type of the input item. Always input_image.

detail?: "low" | "high" | "auto" | null

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

maxLength: 20971520
ResponseInputFileContent { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string | null

The base64-encoded data of the file to be sent to the model.

maxLength: 33554432
file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string | null

The URL of the file to be sent to the model.

filename?: string | null

The name of the file to be sent to the model.

type: "function_call_output"

The type of the function tool call output. Always function_call_output.

id?: string | null

The unique ID of the function tool call output. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItemParam { encrypted_content, type, id }

A compaction item generated by the v1/responses/compact API.

encrypted_content: string

The encrypted content of the compaction summary.

maxLength: 10485760
type: "compaction"

The type of the item. Always compaction.

id?: string | null

The ID of the compaction item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

LocalShellCallOutput { id, output, type, status }

The output of a local shell tool call.

id: string

The unique ID of the local shell tool call generated by the model.

output: string

A JSON string of the output of the local shell tool call.

type: "local_shell_call_output"

The type of the local shell tool call output. Always local_shell_call_output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCall { action, call_id, type, 2 more }

A tool call representing a request to execute one or more shell commands.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

Ordered shell commands for the execution environment to run.

max_output_length?: number | null

Maximum number of UTF-8 characters to capture from combined stdout and stderr output.

timeout_ms?: number | null

Maximum wall-clock time in milliseconds to allow the shell commands to run.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
type: "shell_call"

The type of the item. Always shell_call.

id?: string | null

The unique ID of the shell tool call. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCallOutput { call_id, output, type, 3 more }

The streamed output items emitted by a shell tool call.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
output: Array<ResponseFunctionShellCallOutputContent { outcome, stderr, stdout } >

Captured chunks of stdout and stderr output, along with their associated outcomes.

outcome: Timeout { type } | Exit { exit_code, type }

The exit or timeout outcome associated with this shell call.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

The exit code returned by the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

Captured stderr output for the shell call.

maxLength: 10485760
stdout: string

Captured stdout output for the shell call.

maxLength: 10485760
type: "shell_call_output"

The type of the item. Always shell_call_output.

id?: string | null

The unique ID of the shell tool call output. Populated when this item is returned via API.

max_output_length?: number | null

The maximum number of UTF-8 characters captured for this shell call's combined output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call output.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ApplyPatchCall { call_id, operation, status, 2 more }

A tool call representing a request to create, delete, or update files using diff patches.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

The specific create, delete, or update instruction for the apply_patch tool call.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction for creating a new file via the apply_patch tool.

diff: string

Unified diff content to apply when creating the file.

maxLength: 10485760
path: string

Path of the file to create relative to the workspace root.

minLength: 1
type: "create_file"

The operation type. Always create_file.

DeleteFile { path, type }

Instruction for deleting an existing file via the apply_patch tool.

path: string

Path of the file to delete relative to the workspace root.

minLength: 1
type: "delete_file"

The operation type. Always delete_file.

UpdateFile { diff, path, type }

Instruction for updating an existing file via the apply_patch tool.

diff: string

Unified diff content to apply to the existing file.

maxLength: 10485760
path: string

Path of the file to update relative to the workspace root.

minLength: 1
type: "update_file"

The operation type. Always update_file.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

id?: string | null

The unique ID of the apply patch tool call. Populated when this item is returned via API.
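
Putting the operation union together, an apply_patch_call with a create_file operation might look like the sketch below. The types are local stand-ins for the documented shapes, and the call ID, path, and diff content are hypothetical.

```typescript
// Local sketches of the documented apply_patch shapes (assumptions).
type CreateFile = { type: "create_file"; path: string; diff: string };
type ApplyPatchCall = {
  type: "apply_patch_call";
  call_id: string;
  status: "in_progress" | "completed";
  operation: CreateFile; // delete_file and update_file are the other variants
};

const call: ApplyPatchCall = {
  type: "apply_patch_call",
  call_id: "call_patch_1", // hypothetical ID
  status: "completed",
  operation: {
    type: "create_file",
    path: "src/hello.ts", // relative to the workspace root
    diff: '+export const greeting = "hello";\n', // unified-diff content to write
  },
};

const operationType = call.operation.type;
```

The `operation.type` discriminant (`create_file`, `delete_file`, or `update_file`) tells the executor which filesystem action the diff describes.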

ApplyPatchCallOutput { call_id, status, type, 2 more }

The streamed output emitted by an apply patch tool call.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

id?: string | null

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

output?: string | null

Optional human-readable log text from the apply patch tool (e.g., patch results or errors).

maxLength: 10485760
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

McpApprovalResponse { approval_request_id, approve, type, 2 more }

A response to an MCP approval request.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

id?: string | null

The unique ID of the approval response.

reason?: string | null

Optional reason for the decision.
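
The approval flow links the two item types above by ID: the response's `approval_request_id` must echo the request's `id`. A sketch with local type stand-ins (the tool name and server label are illustrative):

```typescript
// Local sketches of the documented MCP approval shapes (assumptions).
type McpApprovalRequest = {
  type: "mcp_approval_request";
  id: string;
  name: string;
  server_label: string;
  arguments: string;
};
type McpApprovalResponse = {
  type: "mcp_approval_response";
  approval_request_id: string;
  approve: boolean;
  reason?: string | null;
};

// Answer a request, echoing its id as approval_request_id.
function answer(request: McpApprovalRequest, approve: boolean, reason?: string): McpApprovalResponse {
  return {
    type: "mcp_approval_response",
    approval_request_id: request.id,
    approve,
    ...(reason ? { reason } : {}),
  };
}

const response = answer(
  { type: "mcp_approval_request", id: "apr_1", name: "search", server_label: "docs", arguments: "{}" },
  false,
  "Tool not allowed in this context.",
);
```

Sending this item back as input resolves the pending request; the tool only runs when `approve` is true.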

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
ResponseCustomToolCallOutput { call_id, output, type, id }

The output of a custom tool call from your code, being sent back to the model.

call_id: string

The call ID, used to map this custom tool call output to a custom tool call.

output: string | Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >

The output from the custom tool call generated by your code. Can be a string or a list of output content.

Accepts one of the following:
string
Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

type: "custom_tool_call_output"

The type of the custom tool call output. Always custom_tool_call_output.

id?: string

The unique ID of the custom tool call output in the OpenAI platform.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

ItemReference { id, type }

An internal identifier for an item to reference.

id: string

The ID of the item to reference.

type?: "item_reference" | null

The type of item to reference. Always item_reference.

metadata: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
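
Those limits (at most 16 pairs, keys up to 64 characters, string values up to 512 characters) are easy to check client-side before sending. A minimal validation sketch; the helper name is illustrative:

```typescript
// Validate a metadata object against the documented limits:
// at most 16 pairs, keys <= 64 chars, string values <= 512 chars.
function isValidMetadata(metadata: Record<string, string>): boolean {
  const entries = Object.entries(metadata);
  if (entries.length > 16) return false;
  return entries.every(([key, value]) => key.length <= 64 && value.length <= 512);
}

const ok = isValidMetadata({ user_id: "u_42", source: "docs-example" });
const tooLongKey = isValidMetadata({ ["k".repeat(65)]: "v" });
```

Validating locally avoids a round trip that would otherwise fail with a request error.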

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

Accepts one of the following:
(string & {})
ChatModel = "gpt-5.2" | "gpt-5.2-2025-12-11" | "gpt-5.2-chat-latest" | 69 more
Accepts one of the following:
"gpt-5.2"
"gpt-5.2-2025-12-11"
"gpt-5.2-chat-latest"
"gpt-5.2-pro"
"gpt-5.2-pro-2025-12-11"
"gpt-5.1"
"gpt-5.1-2025-11-13"
"gpt-5.1-codex"
"gpt-5.1-mini"
"gpt-5.1-chat-latest"
"gpt-5"
"gpt-5-mini"
"gpt-5-nano"
"gpt-5-2025-08-07"
"gpt-5-mini-2025-08-07"
"gpt-5-nano-2025-08-07"
"gpt-5-chat-latest"
"gpt-4.1"
"gpt-4.1-mini"
"gpt-4.1-nano"
"gpt-4.1-2025-04-14"
"gpt-4.1-mini-2025-04-14"
"gpt-4.1-nano-2025-04-14"
"o4-mini"
"o4-mini-2025-04-16"
"o3"
"o3-2025-04-16"
"o3-mini"
"o3-mini-2025-01-31"
"o1"
"o1-2024-12-17"
"o1-preview"
"o1-preview-2024-09-12"
"o1-mini"
"o1-mini-2024-09-12"
"gpt-4o"
"gpt-4o-2024-11-20"
"gpt-4o-2024-08-06"
"gpt-4o-2024-05-13"
"gpt-4o-audio-preview"
"gpt-4o-audio-preview-2024-10-01"
"gpt-4o-audio-preview-2024-12-17"
"gpt-4o-audio-preview-2025-06-03"
"gpt-4o-mini-audio-preview"
"gpt-4o-mini-audio-preview-2024-12-17"
"gpt-4o-search-preview"
"gpt-4o-mini-search-preview"
"gpt-4o-search-preview-2025-03-11"
"gpt-4o-mini-search-preview-2025-03-11"
"chatgpt-4o-latest"
"codex-mini-latest"
"gpt-4o-mini"
"gpt-4o-mini-2024-07-18"
"gpt-4-turbo"
"gpt-4-turbo-2024-04-09"
"gpt-4-0125-preview"
"gpt-4-turbo-preview"
"gpt-4-1106-preview"
"gpt-4-vision-preview"
"gpt-4"
"gpt-4-0314"
"gpt-4-0613"
"gpt-4-32k"
"gpt-4-32k-0314"
"gpt-4-32k-0613"
"gpt-3.5-turbo"
"gpt-3.5-turbo-16k"
"gpt-3.5-turbo-0301"
"gpt-3.5-turbo-0613"
"gpt-3.5-turbo-1106"
"gpt-3.5-turbo-0125"
"gpt-3.5-turbo-16k-0613"
"o1-pro" | "o1-pro-2025-03-19" | "o3-pro" | 11 more
"o1-pro"
"o1-pro-2025-03-19"
"o3-pro"
"o3-pro-2025-06-10"
"o3-deep-research"
"o3-deep-research-2025-06-26"
"o4-mini-deep-research"
"o4-mini-deep-research-2025-06-26"
"computer-use-preview"
"computer-use-preview-2025-03-11"
"gpt-5-codex"
"gpt-5-pro"
"gpt-5-pro-2025-10-06"
"gpt-5.1-codex-max"
object: "response"

The object type of this resource - always set to response.

output: Array<ResponseOutputItem>

An array of content items generated by the model.

  • The length and order of items in the output array is dependent on the model's response.
  • Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
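
Where an SDK does not expose an output_text helper, the same value can be derived by filtering for assistant messages and concatenating their output_text parts, rather than indexing output[0]. A sketch with simplified local types:

```typescript
// Simplified local stand-ins for output item shapes (assumptions).
type OutputText = { type: "output_text"; text: string };
type Refusal = { type: "refusal"; refusal: string };
type OutputMessage = { type: "message"; role: "assistant"; content: Array<OutputText | Refusal> };
type OutputItem = OutputMessage | { type: string };

// Collect all output_text content across message items, in order, skipping
// non-message items (reasoning, tool calls) and refusal parts.
function collectOutputText(output: OutputItem[]): string {
  return output
    .filter((item): item is OutputMessage => item.type === "message")
    .flatMap((message) => message.content)
    .filter((part): part is OutputText => part.type === "output_text")
    .map((part) => part.text)
    .join("");
}

const text = collectOutputText([
  { type: "reasoning" },
  { type: "message", role: "assistant", content: [{ type: "output_text", text: "Hello!" }] },
]);
```

This stays correct even when reasoning items or tool calls precede the assistant message in the output array.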
Accepts one of the following:
ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>

The log probability information for the output tokens, returned when requested via the include parameter.

token: string

The token.

bytes: Array<number>

The UTF-8 byte values of the token.

logprob: number

The log probability of the token.

top_logprobs: Array<TopLogprob>

The most likely tokens at this position, with their log probabilities.

token: string

The token.

bytes: Array<number>

The UTF-8 byte values of the token.

logprob: number

The log probability of the token.

ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
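Since arguments is a JSON string, a caller parses it before dispatching. A minimal sketch, where get_weather and its handler are hypothetical:

```typescript
// Illustrative shape of a function_call output item (per the schema above).
interface FunctionCallItem {
  type: "function_call";
  call_id: string;
  name: string;
  arguments: string; // JSON-encoded by the model
}

// Parse the arguments, dispatch by tool name, and return a payload keyed by
// call_id, ready to send back as a function_call_output input item.
function handleFunctionCall(item: FunctionCallItem): { call_id: string; output: string } {
  const args = JSON.parse(item.arguments) as Record<string, unknown>;
  const result =
    item.name === "get_weather"
      ? { temperature_c: 21, city: args.city } // placeholder handler
      : { error: `unknown tool ${item.name}` };
  return { call_id: item.call_id, output: JSON.stringify(result) };
}
```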
ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to perform: a click, double click, drag, keypress, move, screenshot, scroll, type, or wait.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.

ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItem { id, encrypted_content, type, created_by }

A compaction item generated by the v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by?: string

The identifier of the actor that created the item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

ResponseFunctionShellToolCall { id, action, call_id, 3 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

The shell commands to run.

max_output_length: number | null

Optional maximum number of characters to return from each command.

timeout_ms: number | null

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseFunctionShellToolCallOutput { id, call_id, max_output_length, 4 more }

The output of a shell tool call that was emitted.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number | null

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: Array<Output>

An array of shell call output contents.

outcome: Timeout { type } | Exit { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by?: string

The identifier of the actor that created the item.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by?: string

The identifier of the actor that created the item.
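The Timeout / Exit union above discriminates on type, so a switch covers both outcomes exhaustively. An illustrative helper (not part of the API):

```typescript
// Illustrative shape of the outcome union documented above.
type ShellOutcome = { type: "timeout" } | { type: "exit"; exit_code: number };

// Reduce an outcome to a human-readable status string.
function describeOutcome(outcome: ShellOutcome): string {
  switch (outcome.type) {
    case "timeout":
      return "timed out";
    case "exit":
      return outcome.exit_code === 0 ? "succeeded" : `failed (exit ${outcome.exit_code})`;
  }
}
```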

ResponseApplyPatchToolCall { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by?: string

The ID of the entity that created this tool call.
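The operation union likewise discriminates on type; a sketch of dispatching over the three apply_patch operations (the summarize helper is illustrative, not part of the API):

```typescript
// Illustrative shapes of the create_file / delete_file / update_file
// operations documented above.
type ApplyPatchOp =
  | { type: "create_file"; path: string; diff: string }
  | { type: "delete_file"; path: string }
  | { type: "update_file"; path: string; diff: string };

// Turn an operation into a one-line summary, e.g. for logging.
function summarize(op: ApplyPatchOp): string {
  switch (op.type) {
    case "create_file":
      return `create ${op.path}`;
    case "delete_file":
      return `delete ${op.path}`;
    case "update_file":
      return `update ${op.path}`;
  }
}
```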

ResponseApplyPatchToolCallOutput { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by?: string

The ID of the entity that created this tool call output.

output?: string | null

Optional textual output returned by the apply patch tool.

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

parallel_tool_calls: boolean

Whether to allow the model to run tool calls in parallel.

temperature: number | null

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

minimum: 0
maximum: 2
tool_choice: ToolChoiceOptions | ToolChoiceAllowed { mode, tools, type } | ToolChoiceTypes { type } | 5 more

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.

Accepts one of the following:
ToolChoiceOptions = "none" | "auto" | "required"

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

Accepts one of the following:
"none"
"auto"
"required"
ToolChoiceAllowed { mode, tools, type }

Constrains the tools available to the model to a pre-defined set.

mode: "auto" | "required"

Constrains the tools available to the model to a pre-defined set.

auto allows the model to pick from among the allowed tools and generate a message.

required requires the model to call one or more of the allowed tools.

Accepts one of the following:
"auto"
"required"
tools: Array<Record<string, unknown>>

A list of tool definitions that the model should be allowed to call.

For the Responses API, the list of tool definitions might look like:

[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]
type: "allowed_tools"

Allowed tool configuration type. Always allowed_tools.

ToolChoiceTypes { type }

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.

type: "file_search" | "web_search_preview" | "computer_use_preview" | 3 more

The type of hosted tool the model should use. Learn more about built-in tools.

Allowed values are:

  • file_search
  • web_search_preview
  • web_search_preview_2025_03_11
  • computer_use_preview
  • code_interpreter
  • image_generation
Accepts one of the following:
"file_search"
"web_search_preview"
"computer_use_preview"
"web_search_preview_2025_03_11"
"image_generation"
"code_interpreter"
ToolChoiceFunction { name, type }

Use this option to force the model to call a specific function.

name: string

The name of the function to call.

type: "function"

For function calling, the type is always function.

ToolChoiceMcp { server_label, type, name }

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: string

The label of the MCP server to use.

type: "mcp"

For MCP tools, the type is always mcp.

name?: string | null

The name of the tool to call on the server.

ToolChoiceCustom { name, type }

Use this option to force the model to call a specific custom tool.

name: string

The name of the custom tool to call.

type: "custom"

For custom tool calling, the type is always custom.

ToolChoiceApplyPatch { type }

Forces the model to call the apply_patch tool when executing a tool call.

type: "apply_patch"

The tool to call. Always apply_patch.

ToolChoiceShell { type }

Forces the model to call the shell tool when a tool call is required.

type: "shell"

The tool to call. Always shell.
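Two illustrative tool_choice values, one plain ToolChoiceOptions string and one ToolChoiceFunction forcing a specific call (get_weather is a hypothetical function name):

```typescript
// Let the model decide whether to call tools at all.
const autoChoice = "auto" as const; // ToolChoiceOptions

// Force the model to call one specific function by name.
const forcedChoice = { type: "function" as const, name: "get_weather" }; // ToolChoiceFunction
```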

tools: Array<Tool>

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

We support the following categories of tools:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
  • MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Accepts one of the following:
FunctionTool { name, parameters, strict, 2 more }

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: string

The name of the function to call.

parameters: Record<string, unknown> | null

A JSON schema object describing the parameters of the function.

strict: boolean | null

Whether to enforce strict parameter validation. Default true.

type: "function"

The type of the function tool. Always function.

description?: string | null

A description of the function. Used by the model to determine whether or not to call the function.

FileSearchTool { type, vector_store_ids, filters, 2 more }

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: "file_search"

The type of the file search tool. Always file_search.

vector_store_ids: Array<string>

The IDs of the vector stores to search.

filters?: ComparisonFilter { key, type, value } | CompoundFilter { filters, type } | null

A filter to apply.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 3 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
CompoundFilter { filters, type }

Combine multiple filters using and or or.

filters: Array<ComparisonFilter { key, type, value } | unknown>

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 3 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
unknown
type: "and" | "or"

Type of operation: and or or.

Accepts one of the following:
"and"
"or"
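For intuition, a ComparisonFilter evaluates against a file's attributes roughly like the local sketch below; this is an illustration (ordering operators treated as numeric-only here), not the server's actual implementation:

```typescript
type Attr = string | number | boolean;

// Illustrative shape of a ComparisonFilter (per the schema above; in/nin omitted).
interface ComparisonFilter {
  key: string;
  type: "eq" | "ne" | "gt" | "gte" | "lt" | "lte";
  value: Attr;
}

// Evaluate a single filter against one file's attribute map.
function matches(filter: ComparisonFilter, attrs: Record<string, Attr>): boolean {
  const v = attrs[filter.key];
  if (filter.type === "eq") return v === filter.value;
  if (filter.type === "ne") return v !== filter.value;
  // Ordering operators: numeric-only in this sketch.
  if (typeof v !== "number" || typeof filter.value !== "number") return false;
  if (filter.type === "gt") return v > filter.value;
  if (filter.type === "gte") return v >= filter.value;
  if (filter.type === "lt") return v < filter.value;
  return v <= filter.value; // "lte"
}
```

A CompoundFilter would then combine the results of matches across its child filters with and / or.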
max_num_results?: number

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options?: RankingOptions { hybrid_search, ranker, score_threshold }

Ranking options for search.

ranker?: "auto" | "default-2024-11-15"

The ranker to use for the file search.

Accepts one of the following:
"auto"
"default-2024-11-15"
score_threshold?: number

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.

ComputerTool { display_height, display_width, environment, type }

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: number

The height of the computer display.

display_width: number

The width of the computer display.

environment: "windows" | "mac" | "linux" | 2 more

The type of computer environment to control.

Accepts one of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: "computer_use_preview"

The type of the computer use tool. Always computer_use_preview.

WebSearchTool { type, filters, search_context_size, user_location }

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: "web_search" | "web_search_2025_08_26"

The type of the web search tool. One of web_search or web_search_2025_08_26.

Accepts one of the following:
"web_search"
"web_search_2025_08_26"
filters?: Filters | null

Filters for the search.

allowed_domains?: Array<string> | null

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The approximate location of the user.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

type?: "approximate"

The type of location approximation. Always approximate.

Mcp { server_label, type, allowed_tools, 6 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

Accepts one of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

Accepts one of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object to specify which tools always require approval.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.
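
An MCP tool entry combining the fields above might look like the following sketch. The server_label field and the token value are illustrative assumptions not shown in this excerpt; the connector_id and require_approval shapes come from the documented fields.

```typescript
// Illustrative shape of an MCP tool entry, mirroring the fields documented
// above (connector_id, authorization, require_approval). The server_label
// field and token value are assumptions for illustration.
type McpApprovalFilter = {
  always?: { read_only?: boolean; tool_names?: string[] };
  never?: { read_only?: boolean; tool_names?: string[] };
};

type McpTool = {
  type: "mcp";
  server_label: string;          // assumed label field for this sketch
  connector_id?: string;         // e.g. "connector_dropbox"
  server_url?: string;           // one of server_url or connector_id required
  authorization?: string;        // OAuth token your application obtained
  require_approval?: McpApprovalFilter | "always" | "never" | null;
};

// A connector-backed MCP tool that never requires approval for
// read-only tools (those annotated with readOnlyHint).
const dropboxTool: McpTool = {
  type: "mcp",
  server_label: "dropbox",
  connector_id: "connector_dropbox",
  authorization: "<oauth-access-token>",
  require_approval: { never: { read_only: true } },
};
```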

CodeInterpreter { container, type }

A tool that runs Python code to help generate a response to a prompt.

container: string | CodeInterpreterToolAuto { type, file_ids, memory_limit }

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

Accepts one of the following:
string
CodeInterpreterToolAuto { type, file_ids, memory_limit }

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: "auto"

Always auto.

file_ids?: Array<string>

An optional list of uploaded files to make available to your code.

memory_limit?: "1g" | "4g" | "16g" | "64g" | null

The memory limit for the code interpreter container.

Accepts one of the following:
"1g"
"4g"
"16g"
"64g"
type: "code_interpreter"

The type of the code interpreter tool. Always code_interpreter.
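A code interpreter tool entry using the auto container can be sketched as follows; the file ID is a placeholder.

```typescript
// Illustrative code_interpreter tool entry with an auto container,
// uploaded files, and a memory limit, per the fields documented above.
type CodeInterpreterTool = {
  type: "code_interpreter";
  container:
    | string // an existing container ID
    | {
        type: "auto";
        file_ids?: string[];
        memory_limit?: "1g" | "4g" | "16g" | "64g" | null;
      };
};

const ciTool: CodeInterpreterTool = {
  type: "code_interpreter",
  container: {
    type: "auto",
    file_ids: ["file-abc123"], // placeholder file ID
    memory_limit: "4g",
  },
};
```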

ImageGeneration { type, action, background, 9 more }

A tool that generates images using the GPT image models.

type: "image_generation"

The type of the image generation tool. Always image_generation.

action?: "generate" | "edit" | "auto"

Whether to generate a new image or edit an existing image. Default: auto.

Accepts one of the following:
"generate"
"edit"
"auto"
background?: "transparent" | "opaque" | "auto"

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

Accepts one of the following:
"transparent"
"opaque"
"auto"
input_fidelity?: "high" | "low" | null

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.

Accepts one of the following:
"high"
"low"
input_image_mask?: InputImageMask { file_id, image_url }

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id?: string

File ID for the mask image.

image_url?: string

Base64-encoded mask image.

model?: (string & {}) | "gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
(string & {})
"gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation?: "auto" | "low"

Moderation level for the generated image. Default: auto.

Accepts one of the following:
"auto"
"low"
output_compression?: number

Compression level for the output image. Default: 100.

minimum0
maximum100
output_format?: "png" | "webp" | "jpeg"

The output format of the generated image. One of png, webp, or jpeg. Default: png.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_images?: number

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum0
maximum3
quality?: "low" | "medium" | "high" | "auto"

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size?: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
LocalShell { type }

A tool that allows the model to execute shell commands in a local environment.

type: "local_shell"

The type of the local shell tool. Always local_shell.

FunctionShellTool { type }

A tool that allows the model to execute shell commands.

type: "shell"

The type of the shell tool. Always shell.

CustomTool { name, type, description, format }

A custom tool that processes input using a specified format. Learn more about custom tools

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

description?: string

Optional description of the custom tool, used to provide more context.

format?: Text { type } | Grammar { definition, syntax, type }

The input format for the custom tool. Default is unconstrained text.

Accepts one of the following:
Text { type }

Unconstrained free-form text.

type: "text"

Unconstrained text format. Always text.

Grammar { definition, syntax, type }

A grammar defined by the user.

definition: string

The grammar definition.

syntax: "lark" | "regex"

The syntax of the grammar definition. One of lark or regex.

Accepts one of the following:
"lark"
"regex"
type: "grammar"

Grammar format. Always grammar.
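A custom tool constrained by a Lark grammar can be sketched by combining the fields above; the tool name, description, and grammar definition are illustrative.

```typescript
// Illustrative custom tool using the grammar format documented above.
// The tool name and grammar definition are hypothetical examples.
type CustomToolFormat =
  | { type: "text" }
  | { type: "grammar"; syntax: "lark" | "regex"; definition: string };

type CustomTool = {
  type: "custom";
  name: string;
  description?: string;
  format?: CustomToolFormat;
};

const mathTool: CustomTool = {
  type: "custom",
  name: "do_math", // hypothetical tool name
  description: "Evaluates simple sums",
  format: {
    type: "grammar",
    syntax: "lark",
    // A minimal Lark grammar accepting "NUMBER + NUMBER" input.
    definition: 'start: NUMBER "+" NUMBER\n%import common.NUMBER',
  },
};
```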

WebSearchPreviewTool { type, search_context_size, user_location }

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: "web_search_preview" | "web_search_preview_2025_03_11"

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

Accepts one of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The user's location.

type: "approximate"

The type of location approximation. Always approximate.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.
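A web search preview tool with an approximate user location can be sketched from the fields above:

```typescript
// Illustrative web_search_preview tool entry with an approximate user
// location, per the UserLocation fields documented above.
type WebSearchPreviewTool = {
  type: "web_search_preview" | "web_search_preview_2025_03_11";
  search_context_size?: "low" | "medium" | "high";
  user_location?: {
    type: "approximate";
    city?: string | null;
    country?: string | null;  // two-letter ISO code
    region?: string | null;
    timezone?: string | null; // IANA timezone
  } | null;
};

const webSearch: WebSearchPreviewTool = {
  type: "web_search_preview",
  search_context_size: "medium",
  user_location: {
    type: "approximate",
    city: "San Francisco",
    country: "US",
    region: "California",
    timezone: "America/Los_Angeles",
  },
};
```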

ApplyPatchTool { type }

Allows the assistant to create, delete, or update files using unified diffs.

type: "apply_patch"

The type of the tool. Always apply_patch.

top_p: number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

minimum0
maximum1
background?: boolean | null

Whether to run the model response in the background. Learn more.

completed_at?: number | null

Unix timestamp (in seconds) of when this Response was completed. Only present when the status is completed.

conversation?: Conversation | null

The conversation that this response belongs to. Input items and output items from this response were automatically added to this conversation.

id: string

The unique ID of the conversation that this response was associated with.

max_output_tokens?: number | null

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls?: number | null

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

previous_response_id?: string | null

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.

prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

id: string

The unique identifier of the prompt template to use.

variables?: Record<string, string | ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } > | null

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

Accepts one of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

version?: string | null

Optional version of the prompt template.
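A prompt template reference mixing variable types can be sketched from the fields above; the prompt ID, variable names, and file ID are placeholders.

```typescript
// Illustrative prompt template reference with a string variable and an
// input_image variable, per the fields documented above. IDs are placeholders.
type PromptVariable =
  | string
  | { type: "input_text"; text: string }
  | {
      type: "input_image";
      detail: "low" | "high" | "auto";
      file_id?: string | null;
      image_url?: string | null;
    };

type ResponsePrompt = {
  id: string;
  version?: string | null;
  variables?: Record<string, PromptVariable> | null;
};

const prompt: ResponsePrompt = {
  id: "pmpt_abc123", // placeholder template ID
  version: "2",
  variables: {
    customer_name: "Ada",
    screenshot: { type: "input_image", detail: "auto", file_id: "file-xyz789" },
  },
};
```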

prompt_cache_key?: string

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.

prompt_cache_retention?: "in-memory" | "24h" | null

The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.

Accepts one of the following:
"in-memory"
"24h"
reasoning?: Reasoning { effort, generate_summary, summary } | null

gpt-5 and o-series models only

Configuration options for reasoning models.

effort?: ReasoningEffort | null

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
Deprecatedgenerate_summary?: "auto" | "concise" | "detailed" | null

Deprecated: use summary instead.

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

Accepts one of the following:
"auto"
"concise"
"detailed"
summary?: "auto" | "concise" | "detailed" | null

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

Accepts one of the following:
"auto"
"concise"
"detailed"
safety_identifier?: string

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.

service_tier?: "auto" | "default" | "flex" | 2 more | null

Specifies the processing type used for serving the request.

  • If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
  • If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
  • If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
  • When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.

Accepts one of the following:
"auto"
"default"
"flex"
"scale"
"priority"

status?: "completed" | "failed" | "in_progress" | 3 more

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.

Accepts one of the following:
"completed"
"failed"
"in_progress"
"cancelled"
"queued"
"incomplete"
text?: ResponseTextConfig { format, verbosity }

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more in the text generation and Structured Outputs guides.

format?: ResponseFormatText { type } | ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more } | ResponseFormatJSONObject { type }

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
ResponseFormatText { type }

Default response format. Used to generate text responses.

type: "text"

The type of response format being defined. Always text.

ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more }

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

name: string

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Record<string, unknown>

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

type: "json_schema"

The type of response format being defined. Always json_schema.

description?: string

A description of what the response format is for, used by the model to determine how to respond in the format.

strict?: boolean | null

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

ResponseFormatJSONObject { type }

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: "json_object"

The type of response format being defined. Always json_object.

verbosity?: "low" | "medium" | "high" | null

Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.

Accepts one of the following:
"low"
"medium"
"high"
top_logprobs?: number | null

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

minimum0
maximum20
truncation?: "auto" | "disabled" | null

The truncation strategy to use for the model response.

  • auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
  • disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Accepts one of the following:
"auto"
"disabled"
usage?: ResponseUsage { input_tokens, input_tokens_details, output_tokens, 2 more }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

total_tokens: number

The total number of tokens used.
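The usage fields relate arithmetically: total_tokens is the sum of input and output tokens, while cached and reasoning tokens are sub-breakdowns of each side. The numbers below are made up for illustration.

```typescript
// Illustrative ResponseUsage object. total_tokens = input_tokens +
// output_tokens; cached_tokens and reasoning_tokens are sub-breakdowns.
type ResponseUsage = {
  input_tokens: number;
  input_tokens_details: { cached_tokens: number };
  output_tokens: number;
  output_tokens_details: { reasoning_tokens: number };
  total_tokens: number;
};

const usage: ResponseUsage = {
  input_tokens: 120,
  input_tokens_details: { cached_tokens: 64 },
  output_tokens: 380,
  output_tokens_details: { reasoning_tokens: 256 },
  total_tokens: 500, // 120 + 380
};

// Input tokens that were not served from the prompt cache can be derived
// from the breakdown:
const uncachedInput =
  usage.input_tokens - usage.input_tokens_details.cached_tokens; // 56
```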

Deprecateduser?: string

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.

sequence_number: number

The sequence number for this event.

type: "response.created"

The type of the event. Always response.created.

ResponseErrorEvent { code, message, param, 2 more }

Emitted when an error occurs.

code: string | null

The error code.

message: string

The error message.

param: string | null

The error parameter.

sequence_number: number

The sequence number of this event.

type: "error"

The type of the event. Always error.

ResponseFileSearchCallCompletedEvent { item_id, output_index, sequence_number, type }

Emitted when a file search call is completed (results found).

item_id: string

The ID of the output item that the file search call was initiated on.

output_index: number

The index of the output item that the file search call was initiated on.

sequence_number: number

The sequence number of this event.

type: "response.file_search_call.completed"

The type of the event. Always response.file_search_call.completed.

ResponseFileSearchCallInProgressEvent { item_id, output_index, sequence_number, type }

Emitted when a file search call is initiated.

item_id: string

The ID of the output item that the file search call was initiated on.

output_index: number

The index of the output item that the file search call was initiated on.

sequence_number: number

The sequence number of this event.

type: "response.file_search_call.in_progress"

The type of the event. Always response.file_search_call.in_progress.

ResponseFileSearchCallSearchingEvent { item_id, output_index, sequence_number, type }

Emitted when a file search is currently searching.

item_id: string

The ID of the output item that the file search call was initiated on.

output_index: number

The index of the output item that the file search call is searching.

sequence_number: number

The sequence number of this event.

type: "response.file_search_call.searching"

The type of the event. Always response.file_search_call.searching.

ResponseFunctionCallArgumentsDeltaEvent { delta, item_id, output_index, 2 more }

Emitted when there is a partial function-call arguments delta.

delta: string

The function-call arguments delta that is added.

item_id: string

The ID of the output item that the function-call arguments delta is added to.

output_index: number

The index of the output item that the function-call arguments delta is added to.

sequence_number: number

The sequence number of this event.

type: "response.function_call_arguments.delta"

The type of the event. Always response.function_call_arguments.delta.

ResponseFunctionCallArgumentsDoneEvent { arguments, item_id, name, 3 more }

Emitted when function-call arguments are finalized.

arguments: string

The function-call arguments.

item_id: string

The ID of the item.

name: string

The name of the function that was called.

output_index: number

The index of the output item.

sequence_number: number

The sequence number of this event.

type: "response.function_call_arguments.done"
ResponseInProgressEvent { response, sequence_number, type }

Emitted when the response is in progress.

response: Response { id, created_at, error, 29 more }

The response that is in progress.

id: string

Unique identifier for this Response.

created_at: number

Unix timestamp (in seconds) of when this Response was created.

error: ResponseError { code, message } | null

An error object returned when the model fails to generate a Response.

code: "server_error" | "rate_limit_exceeded" | "invalid_prompt" | 15 more

The error code for the response.

Accepts one of the following:
"server_error"
"rate_limit_exceeded"
"invalid_prompt"
"vector_store_timeout"
"invalid_image"
"invalid_image_format"
"invalid_base64_image"
"invalid_image_url"
"image_too_large"
"image_too_small"
"image_parse_error"
"image_content_policy_violation"
"invalid_image_mode"
"image_file_too_large"
"unsupported_image_media_type"
"empty_image_file"
"failed_to_download_image"
"image_file_not_found"
message: string

A human-readable description of the error.

incomplete_details: IncompleteDetails | null

Details about why the response is incomplete.

reason?: "max_output_tokens" | "content_filter"

The reason why the response is incomplete.

Accepts one of the following:
"max_output_tokens"
"content_filter"
instructions: string | Array<ResponseInputItem> | null

A system (or developer) message inserted into the model's context.

When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.

Accepts one of the following:
string
EasyInputMessage { content, role, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: string | ResponseInputMessageContentList

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
string
ResponseInputMessageContentList = Array<ResponseInputContent>

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "assistant" | "system" | "developer"

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type?: "message"

The type of the message input. Always message.
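An EasyInputMessage mixing text and image content can be sketched from the variants above; the file ID is a placeholder.

```typescript
// Illustrative EasyInputMessage whose content is a list mixing input_text
// and input_image items, per the content-list variants documented above.
const message = {
  type: "message" as const,
  role: "user" as const,
  content: [
    { type: "input_text" as const, text: "What is in this image?" },
    {
      type: "input_image" as const,
      detail: "auto" as const,
      file_id: "file-img001", // placeholder file ID
    },
  ],
};
```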

Message { content, role, status, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role.

content: ResponseInputMessageContentList

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "system" | "developer"

The role of the message input. One of user, system, or developer.

Accepts one of the following:
"user"
"system"
"developer"
status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type?: "message"

The type of the message input. Always set to message.

ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>

token: string

bytes: Array<number>

logprob: number

top_logprobs: Array<TopLogprob>

token: string

bytes: Array<number>

logprob: number

ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file, a value between 0 and 1.

formatfloat
text?: string

The text that was retrieved from the file.

ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to be performed, one of the variants below.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.
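
Consumers of a computer call typically switch on the action's `type` discriminant. A minimal sketch (types abridged to four of the nine variants; `describeAction` is a hypothetical logging helper, not part of the SDK):

```typescript
// Abridged action union from the reference above; the full union has nine variants.
type ComputerAction =
  | { type: "click"; button: "left" | "right" | "wheel" | "back" | "forward"; x: number; y: number }
  | { type: "double_click"; x: number; y: number }
  | { type: "type"; text: string }
  | { type: "wait" };

// Turn an action into a human-readable trace line for logging.
function describeAction(action: ComputerAction): string {
  switch (action.type) {
    case "click":
      return `click ${action.button} at (${action.x}, ${action.y})`;
    case "double_click":
      return `double click at (${action.x}, ${action.y})`;
    case "type":
      return `type ${JSON.stringify(action.text)}`;
    case "wait":
      return "wait";
  }
}
```

Because every variant carries a literal `type`, the switch narrows each case and the compiler flags any variant left unhandled.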

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.

ComputerCallOutput { call_id, output, type, 3 more }

The output of a computer tool call.

call_id: string

The ID of the computer tool call that produced the output.

maxLength: 64
minLength: 1
output: ResponseComputerToolCallOutputScreenshot { type, file_id, image_url }

A computer screenshot image used with the computer use tool.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

file_id?: string

The identifier of an uploaded file that contains the screenshot.

image_url?: string

The URL of the screenshot image.

type: "computer_call_output"

The type of the computer tool call output. Always computer_call_output.

id?: string | null

The ID of the computer tool call output.

acknowledged_safety_checks?: Array<AcknowledgedSafetyCheck> | null

The safety checks reported by the API that have been acknowledged by the developer.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
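
When replying to a computer_call, the developer echoes any reviewed pending_safety_checks back as acknowledged_safety_checks on the computer_call_output item. A sketch (`buildComputerCallOutput` is a hypothetical helper; types are trimmed to the fields shown above):

```typescript
interface SafetyCheck { id: string; code?: string | null; message?: string | null }

interface ComputerCallOutput {
  type: "computer_call_output";
  call_id: string;
  output: { type: "computer_screenshot"; image_url?: string; file_id?: string };
  acknowledged_safety_checks?: SafetyCheck[] | null;
}

// Build the output item for a computer_call, acknowledging the pending
// safety checks the developer has reviewed.
function buildComputerCallOutput(
  callId: string,
  screenshotUrl: string,
  reviewedChecks: SafetyCheck[],
): ComputerCallOutput {
  return {
    type: "computer_call_output",
    call_id: callId,
    output: { type: "computer_screenshot", image_url: screenshotUrl },
    acknowledged_safety_checks: reviewedChecks,
  };
}
```
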
Accepts one of the following:
ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
FunctionCallOutput { call_id, output, type, 2 more }

The output of a function tool call.

call_id: string

The unique ID of the function tool call generated by the model.

maxLength: 64
minLength: 1
output: string | ResponseFunctionCallOutputItemList

Text, image, or file output of the function tool call.

Accepts one of the following:
string
ResponseFunctionCallOutputItemList = Array<ResponseFunctionCallOutputItem>

An array of content outputs (text, image, file) for the function tool call.

Accepts one of the following:
ResponseInputTextContent { text, type }

A text input to the model.

text: string

The text input to the model.

maxLength: 10485760
type: "input_text"

The type of the input item. Always input_text.

ResponseInputImageContent { type, detail, file_id, image_url }

An image input to the model. Learn about image inputs

type: "input_image"

The type of the input item. Always input_image.

detail?: "low" | "high" | "auto" | null

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

maxLength: 20971520
ResponseInputFileContent { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string | null

The base64-encoded data of the file to be sent to the model.

maxLength: 33554432
file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string | null

The URL of the file to be sent to the model.

filename?: string | null

The name of the file to be sent to the model.

type: "function_call_output"

The type of the function tool call output. Always function_call_output.

id?: string | null

The unique ID of the function tool call output. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
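
A function_call item carries its arguments as a JSON string, and the reply is a function_call_output keyed by the same call_id. A minimal round-trip sketch (the `add` function and the `functions` registry are hypothetical application code, not part of the API):

```typescript
interface FunctionToolCall { type: "function_call"; call_id: string; name: string; arguments: string }
interface FunctionCallOutput { type: "function_call_output"; call_id: string; output: string }

// Hypothetical local registry of callable functions.
const functions: Record<string, (args: any) => unknown> = {
  add: ({ a, b }: { a: number; b: number }) => a + b,
};

// Run a function_call item and wrap the result as a function_call_output,
// reusing the same call_id so the model can match call and result.
function runFunctionCall(call: FunctionToolCall): FunctionCallOutput {
  const fn = functions[call.name];
  if (!fn) throw new Error(`unknown function: ${call.name}`);
  const result = fn(JSON.parse(call.arguments)); // arguments is a JSON string
  return { type: "function_call_output", call_id: call.call_id, output: JSON.stringify(result) };
}
```
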
ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItemParam { encrypted_content, type, id }

A compaction item generated by the v1/responses/compact API.

encrypted_content: string

The encrypted content of the compaction summary.

maxLength: 10485760
type: "compaction"

The type of the item. Always compaction.

id?: string | null

The ID of the compaction item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.
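
Since result holds the image base64-encoded, saving it is one decode away. A Node.js sketch (`decodeImageResult` is a hypothetical helper):

```typescript
interface ImageGenerationCall {
  type: "image_generation_call";
  id: string;
  status: "in_progress" | "completed" | "generating" | "failed";
  result: string | null; // base64-encoded image; null until available
}

// Decode the base64 result of a completed image generation call to raw bytes,
// or return null while the call is still in flight or has failed.
function decodeImageResult(call: ImageGenerationCall): Buffer | null {
  if (call.status !== "completed" || call.result === null) return null;
  return Buffer.from(call.result, "base64");
}
```
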

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
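
The outputs array mixes logs and image entries and may be null. A sketch for pulling out just the log text (`collectLogs` is a hypothetical helper):

```typescript
type CodeInterpreterOutput = { type: "logs"; logs: string } | { type: "image"; url: string };

// Concatenate all log output from a code interpreter call, skipping image
// outputs and tolerating a null outputs array.
function collectLogs(outputs: CodeInterpreterOutput[] | null): string {
  if (!outputs) return "";
  return outputs
    .filter((o): o is { type: "logs"; logs: string } => o.type === "logs")
    .map((o) => o.logs)
    .join("\n");
}
```
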

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.
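
Executing the exec action maps naturally onto Node's child_process, passing through env, timeout_ms, and working_directory. A sketch, assuming a POSIX host (`runLocalShellAction` is a hypothetical helper; stderr capture and error handling are omitted):

```typescript
import { execFileSync } from "node:child_process";

interface LocalShellAction {
  type: "exec";
  command: string[];
  env: Record<string, string>;
  timeout_ms?: number | null;
  working_directory?: string | null;
}

// Run a local shell exec action and return its stdout as a string,
// honoring the optional timeout and working directory.
function runLocalShellAction(action: LocalShellAction): string {
  const [cmd, ...args] = action.command;
  return execFileSync(cmd, args, {
    env: { ...process.env, ...action.env },
    timeout: action.timeout_ms ?? undefined,
    cwd: action.working_directory ?? undefined,
    encoding: "utf8",
  });
}
```

The string result would then be sent back as the output field of a local_shell_call_output item.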

LocalShellCallOutput { id, output, type, status }

The output of a local shell tool call.

id: string

The unique ID of the local shell tool call generated by the model.

output: string

A JSON string of the output of the local shell tool call.

type: "local_shell_call_output"

The type of the local shell tool call output. Always local_shell_call_output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCall { action, call_id, type, 2 more }

A tool representing a request to execute one or more shell commands.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

Ordered shell commands for the execution environment to run.

max_output_length?: number | null

Maximum number of UTF-8 characters to capture from combined stdout and stderr output.

timeout_ms?: number | null

Maximum wall-clock time in milliseconds to allow the shell commands to run.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
type: "shell_call"

The type of the item. Always shell_call.

id?: string | null

The unique ID of the shell tool call. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCallOutput { call_id, output, type, 3 more }

The streamed output items emitted by a shell tool call.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
output: Array<ResponseFunctionShellCallOutputContent { outcome, stderr, stdout } >

Captured chunks of stdout and stderr output, along with their associated outcomes.

outcome: Timeout { type } | Exit { exit_code, type }

The exit or timeout outcome associated with this shell call.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

The exit code returned by the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

Captured stderr output for the shell call.

maxLength: 10485760
stdout: string

Captured stdout output for the shell call.

maxLength: 10485760
type: "shell_call_output"

The type of the item. Always shell_call_output.

id?: string | null

The unique ID of the shell tool call output. Populated when this item is returned via API.

max_output_length?: number | null

The maximum number of UTF-8 characters captured for this shell call's combined output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call output.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
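
The outcome union distinguishes a timeout from a normal exit. A small handler sketch (`summarizeOutcome` is a hypothetical helper):

```typescript
type ShellOutcome = { type: "timeout" } | { type: "exit"; exit_code: number };

// Summarize a shell call outcome: timeout, success, or a failing exit code.
function summarizeOutcome(outcome: ShellOutcome): string {
  if (outcome.type === "timeout") return "timed out";
  return outcome.exit_code === 0 ? "succeeded" : `exited with code ${outcome.exit_code}`;
}
```
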
ApplyPatchCall { call_id, operation, status, 2 more }

A tool call representing a request to create, delete, or update files using diff patches.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

The specific create, delete, or update instruction for the apply_patch tool call.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction for creating a new file via the apply_patch tool.

diff: string

Unified diff content to apply when creating the file.

maxLength: 10485760
path: string

Path of the file to create relative to the workspace root.

minLength: 1
type: "create_file"

The operation type. Always create_file.

DeleteFile { path, type }

Instruction for deleting an existing file via the apply_patch tool.

path: string

Path of the file to delete relative to the workspace root.

minLength: 1
type: "delete_file"

The operation type. Always delete_file.

UpdateFile { diff, path, type }

Instruction for updating an existing file via the apply_patch tool.

diff: string

Unified diff content to apply to the existing file.

maxLength: 10485760
path: string

Path of the file to update relative to the workspace root.

minLength: 1
type: "update_file"

The operation type. Always update_file.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

id?: string | null

The unique ID of the apply patch tool call. Populated when this item is returned via API.
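
The operation union is discriminated on type, so a handler can dispatch exhaustively over create, delete, and update (`describeOperation` is a hypothetical review-log helper):

```typescript
type PatchOperation =
  | { type: "create_file"; path: string; diff: string }
  | { type: "delete_file"; path: string }
  | { type: "update_file"; path: string; diff: string };

// Render an apply_patch operation as a one-line summary for a review log.
function describeOperation(op: PatchOperation): string {
  switch (op.type) {
    case "create_file": return `create ${op.path}`;
    case "delete_file": return `delete ${op.path}`;
    case "update_file": return `update ${op.path}`;
  }
}
```
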

ApplyPatchCallOutput { call_id, status, type, 2 more }

The streamed output emitted by an apply patch tool call.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

id?: string | null

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

output?: string | null

Optional human-readable log text from the apply patch tool (e.g., patch results or errors).

maxLength: 10485760
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

McpApprovalResponse { approval_request_id, approve, type, 2 more }

A response to an MCP approval request.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

id?: string | null

The unique ID of the approval response.

reason?: string | null

Optional reason for the decision.
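
An approval response answers a request by echoing its ID in approval_request_id. A sketch where a hypothetical allow-list makes the decision (`answerApproval` and `allowedTools` are assumptions, not part of the API):

```typescript
interface McpApprovalRequest { type: "mcp_approval_request"; id: string; name: string; server_label: string; arguments: string }
interface McpApprovalResponse { type: "mcp_approval_response"; approval_request_id: string; approve: boolean; reason?: string | null }

// Hypothetical policy: only tools on this allow-list may run.
const allowedTools = new Set(["search", "fetch"]);

// Build the mcp_approval_response item for a given approval request.
function answerApproval(req: McpApprovalRequest): McpApprovalResponse {
  const approve = allowedTools.has(req.name);
  return {
    type: "mcp_approval_response",
    approval_request_id: req.id,
    approve,
    reason: approve ? null : `tool ${req.name} is not on the allow-list`,
  };
}
```
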

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
ResponseCustomToolCallOutput { call_id, output, type, id }

The output of a custom tool call from your code, being sent back to the model.

call_id: string

The call ID, used to map this custom tool call output to a custom tool call.

output: string | Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >

The output from the custom tool call generated by your code. Can be a string or a list of output content.

Accepts one of the following:
string
Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

type: "custom_tool_call_output"

The type of the custom tool call output. Always custom_tool_call_output.

id?: string

The unique ID of the custom tool call output in the OpenAI platform.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

ItemReference { id, type }

An internal identifier for an item to reference.

id: string

The ID of the item to reference.

type?: "item_reference" | null

The type of item to reference. Always item_reference.

metadata: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
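
Those limits can be checked client-side before sending a request. A sketch (`isValidMetadata` is a hypothetical validator, not an SDK function):

```typescript
// Check a metadata object against the documented limits: at most 16 pairs,
// keys up to 64 characters, values up to 512 characters.
function isValidMetadata(metadata: Record<string, string>): boolean {
  const entries = Object.entries(metadata);
  if (entries.length > 16) return false;
  return entries.every(([key, value]) => key.length <= 64 && value.length <= 512);
}
```
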

model: (string & {}) | ChatModel | "o1-pro" | 13 more

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

Accepts one of the following:
(string & {})
ChatModel = "gpt-5.2" | "gpt-5.2-2025-12-11" | "gpt-5.2-chat-latest" | 69 more
Accepts one of the following:
"gpt-5.2"
"gpt-5.2-2025-12-11"
"gpt-5.2-chat-latest"
"gpt-5.2-pro"
"gpt-5.2-pro-2025-12-11"
"gpt-5.1"
"gpt-5.1-2025-11-13"
"gpt-5.1-codex"
"gpt-5.1-mini"
"gpt-5.1-chat-latest"
"gpt-5"
"gpt-5-mini"
"gpt-5-nano"
"gpt-5-2025-08-07"
"gpt-5-mini-2025-08-07"
"gpt-5-nano-2025-08-07"
"gpt-5-chat-latest"
"gpt-4.1"
"gpt-4.1-mini"
"gpt-4.1-nano"
"gpt-4.1-2025-04-14"
"gpt-4.1-mini-2025-04-14"
"gpt-4.1-nano-2025-04-14"
"o4-mini"
"o4-mini-2025-04-16"
"o3"
"o3-2025-04-16"
"o3-mini"
"o3-mini-2025-01-31"
"o1"
"o1-2024-12-17"
"o1-preview"
"o1-preview-2024-09-12"
"o1-mini"
"o1-mini-2024-09-12"
"gpt-4o"
"gpt-4o-2024-11-20"
"gpt-4o-2024-08-06"
"gpt-4o-2024-05-13"
"gpt-4o-audio-preview"
"gpt-4o-audio-preview-2024-10-01"
"gpt-4o-audio-preview-2024-12-17"
"gpt-4o-audio-preview-2025-06-03"
"gpt-4o-mini-audio-preview"
"gpt-4o-mini-audio-preview-2024-12-17"
"gpt-4o-search-preview"
"gpt-4o-mini-search-preview"
"gpt-4o-search-preview-2025-03-11"
"gpt-4o-mini-search-preview-2025-03-11"
"chatgpt-4o-latest"
"codex-mini-latest"
"gpt-4o-mini"
"gpt-4o-mini-2024-07-18"
"gpt-4-turbo"
"gpt-4-turbo-2024-04-09"
"gpt-4-0125-preview"
"gpt-4-turbo-preview"
"gpt-4-1106-preview"
"gpt-4-vision-preview"
"gpt-4"
"gpt-4-0314"
"gpt-4-0613"
"gpt-4-32k"
"gpt-4-32k-0314"
"gpt-4-32k-0613"
"gpt-3.5-turbo"
"gpt-3.5-turbo-16k"
"gpt-3.5-turbo-0301"
"gpt-3.5-turbo-0613"
"gpt-3.5-turbo-1106"
"gpt-3.5-turbo-0125"
"gpt-3.5-turbo-16k-0613"
"o1-pro" | "o1-pro-2025-03-19" | "o3-pro" | 11 more
"o1-pro"
"o1-pro-2025-03-19"
"o3-pro"
"o3-pro-2025-06-10"
"o3-deep-research"
"o3-deep-research-2025-06-26"
"o4-mini-deep-research"
"o4-mini-deep-research-2025-06-26"
"computer-use-preview"
"computer-use-preview-2025-03-11"
"gpt-5-codex"
"gpt-5-pro"
"gpt-5-pro-2025-10-06"
"gpt-5.1-codex-max"
object: "response"

The object type of this resource - always set to response.

output: Array<ResponseOutputItem>

An array of content items generated by the model.

  • The length and order of items in the output array is dependent on the model's response.
  • Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
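
Where the SDK's output_text helper is not available, it can be approximated by concatenating the output_text parts of every assistant message (a sketch over the item shapes in this reference; `outputText` is a hypothetical helper):

```typescript
type OutputContent = { type: "output_text"; text: string } | { type: "refusal"; refusal: string };
type OutputItem =
  | { type: "message"; role: "assistant"; content: OutputContent[] }
  | { type: string }; // other item types (tool calls, reasoning, ...) carry no message text

// Concatenate all output_text parts across assistant messages, skipping
// refusals and non-message items.
function outputText(output: OutputItem[]): string {
  let text = "";
  for (const item of output) {
    if (item.type !== "message") continue;
    for (const part of (item as { content: OutputContent[] }).content) {
      if (part.type === "output_text") text += part.text;
    }
  }
  return text;
}
```
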
Accepts one of the following:
ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.
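
Annotations are another discriminated union; for example, the web sources behind an output_text part can be extracted by filtering for url_citation entries (`citedUrls` is a hypothetical helper):

```typescript
interface URLCitation { type: "url_citation"; url: string; title: string; start_index: number; end_index: number }
type Annotation = URLCitation | { type: "file_citation" | "container_file_citation" | "file_path" };

// Collect the URLs of all web resources cited by an output_text part.
function citedUrls(annotations: Annotation[]): string[] {
  return annotations
    .filter((a): a is URLCitation => a.type === "url_citation")
    .map((a) => a.url);
}
```
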

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>

token: string

bytes: Array<number>

logprob: number

top_logprobs: Array<TopLogprob>

token: string

bytes: Array<number>

logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to perform.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.
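
A minimal sketch of dispatching on the action union above, e.g. for logging before executing an action against a real environment. The ComputerAction type is a simplified local stand-in for the SDK's generated types.

```typescript
// Simplified stand-in for the computer action union documented above.
type ComputerAction =
  | { type: "click"; button: "left" | "right" | "wheel" | "back" | "forward"; x: number; y: number }
  | { type: "double_click"; x: number; y: number }
  | { type: "drag"; path: Array<{ x: number; y: number }> }
  | { type: "keypress"; keys: string[] }
  | { type: "move"; x: number; y: number }
  | { type: "screenshot" }
  | { type: "scroll"; scroll_x: number; scroll_y: number; x: number; y: number }
  | { type: "type"; text: string }
  | { type: "wait" };

// Render a human-readable summary of an action by switching on the
// `type` discriminant.
function describeAction(action: ComputerAction): string {
  switch (action.type) {
    case "click":
      return `${action.button} click at (${action.x}, ${action.y})`;
    case "drag":
      return `drag through ${action.path.length} points`;
    case "type":
      return `type ${action.text.length} characters`;
    case "scroll":
      return `scroll by (${action.scroll_x}, ${action.scroll_y}) at (${action.x}, ${action.y})`;
    default:
      return action.type;
  }
}
```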

ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItem { id, encrypted_content, type, created_by }

A compaction item generated by the v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by?: string

The identifier of the actor that created the item.
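
When manually managing context, reasoning and compaction items like the ones above should be echoed back in the next turn's input. A hedged sketch with simplified local stand-in types (the real SDK item types carry more fields):

```typescript
// Simplified stand-ins for context-carrying output items.
type ContextItem =
  | { type: "reasoning"; id: string; encrypted_content?: string | null }
  | { type: "compaction"; id: string; encrypted_content: string }
  | { type: "message"; id: string; text: string };

// Build the next turn's input: previous output items first (so reasoning
// and compaction items survive), then the new user message.
function nextTurnInput(previousOutput: ContextItem[], userText: string) {
  return [
    ...previousOutput,
    { role: "user" as const, content: userText },
  ];
}
```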

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
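
Because outputs mixes logs and images (and may be null), a consumer typically splits the two so each can be rendered appropriately. A small sketch with a local stand-in type:

```typescript
// Local stand-in for the code interpreter output union documented above.
type InterpreterOutput =
  | { type: "logs"; logs: string }
  | { type: "image"; url: string };

// Separate log text from image URLs; a null outputs array yields two
// empty lists.
function splitOutputs(outputs: InterpreterOutput[] | null) {
  const logs: string[] = [];
  const images: string[] = [];
  for (const output of outputs ?? []) {
    if (output.type === "logs") logs.push(output.logs);
    else images.push(output.url);
  }
  return { logs, images };
}
```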

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

ResponseFunctionShellToolCall { id, action, call_id, 3 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

The shell commands to run.

max_output_length: number | null

Optional maximum number of characters to return from each command.

timeout_ms: number | null

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseFunctionShellToolCallOutput { id, call_id, max_output_length, 4 more }

The output of a shell tool call that was emitted.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number | null

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: Array<Output>

An array of shell call output contents.

outcome: Timeout { type } | Exit { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by?: string

The identifier of the actor that created the item.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by?: string

The identifier of the actor that created the item.
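
The outcome union on each output chunk is either a timeout or an exit with a code. A minimal sketch of checking it, using a local stand-in type:

```typescript
// Local stand-in for the outcome union on a shell_call_output chunk.
type ShellOutcome =
  | { type: "timeout" }
  | { type: "exit"; exit_code: number };

// A chunk succeeded only if the commands exited normally with code 0;
// a timeout outcome is always a failure.
function succeeded(outcome: ShellOutcome): boolean {
  return outcome.type === "exit" && outcome.exit_code === 0;
}
```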

ResponseApplyPatchToolCall { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by?: string

The ID of the entity that created this tool call.
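
A sketch of dispatching on the operation union above against an in-memory file map. This is a stand-in for a real workspace: real diff fields contain unified diffs that must be parsed and applied, whereas this sketch naively treats diff as whole-file content.

```typescript
// Local stand-in for the apply_patch operation union.
type PatchOperation =
  | { type: "create_file"; path: string; diff: string }
  | { type: "delete_file"; path: string }
  | { type: "update_file"; path: string; diff: string };

// Apply one operation to an in-memory file map, switching on the
// `type` discriminant.
function applyOperation(files: Map<string, string>, op: PatchOperation): void {
  switch (op.type) {
    case "create_file":
    case "update_file":
      // Naive: store the diff text as the file content.
      files.set(op.path, op.diff);
      break;
    case "delete_file":
      files.delete(op.path);
      break;
  }
}
```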

ResponseApplyPatchToolCallOutput { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by?: string

The ID of the entity that created this tool call output.

output?: string | null

Optional textual output returned by the apply patch tool.

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

parallel_tool_calls: boolean

Whether to allow the model to run tool calls in parallel.

temperature: number | null

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

minimum: 0
maximum: 2
tool_choice: ToolChoiceOptions | ToolChoiceAllowed { mode, tools, type } | ToolChoiceTypes { type } | 5 more

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.

Accepts one of the following:
ToolChoiceOptions = "none" | "auto" | "required"

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

Accepts one of the following:
"none"
"auto"
"required"
ToolChoiceAllowed { mode, tools, type }

Constrains the tools available to the model to a pre-defined set.

mode: "auto" | "required"

Constrains the tools available to the model to a pre-defined set.

auto allows the model to pick from among the allowed tools and generate a message.

required requires the model to call one or more of the allowed tools.

Accepts one of the following:
"auto"
"required"
tools: Array<Record<string, unknown>>

A list of tool definitions that the model should be allowed to call.

For the Responses API, the list of tool definitions might look like:

[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]
type: "allowed_tools"

Allowed tool configuration type. Always allowed_tools.

ToolChoiceTypes { type }

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.

type: "file_search" | "web_search_preview" | "computer_use_preview" | 3 more

The type of hosted tool the model should use. Learn more about built-in tools.

Allowed values are:

  • file_search
  • web_search_preview
  • computer_use_preview
  • code_interpreter
  • image_generation
Accepts one of the following:
"file_search"
"web_search_preview"
"computer_use_preview"
"web_search_preview_2025_03_11"
"image_generation"
"code_interpreter"
ToolChoiceFunction { name, type }

Use this option to force the model to call a specific function.

name: string

The name of the function to call.

type: "function"

For function calling, the type is always function.

ToolChoiceMcp { server_label, type, name }

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: string

The label of the MCP server to use.

type: "mcp"

For MCP tools, the type is always mcp.

name?: string | null

The name of the tool to call on the server.

ToolChoiceCustom { name, type }

Use this option to force the model to call a specific custom tool.

name: string

The name of the custom tool to call.

type: "custom"

For custom tool calling, the type is always custom.

ToolChoiceApplyPatch { type }

Forces the model to call the apply_patch tool when executing a tool call.

type: "apply_patch"

The tool to call. Always apply_patch.

ToolChoiceShell { type }

Forces the model to call the shell tool when a tool call is required.

type: "shell"

The tool to call. Always shell.
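
A few representative tool_choice values, sketched as plain objects (the function name is illustrative):

```typescript
// Option string: let the model decide whether to call a tool.
const autoChoice = "auto" as const;

// Force a specific function by name.
const forcedFunction = { type: "function", name: "get_weather" } as const;

// Constrain the model to a pre-defined subset of tools and require
// that at least one of them is called.
const allowedOnly = {
  type: "allowed_tools",
  mode: "required",
  tools: [
    { type: "function", name: "get_weather" },
    { type: "image_generation" },
  ],
} as const;
```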

tools: Array<Tool>

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

We support the following categories of tools:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
  • MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Accepts one of the following:
FunctionTool { name, parameters, strict, 2 more }

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: string

The name of the function to call.

parameters: Record<string, unknown> | null

A JSON schema object describing the parameters of the function.

strict: boolean | null

Whether to enforce strict parameter validation. Default true.

type: "function"

The type of the function tool. Always function.

description?: string | null

A description of the function. Used by the model to determine whether or not to call the function.
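
A function tool definition matching the FunctionTool shape above; the name and JSON schema are illustrative. With strict set to true, the schema should disallow additional properties and mark every property as required.

```typescript
// Illustrative function tool: a weather lookup with one required
// string parameter.
const getWeatherTool = {
  type: "function" as const,
  name: "get_weather",
  description: "Get the current weather for a location.",
  strict: true,
  parameters: {
    type: "object",
    properties: {
      location: { type: "string", description: "City name, e.g. Paris" },
    },
    required: ["location"],
    additionalProperties: false,
  },
};
```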

FileSearchTool { type, vector_store_ids, filters, 2 more }

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: "file_search"

The type of the file search tool. Always file_search.

vector_store_ids: Array<string>

The IDs of the vector stores to search.

filters?: ComparisonFilter { key, type, value } | CompoundFilter { filters, type } | null

A filter to apply.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 3 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
CompoundFilter { filters, type }

Combine multiple filters using and or or.

filters: Array<ComparisonFilter { key, type, value } | unknown>

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 3 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
unknown
type: "and" | "or"

Type of operation: and or or.

Accepts one of the following:
"and"
"or"
max_num_results?: number

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options?: RankingOptions { hybrid_search, ranker, score_threshold }

Ranking options for search.

ranker?: "auto" | "default-2024-11-15"

The ranker to use for the file search.

Accepts one of the following:
"auto"
"default-2024-11-15"
score_threshold?: number

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.
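
A file_search tool combining a comparison filter and a compound filter, sketched as a plain object. The vector store ID and attribute keys are placeholders.

```typescript
// Illustrative file_search tool: match documents whose `category`
// attribute equals "report" AND whose `year` is >= 2023.
const fileSearchTool = {
  type: "file_search" as const,
  vector_store_ids: ["vs_example_id"],
  max_num_results: 10,
  filters: {
    type: "and" as const,
    filters: [
      { key: "category", type: "eq" as const, value: "report" },
      { key: "year", type: "gte" as const, value: 2023 },
    ],
  },
};
```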

ComputerTool { display_height, display_width, environment, type }

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: number

The height of the computer display.

display_width: number

The width of the computer display.

environment: "windows" | "mac" | "linux" | 2 more

The type of computer environment to control.

Accepts one of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: "computer_use_preview"

The type of the computer use tool. Always computer_use_preview.

WebSearchTool { type, filters, search_context_size, user_location }

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: "web_search" | "web_search_2025_08_26"

The type of the web search tool. One of web_search or web_search_2025_08_26.

Accepts one of the following:
"web_search"
"web_search_2025_08_26"
filters?: Filters | null

Filters for the search.

allowed_domains?: Array<string> | null

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The approximate location of the user.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

type?: "approximate"

The type of location approximation. Always approximate.
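
A web_search tool restricted to one domain with an approximate user location, sketched as a plain object; the domain and location values are illustrative.

```typescript
// Illustrative web_search tool: only pubmed.ncbi.nlm.nih.gov (and its
// subdomains) may be searched, with a San Francisco user location.
const webSearchTool = {
  type: "web_search" as const,
  search_context_size: "medium" as const,
  filters: { allowed_domains: ["pubmed.ncbi.nlm.nih.gov"] },
  user_location: {
    type: "approximate" as const,
    city: "San Francisco",
    country: "US",
    timezone: "America/Los_Angeles",
  },
};
```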

Mcp { server_label, type, allowed_tools, 6 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

Accepts one of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

Accepts one of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.
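
An MCP tool pointing at a custom server, sketched as a plain object: only read-only tools are exposed, and read-only tools never require approval. The label and URL are placeholders.

```typescript
// Illustrative MCP tool configuration. Tools annotated read-only on the
// server match both the allowed_tools filter and the `never` approval
// filter.
const mcpTool = {
  type: "mcp" as const,
  server_label: "deepwiki",
  server_url: "https://example.com/mcp",
  allowed_tools: { read_only: true },
  require_approval: {
    never: { read_only: true },
  },
};
```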

CodeInterpreter { container, type }

A tool that runs Python code to help generate a response to a prompt.

container: string | CodeInterpreterToolAuto { type, file_ids, memory_limit }

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

Accepts one of the following:
string
CodeInterpreterToolAuto { type, file_ids, memory_limit }

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: "auto"

Always auto.

file_ids?: Array<string>

An optional list of uploaded files to make available to your code.

memory_limit?: "1g" | "4g" | "16g" | "64g" | null

The memory limit for the code interpreter container.

Accepts one of the following:
"1g"
"4g"
"16g"
"64g"
type: "code_interpreter"

The type of the code interpreter tool. Always code_interpreter.
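
A code_interpreter tool using the auto container configuration, sketched as a plain object; the file IDs are placeholders.

```typescript
// Illustrative code_interpreter tool: an auto container with two
// uploaded files available to the code and a 4 GB memory limit.
const codeInterpreterTool = {
  type: "code_interpreter" as const,
  container: {
    type: "auto" as const,
    file_ids: ["file_abc", "file_def"],
    memory_limit: "4g" as const,
  },
};
```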

ImageGeneration { type, action, background, 9 more }

A tool that generates images using the GPT image models.

type: "image_generation"

The type of the image generation tool. Always image_generation.

action?: "generate" | "edit" | "auto"

Whether to generate a new image or edit an existing image. Default: auto.

Accepts one of the following:
"generate"
"edit"
"auto"
background?: "transparent" | "opaque" | "auto"

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

Accepts one of the following:
"transparent"
"opaque"
"auto"
input_fidelity?: "high" | "low" | null

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.

Accepts one of the following:
"high"
"low"
input_image_mask?: InputImageMask { file_id, image_url }

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id?: string

File ID for the mask image.

image_url?: string

Base64-encoded mask image.

model?: (string & {}) | "gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
(string & {})
"gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation?: "auto" | "low"

Moderation level for the generated image. Default: auto.

Accepts one of the following:
"auto"
"low"
output_compression?: number

Compression level for the output image. Default: 100.

minimum: 0
maximum: 100
output_format?: "png" | "webp" | "jpeg"

The output format of the generated image. One of png, webp, or jpeg. Default: png.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_images?: number

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum: 0
maximum: 3
quality?: "low" | "medium" | "high" | "auto"

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size?: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
LocalShell { type }

A tool that allows the model to execute shell commands in a local environment.

type: "local_shell"

The type of the local shell tool. Always local_shell.

FunctionShellTool { type }

A tool that allows the model to execute shell commands.

type: "shell"

The type of the shell tool. Always shell.

CustomTool { name, type, description, format }

A custom tool that processes input using a specified format. Learn more about custom tools.

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

description?: string

Optional description of the custom tool, used to provide more context.

format?: Text { type } | Grammar { definition, syntax, type }

The input format for the custom tool. Default is unconstrained text.

Accepts one of the following:
Text { type }

Unconstrained free-form text.

type: "text"

Unconstrained text format. Always text.

Grammar { definition, syntax, type }

A grammar defined by the user.

definition: string

The grammar definition.

syntax: "lark" | "regex"

The syntax of the grammar definition. One of lark or regex.

Accepts one of the following:
"lark"
"regex"
type: "grammar"

Grammar format. Always grammar.
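
A custom tool whose input is constrained by a regex grammar, sketched as a plain object; the tool name and pattern are illustrative.

```typescript
// Illustrative custom tool: the model's input must be a 6-digit hex
// color code, enforced by a regex grammar.
const customTool = {
  type: "custom" as const,
  name: "set_color",
  description: "Set a color given as a hex code.",
  format: {
    type: "grammar" as const,
    syntax: "regex" as const,
    definition: "^#[0-9a-fA-F]{6}$",
  },
};
```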

WebSearchPreviewTool { type, search_context_size, user_location }

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: "web_search_preview" | "web_search_preview_2025_03_11"

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

Accepts one of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The user's location.

type: "approximate"

The type of location approximation. Always approximate.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

ApplyPatchTool { type }

Allows the assistant to create, delete, or update files using unified diffs.

type: "apply_patch"

The type of the tool. Always apply_patch.

top_p: number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

minimum0
maximum1
background?: boolean | null

Whether to run the model response in the background. Learn more.

completed_at?: number | null

Unix timestamp (in seconds) of when this Response was completed. Only present when the status is completed.

conversation?: Conversation | null

The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation.

id: string

The unique ID of the conversation that this response was associated with.

max_output_tokens?: number | null

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls?: number | null

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

previous_response_id?: string | null

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.
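
As a sketch, a follow-up turn only needs to carry the prior response's ID forward; the server reconstructs the conversation context from it. The model name and ID below are placeholders:

```typescript
// Turn 2 of a multi-turn conversation: only the new input is sent,
// context comes from the referenced previous response.
const followUp = {
  model: "gpt-5.1",
  previous_response_id: "resp_abc123", // placeholder ID returned by turn 1
  input: "And how does that compare to last year?",
};
```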

prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

id: string

The unique identifier of the prompt template to use.

variables?: Record<string, string | ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } > | null

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

Accepts one of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

version?: string | null

Optional version of the prompt template.
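
A prompt reference mixing string and image variables could be sketched as follows. The template ID, version, and variable names are placeholders:

```typescript
const prompt = {
  id: "pmpt_abc123", // placeholder prompt template ID
  version: "2",      // pin a specific template version
  variables: {
    customer_name: "Ada", // plain string substitution
    screenshot: {         // image substitution via a Response input type
      type: "input_image" as const,
      detail: "auto" as const,
      file_id: null,
      image_url: "https://example.com/screenshot.png",
    },
  },
};
```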

prompt_cache_key?: string

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.

prompt_cache_retention?: "in-memory" | "24h" | null

The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.

Accepts one of the following:
"in-memory"
"24h"
reasoning?: Reasoning { effort, generate_summary, summary } | null

gpt-5 and o-series models only

Configuration options for reasoning models.

effort?: ReasoningEffort | null

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
Deprecatedgenerate_summary?: "auto" | "concise" | "detailed" | null

Deprecated: use summary instead.

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

Accepts one of the following:
"auto"
"concise"
"detailed"
summary?: "auto" | "concise" | "detailed" | null

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

Accepts one of the following:
"auto"
"concise"
"detailed"
safety_identifier?: string

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.

service_tier?: "auto" | "default" | "flex" | 2 more | null

Specifies the processing type used for serving the request.

  • If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
  • If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
  • If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
  • When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.

Accepts one of the following:
"auto"
"default"
"flex"
"scale"
"priority"

status?: "completed" | "failed" | "in_progress" | 3 more

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.

Accepts one of the following:
"completed"
"failed"
"in_progress"
"cancelled"
"queued"
"incomplete"
text?: ResponseTextConfig { format, verbosity }

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more.

format?: ResponseFormatText { type } | ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more } | ResponseFormatJSONObject { type }

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
ResponseFormatText { type }

Default response format. Used to generate text responses.

type: "text"

The type of response format being defined. Always text.

ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more }

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

name: string

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Record<string, unknown>

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

type: "json_schema"

The type of response format being defined. Always json_schema.

description?: string

A description of what the response format is for, used by the model to determine how to respond in the format.

strict?: boolean | null

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.
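
For example, a strict json_schema response format could be sketched like this (the schema itself is illustrative):

```typescript
const textFormat = {
  type: "json_schema" as const,
  name: "weather_report", // a-z, A-Z, 0-9, underscores, dashes; max 64 chars
  description: "A structured weather summary.",
  strict: true,           // always follow the exact schema below
  schema: {
    type: "object",
    properties: {
      city: { type: "string" },
      temp_c: { type: "number" },
    },
    required: ["city", "temp_c"],
    // Strict mode supports only a subset of JSON Schema; disallowing
    // additional properties is part of that subset's requirements.
    additionalProperties: false,
  },
};
```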

ResponseFormatJSONObject { type }

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: "json_object"

The type of response format being defined. Always json_object.

verbosity?: "low" | "medium" | "high" | null

Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.

Accepts one of the following:
"low"
"medium"
"high"
top_logprobs?: number | null

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

minimum0
maximum20
truncation?: "auto" | "disabled" | null

The truncation strategy to use for the model response.

  • auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
  • disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Accepts one of the following:
"auto"
"disabled"
usage?: ResponseUsage { input_tokens, input_tokens_details, output_tokens, 2 more }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

total_tokens: number

The total number of tokens used.
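
The usage fields are related by simple arithmetic: cached tokens are a subset of input_tokens, reasoning tokens a subset of output_tokens, and total_tokens is the sum of input and output. An illustrative object (the numbers are invented; the invariants are the point):

```typescript
const usage = {
  input_tokens: 1200,
  input_tokens_details: { cached_tokens: 1024 },    // subset of input_tokens
  output_tokens: 300,
  output_tokens_details: { reasoning_tokens: 180 }, // subset of output_tokens
  total_tokens: 1500, // input_tokens + output_tokens
};

// Derived quantities a billing dashboard might compute:
const uncachedInput = usage.input_tokens - usage.input_tokens_details.cached_tokens;
const visibleOutput = usage.output_tokens - usage.output_tokens_details.reasoning_tokens;
```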

Deprecateduser?: string

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.

sequence_number: number

The sequence number of this event.

type: "response.in_progress"

The type of the event. Always response.in_progress.

ResponseFailedEvent { response, sequence_number, type }

An event that is emitted when a response fails.

response: Response { id, created_at, error, 29 more }

The response that failed.

id: string

Unique identifier for this Response.

created_at: number

Unix timestamp (in seconds) of when this Response was created.

error: ResponseError { code, message } | null

An error object returned when the model fails to generate a Response.

code: "server_error" | "rate_limit_exceeded" | "invalid_prompt" | 15 more

The error code for the response.

Accepts one of the following:
"server_error"
"rate_limit_exceeded"
"invalid_prompt"
"vector_store_timeout"
"invalid_image"
"invalid_image_format"
"invalid_base64_image"
"invalid_image_url"
"image_too_large"
"image_too_small"
"image_parse_error"
"image_content_policy_violation"
"invalid_image_mode"
"image_file_too_large"
"unsupported_image_media_type"
"empty_image_file"
"failed_to_download_image"
"image_file_not_found"
message: string

A human-readable description of the error.

incomplete_details: IncompleteDetails | null

Details about why the response is incomplete.

reason?: "max_output_tokens" | "content_filter"

The reason why the response is incomplete.

Accepts one of the following:
"max_output_tokens"
"content_filter"
instructions: string | Array<ResponseInputItem> | null

A system (or developer) message inserted into the model's context.

When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.

Accepts one of the following:
string
EasyInputMessage { content, role, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: string | ResponseInputMessageContentList

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
string
ResponseInputMessageContentList = Array<ResponseInputContent>

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "assistant" | "system" | "developer"

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type?: "message"

The type of the message input. Always message.
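
An EasyInputMessage mixing text and image content could be sketched as follows (the image URL is a placeholder):

```typescript
const userMessage = {
  type: "message" as const,
  role: "user" as const,
  content: [
    { type: "input_text" as const, text: "What is in this image?" },
    {
      type: "input_image" as const,
      detail: "auto" as const,
      file_id: null,
      image_url: "https://example.com/photo.png", // or a base64 data URL
    },
  ],
};
```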

Message { content, role, status, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role.

content: ResponseInputMessageContentList

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "system" | "developer"

The role of the message input. One of user, system, or developer.

Accepts one of the following:
"user"
"system"
"developer"
status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type?: "message"

The type of the message input. Always set to message.

ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>

token: string

bytes: Array<number>

logprob: number

top_logprobs: Array<TopLogprob>

token: string

bytes: Array<number>

logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

formatfloat
text?: string

The text that was retrieved from the file.

ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

A click action.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.
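
The action variants above form a discriminated union on type, so a client executing computer calls typically switches on it. A partial sketch covering a few variants (the dispatcher is illustrative):

```typescript
type ComputerAction =
  | { type: "click"; button: "left" | "right" | "wheel" | "back" | "forward"; x: number; y: number }
  | { type: "double_click"; x: number; y: number }
  | { type: "keypress"; keys: string[] }
  | { type: "scroll"; x: number; y: number; scroll_x: number; scroll_y: number }
  | { type: "screenshot" }
  | { type: "wait" };

// Render each action as a human-readable log line.
function describeAction(action: ComputerAction): string {
  switch (action.type) {
    case "click":
      return `${action.button} click at (${action.x}, ${action.y})`;
    case "double_click":
      return `double click at (${action.x}, ${action.y})`;
    case "keypress":
      return `press ${action.keys.join("+")}`;
    case "scroll":
      return `scroll by (${action.scroll_x}, ${action.scroll_y}) at (${action.x}, ${action.y})`;
    case "screenshot":
      return "take screenshot";
    case "wait":
      return "wait";
  }
}
```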

ComputerCallOutput { call_id, output, type, 3 more }

The output of a computer tool call.

call_id: string

The ID of the computer tool call that produced the output.

maxLength64
minLength1
output: ResponseComputerToolCallOutputScreenshot { type, file_id, image_url }

A computer screenshot image used with the computer use tool.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

file_id?: string

The identifier of an uploaded file that contains the screenshot.

image_url?: string

The URL of the screenshot image.

type: "computer_call_output"

The type of the computer tool call output. Always computer_call_output.

id?: string | null

The ID of the computer tool call output.

acknowledged_safety_checks?: Array<AcknowledgedSafetyCheck> | null

The safety checks reported by the API that have been acknowledged by the developer.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
FunctionCallOutput { call_id, output, type, 2 more }

The output of a function tool call.

call_id: string

The unique ID of the function tool call generated by the model.

maxLength64
minLength1
output: string | ResponseFunctionCallOutputItemList

Text, image, or file output of the function tool call.

Accepts one of the following:
string
ResponseFunctionCallOutputItemList = Array<ResponseFunctionCallOutputItem>

An array of content outputs (text, image, file) for the function tool call.

Accepts one of the following:
ResponseInputTextContent { text, type }

A text input to the model.

text: string

The text input to the model.

maxLength10485760
type: "input_text"

The type of the input item. Always input_text.

ResponseInputImageContent { type, detail, file_id, image_url }

An image input to the model. Learn about image inputs.

type: "input_image"

The type of the input item. Always input_image.

detail?: "low" | "high" | "auto" | null

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

maxLength20971520
ResponseInputFileContent { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string | null

The base64-encoded data of the file to be sent to the model.

maxLength33554432
file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string | null

The URL of the file to be sent to the model.

filename?: string | null

The name of the file to be sent to the model.

type: "function_call_output"

The type of the function tool call output. Always function_call_output.

id?: string | null

The unique ID of the function tool call output. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItemParam { encrypted_content, type, id }

A compaction item generated by the v1/responses/compact API.

encrypted_content: string

The encrypted content of the compaction summary.

maxLength10485760
type: "compaction"

The type of the item. Always compaction.

id?: string | null

The ID of the compaction item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
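Because `outputs` is either null or a mix of logs and image items, consumers typically split it by the `type` discriminant. A minimal sketch, using only the fields documented above (the helper name is illustrative, not part of the SDK):

```typescript
// Split a code_interpreter_call's `outputs` into log text and image URLs,
// handling the documented null case (no outputs available).
type InterpreterOutput =
  | { type: "logs"; logs: string }
  | { type: "image"; url: string };

function splitOutputs(outputs: InterpreterOutput[] | null) {
  const logs: string[] = [];
  const images: string[] = [];
  for (const out of outputs ?? []) {
    if (out.type === "logs") logs.push(out.logs); // captured stdout/stderr text
    else images.push(out.url); // URL of a rendered image output
  }
  return { logs, images };
}
```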

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

LocalShellCallOutput { id, output, type, status }

The output of a local shell tool call.

id: string

The unique ID of the local shell tool call generated by the model.

output: string

A JSON string of the output of the local shell tool call.

type: "local_shell_call_output"

The type of the local shell tool call output. Always local_shell_call_output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCall { action, call_id, type, 2 more }

A tool representing a request to execute one or more shell commands.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

Ordered shell commands for the execution environment to run.

max_output_length?: number | null

Maximum number of UTF-8 characters to capture from combined stdout and stderr output.

timeout_ms?: number | null

Maximum wall-clock time in milliseconds to allow the shell commands to run.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
type: "shell_call"

The type of the item. Always shell_call.

id?: string | null

The unique ID of the shell tool call. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCallOutput { call_id, output, type, 3 more }

The streamed output items emitted by a shell tool call.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
output: Array<ResponseFunctionShellCallOutputContent { outcome, stderr, stdout } >

Captured chunks of stdout and stderr output, along with their associated outcomes.

outcome: Timeout { type } | Exit { exit_code, type }

The exit or timeout outcome associated with this shell call.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

The exit code returned by the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

Captured stderr output for the shell call.

maxLength: 10485760
stdout: string

Captured stdout output for the shell call.

maxLength: 10485760
type: "shell_call_output"

The type of the item. Always shell_call_output.

id?: string | null

The unique ID of the shell tool call output. Populated when this item is returned via API.

max_output_length?: number | null

The maximum number of UTF-8 characters captured for this shell call's combined output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call output.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ApplyPatchCall { call_id, operation, status, 2 more }

A tool call representing a request to create, delete, or update files using diff patches.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

The specific create, delete, or update instruction for the apply_patch tool call.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction for creating a new file via the apply_patch tool.

diff: string

Unified diff content to apply when creating the file.

maxLength: 10485760
path: string

Path of the file to create relative to the workspace root.

minLength: 1
type: "create_file"

The operation type. Always create_file.

DeleteFile { path, type }

Instruction for deleting an existing file via the apply_patch tool.

path: string

Path of the file to delete relative to the workspace root.

minLength: 1
type: "delete_file"

The operation type. Always delete_file.

UpdateFile { diff, path, type }

Instruction for updating an existing file via the apply_patch tool.

diff: string

Unified diff content to apply to the existing file.

maxLength: 10485760
path: string

Path of the file to update relative to the workspace root.

minLength: 1
type: "update_file"

The operation type. Always update_file.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

id?: string | null

The unique ID of the apply patch tool call. Populated when this item is returned via API.
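The `operation` union dispatches on its `type` discriminant. A sketch that routes the three documented operation shapes; actually applying a unified diff is left to a real patch implementation:

```typescript
// Route an apply_patch_call operation by its discriminant.
type PatchOperation =
  | { type: "create_file"; path: string; diff: string }
  | { type: "delete_file"; path: string }
  | { type: "update_file"; path: string; diff: string };

function summarizeOperation(op: PatchOperation): string {
  switch (op.type) {
    case "create_file":
      return `create ${op.path} (${op.diff.length}-char diff)`;
    case "delete_file":
      return `delete ${op.path}`; // delete_file carries no diff
    case "update_file":
      return `update ${op.path} (${op.diff.length}-char diff)`;
  }
}
```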

ApplyPatchCallOutput { call_id, status, type, 2 more }

The streamed output emitted by an apply patch tool call.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

id?: string | null

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

output?: string | null

Optional human-readable log text from the apply patch tool (e.g., patch results or errors).

maxLength: 10485760
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.
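Since `error` is set when listing fails, it should be checked before trusting `tools`. A sketch trimmed to the fields documented above:

```typescript
// Read an mcp_list_tools item, treating a populated `error` as a failed listing.
type McpListTools = {
  id: string;
  type: "mcp_list_tools";
  server_label: string;
  tools: { name: string; input_schema: unknown; description?: string | null }[];
  error?: string | null;
};

function toolNames(item: McpListTools): string[] {
  if (item.error) return []; // the server could not list tools
  return item.tools.map((t) => t.name);
}
```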

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

McpApprovalResponse { approval_request_id, approve, type, 2 more }

A response to an MCP approval request.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

id?: string | null

The unique ID of the approval response.

reason?: string | null

Optional reason for the decision.
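An `mcp_approval_response` answers a request by echoing its `id` as `approval_request_id`. A sketch of that pairing; the allowlist policy here is an illustrative assumption, not part of the API:

```typescript
// Answer an mcp_approval_request with an mcp_approval_response item.
type McpApprovalRequest = {
  id: string;
  type: "mcp_approval_request";
  server_label: string;
  name: string;
  arguments: string; // JSON string of the tool arguments
};

function answer(req: McpApprovalRequest, allowedTools: Set<string>) {
  const approve = allowedTools.has(req.name); // hypothetical allowlist policy
  return {
    type: "mcp_approval_response" as const,
    approval_request_id: req.id, // ties the answer back to the request
    approve,
    reason: approve ? null : `tool ${req.name} is not allowlisted`,
  };
}
```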

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
ResponseCustomToolCallOutput { call_id, output, type, id }

The output of a custom tool call from your code, being sent back to the model.

call_id: string

The call ID, used to map this custom tool call output to a custom tool call.

output: string | Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >

The output from the custom tool call generated by your code. Can be a string or a list of output content.

Accepts one of the following:
string
Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

type: "custom_tool_call_output"

The type of the custom tool call output. Always custom_tool_call_output.

id?: string

The unique ID of the custom tool call output in the OpenAI platform.
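The `output` union above is either a plain string or a list of input content parts. A sketch that normalizes it to text, keeping only the documented `input_text` parts (images and files carry no inline text):

```typescript
// Normalize a custom_tool_call_output's string-or-parts `output` to text.
type OutputPart =
  | { type: "input_text"; text: string }
  | { type: "input_image"; detail: "low" | "high" | "auto"; image_url?: string | null }
  | { type: "input_file"; filename?: string };

function outputAsText(output: string | OutputPart[]): string {
  if (typeof output === "string") return output;
  return output
    .filter((p): p is Extract<OutputPart, { type: "input_text" }> => p.type === "input_text")
    .map((p) => p.text)
    .join("\n");
}
```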

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

ItemReference { id, type }

An internal identifier for an item to reference.

id: string

The ID of the item to reference.

type?: "item_reference" | null

The type of item to reference. Always item_reference.

metadata: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model: ResponsesModel

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

Accepts one of the following:
(string & {})
ChatModel = "gpt-5.2" | "gpt-5.2-2025-12-11" | "gpt-5.2-chat-latest" | 69 more
Accepts one of the following:
"gpt-5.2"
"gpt-5.2-2025-12-11"
"gpt-5.2-chat-latest"
"gpt-5.2-pro"
"gpt-5.2-pro-2025-12-11"
"gpt-5.1"
"gpt-5.1-2025-11-13"
"gpt-5.1-codex"
"gpt-5.1-mini"
"gpt-5.1-chat-latest"
"gpt-5"
"gpt-5-mini"
"gpt-5-nano"
"gpt-5-2025-08-07"
"gpt-5-mini-2025-08-07"
"gpt-5-nano-2025-08-07"
"gpt-5-chat-latest"
"gpt-4.1"
"gpt-4.1-mini"
"gpt-4.1-nano"
"gpt-4.1-2025-04-14"
"gpt-4.1-mini-2025-04-14"
"gpt-4.1-nano-2025-04-14"
"o4-mini"
"o4-mini-2025-04-16"
"o3"
"o3-2025-04-16"
"o3-mini"
"o3-mini-2025-01-31"
"o1"
"o1-2024-12-17"
"o1-preview"
"o1-preview-2024-09-12"
"o1-mini"
"o1-mini-2024-09-12"
"gpt-4o"
"gpt-4o-2024-11-20"
"gpt-4o-2024-08-06"
"gpt-4o-2024-05-13"
"gpt-4o-audio-preview"
"gpt-4o-audio-preview-2024-10-01"
"gpt-4o-audio-preview-2024-12-17"
"gpt-4o-audio-preview-2025-06-03"
"gpt-4o-mini-audio-preview"
"gpt-4o-mini-audio-preview-2024-12-17"
"gpt-4o-search-preview"
"gpt-4o-mini-search-preview"
"gpt-4o-search-preview-2025-03-11"
"gpt-4o-mini-search-preview-2025-03-11"
"chatgpt-4o-latest"
"codex-mini-latest"
"gpt-4o-mini"
"gpt-4o-mini-2024-07-18"
"gpt-4-turbo"
"gpt-4-turbo-2024-04-09"
"gpt-4-0125-preview"
"gpt-4-turbo-preview"
"gpt-4-1106-preview"
"gpt-4-vision-preview"
"gpt-4"
"gpt-4-0314"
"gpt-4-0613"
"gpt-4-32k"
"gpt-4-32k-0314"
"gpt-4-32k-0613"
"gpt-3.5-turbo"
"gpt-3.5-turbo-16k"
"gpt-3.5-turbo-0301"
"gpt-3.5-turbo-0613"
"gpt-3.5-turbo-1106"
"gpt-3.5-turbo-0125"
"gpt-3.5-turbo-16k-0613"
"o1-pro" | "o1-pro-2025-03-19" | "o3-pro" | 11 more
"o1-pro"
"o1-pro-2025-03-19"
"o3-pro"
"o3-pro-2025-06-10"
"o3-deep-research"
"o3-deep-research-2025-06-26"
"o4-mini-deep-research"
"o4-mini-deep-research-2025-06-26"
"computer-use-preview"
"computer-use-preview-2025-03-11"
"gpt-5-codex"
"gpt-5-pro"
"gpt-5-pro-2025-10-06"
"gpt-5.1-codex-max"
object: "response"

The object type of this resource - always set to response.

output: Array<ResponseOutputItem>

An array of content items generated by the model.

  • The length and order of items in the output array is dependent on the model's response.
  • Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
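Where the SDK does not expose `output_text`, the same aggregation can be sketched by hand: walk every item, keep only assistant messages, and concatenate their `output_text` parts. The item shapes below are trimmed to the fields documented in this reference:

```typescript
// Collect assistant text from `output` without assuming item order.
type ContentPart =
  | { type: "output_text"; text: string }
  | { type: "refusal"; refusal: string };

type OutputItem =
  | { type: "message"; role: "assistant"; content: ContentPart[] }
  | { type: "reasoning" }
  | { type: "function_call" };

function outputText(output: OutputItem[]): string {
  let text = "";
  for (const item of output) {
    if (item.type !== "message") continue; // skip reasoning and tool-call items
    for (const part of item.content) {
      if (part.type === "output_text") text += part.text; // refusals carry no output text
    }
  }
  return text;
}
```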
Accepts one of the following:
ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob { token, bytes, logprob, top_logprobs }>

token: string
bytes: Array<number>
logprob: number
top_logprobs: Array<TopLogprob { token, bytes, logprob }>
token: string
bytes: Array<number>
logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.
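The annotations on an `output_text` part form another discriminated union, so extracting one kind (here, URL citations) is a single narrowing pass. A sketch over the documented annotation shapes:

```typescript
// Extract the cited URLs from an output_text part's annotations.
type Annotation =
  | { type: "file_citation"; file_id: string; filename: string; index: number }
  | { type: "url_citation"; url: string; title: string; start_index: number; end_index: number }
  | { type: "container_file_citation"; container_id: string; file_id: string }
  | { type: "file_path"; file_id: string; index: number };

function citedUrls(annotations: Annotation[]): string[] {
  // flatMap keeps the url for url_citation entries and drops the rest.
  return annotations.flatMap((a) => (a.type === "url_citation" ? [a.url] : []));
}
```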

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to perform, such as a click, drag, or keypress.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.
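The `action` union narrows the same way as the others; a client typically switches on `type` and forwards each variant to its automation layer. A sketch covering a representative subset of the documented variants (the remaining ones extend the same switch):

```typescript
// Dispatch a subset of computer-use actions by their `type` discriminant.
type ComputerAction =
  | { type: "click"; button: "left" | "right" | "wheel" | "back" | "forward"; x: number; y: number }
  | { type: "scroll"; x: number; y: number; scroll_x: number; scroll_y: number }
  | { type: "type"; text: string }
  | { type: "wait" }
  | { type: "screenshot" };

function describeAction(action: ComputerAction): string {
  switch (action.type) {
    case "click":
      return `${action.button}-click at (${action.x}, ${action.y})`;
    case "scroll":
      return `scroll by (${action.scroll_x}, ${action.scroll_y})`;
    case "type":
      return `type ${JSON.stringify(action.text)}`;
    case "wait":
      return "wait";
    case "screenshot":
      return "screenshot";
  }
}
```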

ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItem { id, encrypted_content, type, created_by }

A compaction item generated by the v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by?: string

The identifier of the actor that created the item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.
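Since `result` is a base64 string that is null until generation completes, decoding it is a two-step check-then-decode. A sketch using the standard `atob` global (available in modern Node and browsers):

```typescript
// Decode an image_generation_call's base64 `result` into raw bytes.
type ImageGenerationCall = {
  id: string;
  type: "image_generation_call";
  status: "in_progress" | "completed" | "generating" | "failed";
  result: string | null;
};

function decodeImage(call: ImageGenerationCall): Uint8Array | null {
  if (call.status !== "completed" || call.result === null) return null;
  const binary = atob(call.result); // base64 -> binary string
  return Uint8Array.from(binary, (c) => c.charCodeAt(0));
}
```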

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

ResponseFunctionShellToolCall { id, action, call_id, 3 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>
max_output_length: number | null

Optional maximum number of characters to return from each command.

timeout_ms: number | null

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseFunctionShellToolCallOutput { id, call_id, max_output_length, 4 more }

The output emitted by a shell tool call.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number | null

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: Array<Output>

An array of shell call output contents.

outcome: Timeout { type } | Exit { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by?: string

The identifier of the actor that created the item.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by?: string

The identifier of the actor that created the item.

ResponseApplyPatchToolCall { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by?: string

The ID of the entity that created this tool call.
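Dispatching on the operation union can be sketched with an exhaustive switch; the variant shapes follow the CreateFile, DeleteFile, and UpdateFile definitions above:

```typescript
// Variant shapes following the apply_patch operation union documented above.
type ApplyPatchOperation =
  | { type: "create_file"; path: string; diff: string }
  | { type: "delete_file"; path: string }
  | { type: "update_file"; path: string; diff: string };

// An exhaustive switch over the discriminant covers every operation kind.
function summarizeOperation(op: ApplyPatchOperation): string {
  switch (op.type) {
    case "create_file":
      return `create ${op.path}`;
    case "delete_file":
      return `delete ${op.path}`;
    case "update_file":
      return `update ${op.path}`;
  }
}
```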

ResponseApplyPatchToolCallOutput { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by?: string

The ID of the entity that created this tool call output.

output?: string | null

Optional textual output returned by the apply patch tool.

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

parallel_tool_calls: boolean

Whether to allow the model to run tool calls in parallel.

temperature: number | null

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

minimum: 0
maximum: 2
tool_choice: ToolChoiceOptions | ToolChoiceAllowed { mode, tools, type } | ToolChoiceTypes { type } | 5 more

How the model should select which tool (or tools) to use when generating a response. See the tools parameter for how to specify which tools the model can call.

Accepts one of the following:
ToolChoiceOptions = "none" | "auto" | "required"

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

Accepts one of the following:
"none"
"auto"
"required"
ToolChoiceAllowed { mode, tools, type }

Constrains the tools available to the model to a pre-defined set.

mode: "auto" | "required"

Constrains the tools available to the model to a pre-defined set.

auto allows the model to pick from among the allowed tools and generate a message.

required requires the model to call one or more of the allowed tools.

Accepts one of the following:
"auto"
"required"
tools: Array<Record<string, unknown>>

A list of tool definitions that the model should be allowed to call.

For the Responses API, the list of tool definitions might look like:

[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]
type: "allowed_tools"

Allowed tool configuration type. Always allowed_tools.

ToolChoiceTypes { type }

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.

type: "file_search" | "web_search_preview" | "computer_use_preview" | 3 more

The type of hosted tool the model should use. Learn more about built-in tools.

Allowed values are:

  • file_search
  • web_search_preview
  • computer_use_preview
  • code_interpreter
  • image_generation
Accepts one of the following:
"file_search"
"web_search_preview"
"computer_use_preview"
"web_search_preview_2025_03_11"
"image_generation"
"code_interpreter"
ToolChoiceFunction { name, type }

Use this option to force the model to call a specific function.

name: string

The name of the function to call.

type: "function"

For function calling, the type is always function.

ToolChoiceMcp { server_label, type, name }

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: string

The label of the MCP server to use.

type: "mcp"

For MCP tools, the type is always mcp.

name?: string | null

The name of the tool to call on the server.

ToolChoiceCustom { name, type }

Use this option to force the model to call a specific custom tool.

name: string

The name of the custom tool to call.

type: "custom"

For custom tool calling, the type is always custom.

ToolChoiceApplyPatch { type }

Forces the model to call the apply_patch tool when executing a tool call.

type: "apply_patch"

The tool to call. Always apply_patch.

ToolChoiceShell { type }

Forces the model to call the shell tool when a tool call is required.

type: "shell"

The tool to call. Always shell.
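The tool_choice variants above can be sketched as plain values; the function name is hypothetical:

```typescript
// ToolChoiceOptions are plain strings: "none" | "auto" | "required".
const autoChoice = "auto";

// ToolChoiceFunction forces a specific function (hypothetical name).
const functionChoice = { type: "function", name: "get_weather" } as const;

// ToolChoiceAllowed restricts the model to a pre-defined set of tools.
const allowedChoice = {
  type: "allowed_tools",
  mode: "required",
  tools: [
    { type: "function", name: "get_weather" },
    { type: "image_generation" },
  ],
} as const;
```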

tools: Array<Tool>

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

We support the following categories of tools:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
  • MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Accepts one of the following:
FunctionTool { name, parameters, strict, 2 more }

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: string

The name of the function to call.

parameters: Record<string, unknown> | null

A JSON schema object describing the parameters of the function.

strict: boolean | null

Whether to enforce strict parameter validation. Default true.

type: "function"

The type of the function tool. Always function.

description?: string | null

A description of the function. Used by the model to determine whether or not to call the function.
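A minimal FunctionTool definition might look like the following sketch; the function name and schema are hypothetical:

```typescript
const getWeatherTool = {
  type: "function",
  name: "get_weather", // hypothetical function name
  description: "Look up the current weather for a given city.",
  strict: true, // enforce strict parameter validation
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
    additionalProperties: false, // required by strict schema validation
  },
} as const;
```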

FileSearchTool { type, vector_store_ids, filters, 2 more }

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: "file_search"

The type of the file search tool. Always file_search.

vector_store_ids: Array<string>

The IDs of the vector stores to search.

filters?: ComparisonFilter { key, type, value } | CompoundFilter { filters, type } | null

A filter to apply.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 5 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
"in"
"nin"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
CompoundFilter { filters, type }

Combine multiple filters using and or or.

filters: Array<ComparisonFilter { key, type, value } | unknown>

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 5 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
"in"
"nin"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
unknown
type: "and" | "or"

Type of operation: and or or.

Accepts one of the following:
"and"
"or"
max_num_results?: number

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options?: RankingOptions { hybrid_search, ranker, score_threshold }

Ranking options for search.

ranker?: "auto" | "default-2024-11-15"

The ranker to use for the file search.

Accepts one of the following:
"auto"
"default-2024-11-15"
score_threshold?: number

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.
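Putting the pieces together, a file_search tool with a compound filter might be sketched as follows; the vector store ID and attribute keys are hypothetical:

```typescript
const fileSearchTool = {
  type: "file_search",
  vector_store_ids: ["vs_123"], // hypothetical vector store ID
  max_num_results: 8, // between 1 and 50 inclusive
  filters: {
    type: "and", // CompoundFilter combining two ComparisonFilters
    filters: [
      { key: "category", type: "eq", value: "report" },
      { key: "year", type: "gte", value: 2023 },
    ],
  },
  ranking_options: { ranker: "auto", score_threshold: 0.5 },
} as const;
```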

ComputerTool { display_height, display_width, environment, type }

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: number

The height of the computer display.

display_width: number

The width of the computer display.

environment: "windows" | "mac" | "linux" | 2 more

The type of computer environment to control.

Accepts one of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: "computer_use_preview"

The type of the computer use tool. Always computer_use_preview.

WebSearchTool { type, filters, search_context_size, user_location }

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: "web_search" | "web_search_2025_08_26"

The type of the web search tool. One of web_search or web_search_2025_08_26.

Accepts one of the following:
"web_search"
"web_search_2025_08_26"
filters?: Filters | null

Filters for the search.

allowed_domains?: Array<string> | null

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size?: "low" | "medium" | "high"

High-level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The approximate location of the user.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

type?: "approximate"

The type of location approximation. Always approximate.
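A web_search tool configuration combining domain filters and an approximate user location might look like this sketch:

```typescript
const webSearchTool = {
  type: "web_search",
  search_context_size: "medium", // the default
  filters: {
    // Subdomains of allowed domains are matched as well.
    allowed_domains: ["pubmed.ncbi.nlm.nih.gov"],
  },
  user_location: {
    type: "approximate",
    city: "San Francisco",
    region: "California",
    country: "US", // two-letter ISO country code
    timezone: "America/Los_Angeles", // IANA timezone
  },
} as const;
```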

Mcp { server_label, type, allowed_tools, 6 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

Accepts one of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether a tool modifies data or is read-only. If a tool on the MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

Accepts one of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object specifying the tools that always require approval.

read_only?: boolean

Indicates whether a tool modifies data or is read-only. If a tool on the MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object specifying the tools that never require approval.

read_only?: boolean

Indicates whether a tool modifies data or is read-only. If a tool on the MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.
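An MCP tool entry with a tool filter and mixed approval rules might be sketched as follows; the server label, URL, and tool name are hypothetical:

```typescript
const mcpTool = {
  type: "mcp",
  server_label: "deepwiki", // label used to identify the server in tool calls
  server_url: "https://example.com/mcp", // hypothetical; or use connector_id instead
  allowed_tools: { read_only: true }, // only expose read-only tools
  require_approval: {
    never: { read_only: true }, // read-only tools run without approval
    always: { tool_names: ["delete_page"] }, // hypothetical tool name
  },
} as const;
```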

CodeInterpreter { container, type }

A tool that runs Python code to help generate a response to a prompt.

container: string | CodeInterpreterToolAuto { type, file_ids, memory_limit }

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

Accepts one of the following:
string
CodeInterpreterToolAuto { type, file_ids, memory_limit }

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: "auto"

Always auto.

file_ids?: Array<string>

An optional list of uploaded files to make available to your code.

memory_limit?: "1g" | "4g" | "16g" | "64g" | null

The memory limit for the code interpreter container.

Accepts one of the following:
"1g"
"4g"
"16g"
"64g"
type: "code_interpreter"

The type of the code interpreter tool. Always code_interpreter.
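A code_interpreter tool using the auto container configuration might be sketched like this; the file ID is hypothetical:

```typescript
// Container given as an object: uploaded files plus an optional memory limit.
const codeInterpreterTool = {
  type: "code_interpreter",
  container: {
    type: "auto",
    file_ids: ["file-abc123"], // hypothetical uploaded file ID
    memory_limit: "4g", // one of "1g" | "4g" | "16g" | "64g"
  },
} as const;
```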

ImageGeneration { type, action, background, 9 more }

A tool that generates images using the GPT image models.

type: "image_generation"

The type of the image generation tool. Always image_generation.

action?: "generate" | "edit" | "auto"

Whether to generate a new image or edit an existing image. Default: auto.

Accepts one of the following:
"generate"
"edit"
"auto"
background?: "transparent" | "opaque" | "auto"

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

Accepts one of the following:
"transparent"
"opaque"
"auto"
input_fidelity?: "high" | "low" | null

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1, gpt-image-1.5, and later models; it is not supported for gpt-image-1-mini. Supports high and low. Defaults to low.

Accepts one of the following:
"high"
"low"
input_image_mask?: InputImageMask { file_id, image_url }

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id?: string

File ID for the mask image.

image_url?: string

Base64-encoded mask image.

model?: (string & {}) | "gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
(string & {})
"gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation?: "auto" | "low"

Moderation level for the generated image. Default: auto.

Accepts one of the following:
"auto"
"low"
output_compression?: number

Compression level for the output image. Default: 100.

minimum: 0
maximum: 100
output_format?: "png" | "webp" | "jpeg"

The output format of the generated image. One of png, webp, or jpeg. Default: png.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_images?: number

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum: 0
maximum: 3
quality?: "low" | "medium" | "high" | "auto"

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size?: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
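An image_generation tool combining several of the options above might be sketched as:

```typescript
const imageTool = {
  type: "image_generation",
  model: "gpt-image-1", // the default model
  action: "generate",
  size: "1024x1024",
  quality: "high",
  background: "transparent",
  output_format: "png", // transparent backgrounds need png or webp
  output_compression: 100, // the default
} as const;
```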
LocalShell { type }

A tool that allows the model to execute shell commands in a local environment.

type: "local_shell"

The type of the local shell tool. Always local_shell.

FunctionShellTool { type }

A tool that allows the model to execute shell commands.

type: "shell"

The type of the shell tool. Always shell.

CustomTool { name, type, description, format }

A custom tool that processes input using a specified format. Learn more about custom tools.

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

description?: string

Optional description of the custom tool, used to provide more context.

format?: Text { type } | Grammar { definition, syntax, type }

The input format for the custom tool. Default is unconstrained text.

Accepts one of the following:
Text { type }

Unconstrained free-form text.

type: "text"

Unconstrained text format. Always text.

Grammar { definition, syntax, type }

A grammar defined by the user.

definition: string

The grammar definition.

syntax: "lark" | "regex"

The syntax of the grammar definition. One of lark or regex.

Accepts one of the following:
"lark"
"regex"
type: "grammar"

Grammar format. Always grammar.
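A custom tool constrained by a regex grammar might be sketched as follows; the tool name and pattern are hypothetical:

```typescript
const queryTool = {
  type: "custom",
  name: "run_query", // hypothetical custom tool name
  description: "Executes a read-only SQL query.",
  format: {
    type: "grammar",
    syntax: "regex", // or "lark"
    definition: "^SELECT .+$", // hypothetical pattern constraining the input
  },
} as const;
```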

WebSearchPreviewTool { type, search_context_size, user_location }

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: "web_search_preview" | "web_search_preview_2025_03_11"

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

Accepts one of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_context_size?: "low" | "medium" | "high"

High-level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The user's location.

type: "approximate"

The type of location approximation. Always approximate.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

ApplyPatchTool { type }

Allows the assistant to create, delete, or update files using unified diffs.

type: "apply_patch"

The type of the tool. Always apply_patch.

top_p: number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

minimum: 0
maximum: 1
background?: boolean | null

Whether to run the model response in the background. Learn more.

completed_at?: number | null

Unix timestamp (in seconds) of when this Response was completed. Only present when the status is completed.

conversation?: Conversation | null

The conversation that this response belonged to. Input items and output items from this response were automatically added to this conversation.

id: string

The unique ID of the conversation that this response was associated with.

max_output_tokens?: number | null

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls?: number | null

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

previous_response_id?: string | null

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.

prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

id: string

The unique identifier of the prompt template to use.

variables?: Record<string, string | ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } > | null

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

Accepts one of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

version?: string | null

Optional version of the prompt template.
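A prompt reference with mixed variable types might be sketched as follows; the template ID, variable names, and file ID are hypothetical:

```typescript
const prompt = {
  id: "pmpt_abc123", // hypothetical prompt template ID
  version: "2",
  variables: {
    customer_name: "Ada", // plain string substitution
    screenshot: {
      // Response input types such as images are also allowed.
      type: "input_image",
      detail: "auto",
      file_id: "file-xyz789", // hypothetical file ID
      image_url: null,
    },
  },
} as const;
```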

prompt_cache_key?: string

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.

prompt_cache_retention?: "in-memory" | "24h" | null

The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.

Accepts one of the following:
"in-memory"
"24h"
reasoning?: Reasoning { effort, generate_summary, summary } | null

gpt-5 and o-series models only

Configuration options for reasoning models.

effort?: ReasoningEffort | null

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
generate_summary?: "auto" | "concise" | "detailed" | null (Deprecated)

Deprecated: use summary instead.

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

Accepts one of the following:
"auto"
"concise"
"detailed"
summary?: "auto" | "concise" | "detailed" | null

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

Accepts one of the following:
"auto"
"concise"
"detailed"
safety_identifier?: string

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.

service_tier?: "auto" | "default" | "flex" | 2 more | null

Specifies the processing type used for serving the request.

  • If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
  • If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
  • If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
  • When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.

Accepts one of the following:
"auto"
"default"
"flex"
"scale"
"priority"

status?: "completed" | "failed" | "in_progress" | 3 more

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.

Accepts one of the following:
"completed"
"failed"
"in_progress"
"cancelled"
"queued"
"incomplete"
text?: ResponseTextConfig { format, verbosity }

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more.

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
ResponseFormatText { type }

Default response format. Used to generate text responses.

type: "text"

The type of response format being defined. Always text.

ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more }

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

name: string

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Record<string, unknown>

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

type: "json_schema"

The type of response format being defined. Always json_schema.

description?: string

A description of what the response format is for, used by the model to determine how to respond in the format.

strict?: boolean | null

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

ResponseFormatJSONObject { type }

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: "json_object"

The type of response format being defined. Always json_object.

verbosity?: "low" | "medium" | "high" | null

Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.

Accepts one of the following:
"low"
"medium"
"high"
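A text configuration enabling Structured Outputs via json_schema might be sketched as follows; the schema name and fields are hypothetical:

```typescript
const textConfig = {
  format: {
    type: "json_schema",
    name: "city_info", // hypothetical schema name (a-z, A-Z, 0-9, _ and -)
    strict: true, // enforce exact schema adherence
    schema: {
      type: "object",
      properties: {
        city: { type: "string" },
        population: { type: "number" },
      },
      required: ["city", "population"],
      additionalProperties: false,
    },
  },
  verbosity: "low",
} as const;
```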
top_logprobs?: number | null

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

minimum0
maximum20
truncation?: "auto" | "disabled" | null

The truncation strategy to use for the model response.

  • auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
  • disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Accepts one of the following:
"auto"
"disabled"
usage?: ResponseUsage { input_tokens, input_tokens_details, output_tokens, 2 more }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

total_tokens: number

The total number of tokens used.
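The usage breakdown above composes simply: cached tokens are a subset of input tokens, and the total is the sum of input and output. A small sketch with illustrative sample values:

```typescript
// Reading a ResponseUsage-shaped object (sample token counts are illustrative).
const usage = {
  input_tokens: 120,
  input_tokens_details: { cached_tokens: 80 },
  output_tokens: 45,
  output_tokens_details: { reasoning_tokens: 30 },
  total_tokens: 165,
};

// Input tokens not served from the prompt cache.
const uncachedInput = usage.input_tokens - usage.input_tokens_details.cached_tokens;

// total_tokens should equal input plus output.
const consistent = usage.total_tokens === usage.input_tokens + usage.output_tokens;
```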

Deprecateduser?: string

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.

sequence_number: number

The sequence number of this event.

type: "response.failed"

The type of the event. Always response.failed.

ResponseIncompleteEvent { response, sequence_number, type }

An event that is emitted when a response finishes as incomplete.

response: Response { id, created_at, error, 29 more }

The response that was incomplete.

id: string

Unique identifier for this Response.

created_at: number

Unix timestamp (in seconds) of when this Response was created.

error: ResponseError { code, message } | null

An error object returned when the model fails to generate a Response.

code: "server_error" | "rate_limit_exceeded" | "invalid_prompt" | 15 more

The error code for the response.

Accepts one of the following:
"server_error"
"rate_limit_exceeded"
"invalid_prompt"
"vector_store_timeout"
"invalid_image"
"invalid_image_format"
"invalid_base64_image"
"invalid_image_url"
"image_too_large"
"image_too_small"
"image_parse_error"
"image_content_policy_violation"
"invalid_image_mode"
"image_file_too_large"
"unsupported_image_media_type"
"empty_image_file"
"failed_to_download_image"
"image_file_not_found"
message: string

A human-readable description of the error.

incomplete_details: IncompleteDetails | null

Details about why the response is incomplete.

reason?: "max_output_tokens" | "content_filter"

The reason why the response is incomplete.

Accepts one of the following:
"max_output_tokens"
"content_filter"
instructions: string | Array<ResponseInputItem> | null

A system (or developer) message inserted into the model's context.

When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.

Accepts one of the following:
string
EasyInputMessage { content, role, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: string | ResponseInputMessageContentList

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
string
ResponseInputMessageContentList = Array<ResponseInputContent>

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "assistant" | "system" | "developer"

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type?: "message"

The type of the message input. Always message.
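An EasyInputMessage with list content mixes the input content types above. A hypothetical multimodal user message (the question text and image URL are made up):

```typescript
// A user message whose content list mixes input_text and input_image items.
const message = {
  type: "message" as const,
  role: "user" as const,
  content: [
    { type: "input_text" as const, text: "What is in this image?" },
    {
      type: "input_image" as const,
      detail: "auto" as const,                       // low | high | auto
      image_url: "https://example.com/cat.png",      // URL or base64 data URL
      file_id: null,
    },
  ],
};
```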

Message { content, role, status, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role.

content: ResponseInputMessageContentList

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "system" | "developer"

The role of the message input. One of user, system, or developer.

Accepts one of the following:
"user"
"system"
"developer"
status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type?: "message"

The type of the message input. Always set to message.

ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>
token: string
bytes: Array<number>
logprob: number
top_logprobs: Array<TopLogprob>
token: string
bytes: Array<number>
logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.
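Reading an output message means walking its content union: output_text items carry the visible text and annotations, refusal items carry a refusal string. A sketch over an illustrative sample message:

```typescript
// Collecting text and url_citation annotations from a ResponseOutputMessage-shaped item.
const outputMessage = {
  type: "message" as const,
  role: "assistant" as const,
  status: "completed" as const,
  id: "msg_123",
  content: [
    {
      type: "output_text" as const,
      text: "See the docs.",
      annotations: [
        { type: "url_citation" as const, url: "https://example.com", title: "Docs", start_index: 8, end_index: 12 },
      ],
    },
    { type: "refusal" as const, refusal: "Cannot help with that part." },
  ],
};

// Concatenate the visible text from output_text parts.
const text = outputMessage.content
  .filter((c) => c.type === "output_text")
  .map((c: any) => c.text)
  .join("");

// Pull out every url_citation annotation's URL.
const urls = outputMessage.content.flatMap((c: any) =>
  (c.annotations ?? []).filter((a: any) => a.type === "url_citation").map((a: any) => a.url),
);
```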

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

formatfloat
text?: string

The text that was retrieved from the file.
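File search results carry a 0-to-1 relevance score, so a common pattern is filtering or ranking by it. A sketch with made-up Result entries (the 0.5 cutoff is an arbitrary illustrative threshold):

```typescript
// Keeping only file_search results above a relevance threshold (sample data).
const results = [
  { file_id: "file_a", filename: "guide.md", score: 0.91, text: "Install steps" },
  { file_id: "file_b", filename: "notes.md", score: 0.42, text: "Scratch notes" },
];

const relevant = results
  .filter((r) => r.score >= 0.5)       // score is a float in [0, 1]
  .sort((a, b) => b.score - a.score)   // highest relevance first
  .map((r) => r.filename);
```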

ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to be performed. One of the computer actions listed below.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.
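Executing a computer_call means dispatching on the action union's type discriminant. A sketch covering a subset of the documented action types (the trace strings are illustrative; a real executor would drive a browser or VM instead):

```typescript
// Dispatching on a subset of the computer_call action union.
type ComputerAction =
  | { type: "click"; button: string; x: number; y: number }
  | { type: "double_click"; x: number; y: number }
  | { type: "keypress"; keys: string[] }
  | { type: "type"; text: string }
  | { type: "screenshot" }
  | { type: "wait" };

function trace(a: ComputerAction): string {
  switch (a.type) {
    case "click": return `${a.button} click at (${a.x}, ${a.y})`;
    case "double_click": return `double click at (${a.x}, ${a.y})`;
    case "keypress": return `press ${a.keys.join("+")}`;
    case "type": return `type ${JSON.stringify(a.text)}`;
    case "screenshot": return "take screenshot";
    case "wait": return "wait";
  }
}
```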

ComputerCallOutput { call_id, output, type, 3 more }

The output of a computer tool call.

call_id: string

The ID of the computer tool call that produced the output.

maxLength64
minLength1
output: ResponseComputerToolCallOutputScreenshot { type, file_id, image_url }

A computer screenshot image used with the computer use tool.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

file_id?: string

The identifier of an uploaded file that contains the screenshot.

image_url?: string

The URL of the screenshot image.

type: "computer_call_output"

The type of the computer tool call output. Always computer_call_output.

id?: string | null

The ID of the computer tool call output.

acknowledged_safety_checks?: Array<AcknowledgedSafetyCheck> | null

The safety checks reported by the API that have been acknowledged by the developer.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
FunctionCallOutput { call_id, output, type, 2 more }

The output of a function tool call.

call_id: string

The unique ID of the function tool call generated by the model.

maxLength64
minLength1
output: string | ResponseFunctionCallOutputItemList

Text, image, or file output of the function tool call.

Accepts one of the following:
string
ResponseFunctionCallOutputItemList = Array<ResponseFunctionCallOutputItem>

An array of content outputs (text, image, file) for the function tool call.

Accepts one of the following:
ResponseInputTextContent { text, type }

A text input to the model.

text: string

The text input to the model.

maxLength10485760
type: "input_text"

The type of the input item. Always input_text.

ResponseInputImageContent { type, detail, file_id, image_url }

An image input to the model. Learn about image inputs.

type: "input_image"

The type of the input item. Always input_image.

detail?: "low" | "high" | "auto" | null

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

maxLength20971520
ResponseInputFileContent { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string | null

The base64-encoded data of the file to be sent to the model.

maxLength33554432
file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string | null

The URL of the file to be sent to the model.

filename?: string | null

The name of the file to be sent to the model.

type: "function_call_output"

The type of the function tool call output. Always function_call_output.

id?: string | null

The unique ID of the function tool call output. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItemParam { encrypted_content, type, id }

A compaction item generated by the v1/responses/compact API.

encrypted_content: string

The encrypted content of the compaction summary.

maxLength10485760
type: "compaction"

The type of the item. Always compaction.

id?: string | null

The ID of the compaction item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.
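Since result holds the image as base64 (and is null until available), consuming it means checking status and decoding. A sketch with a tiny placeholder payload rather than a real image:

```typescript
// Decoding the base64 result of a completed image_generation_call.
// The payload here is a placeholder string, not real PNG data.
const imageCall = {
  type: "image_generation_call" as const,
  id: "ig_1",
  status: "completed" as const,
  result: Buffer.from("png-bytes").toString("base64"),
};

// result can be null (e.g. while generating), so guard before decoding.
const imageBytes =
  imageCall.status === "completed" && imageCall.result !== null
    ? Buffer.from(imageCall.result, "base64")
    : null;
```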

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
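The outputs array of a code_interpreter_call mixes logs and image entries and can be null. A sketch that splits the two kinds (sample values are illustrative):

```typescript
// Splitting code_interpreter_call outputs into log text and image URLs.
const interpreterOutputs:
  | Array<{ type: "logs"; logs: string } | { type: "image"; url: string }>
  | null = [
  { type: "logs", logs: "sum = 6\n" },
  { type: "image", url: "https://example.com/plot.png" },
];

// outputs can be null when nothing was produced, so default to [].
const logText = (interpreterOutputs ?? [])
  .filter((o) => o.type === "logs")
  .map((o: any) => o.logs)
  .join("");

const imageUrls = (interpreterOutputs ?? [])
  .filter((o) => o.type === "image")
  .map((o: any) => o.url);
```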

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

LocalShellCallOutput { id, output, type, status }

The output of a local shell tool call.

id: string

The unique ID of the local shell tool call generated by the model.

output: string

A JSON string of the output of the local shell tool call.

type: "local_shell_call_output"

The type of the local shell tool call output. Always local_shell_call_output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCall { action, call_id, type, 2 more }

A tool call representing a request to execute one or more shell commands.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

Ordered shell commands for the execution environment to run.

max_output_length?: number | null

Maximum number of UTF-8 characters to capture from combined stdout and stderr output.

timeout_ms?: number | null

Maximum wall-clock time in milliseconds to allow the shell commands to run.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength64
minLength1
type: "shell_call"

The type of the item. Always shell_call.

id?: string | null

The unique ID of the shell tool call. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCallOutput { call_id, output, type, 3 more }

The streamed output items emitted by a shell tool call.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength64
minLength1
output: Array<ResponseFunctionShellCallOutputContent { outcome, stderr, stdout } >

Captured chunks of stdout and stderr output, along with their associated outcomes.

outcome: Timeout { type } | Exit { exit_code, type }

The exit or timeout outcome associated with this shell call.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

The exit code returned by the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

Captured stderr output for the shell call.

maxLength10485760
stdout: string

Captured stdout output for the shell call.

maxLength10485760
type: "shell_call_output"

The type of the item. Always shell_call_output.

id?: string | null

The unique ID of the shell tool call output. Populated when this item is returned via API.

max_output_length?: number | null

The maximum number of UTF-8 characters captured for this shell call's combined output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call output.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ApplyPatchCall { call_id, operation, status, 2 more }

A tool call representing a request to create, delete, or update files using diff patches.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength64
minLength1
operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

The specific create, delete, or update instruction for the apply_patch tool call.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction for creating a new file via the apply_patch tool.

diff: string

Unified diff content to apply when creating the file.

maxLength10485760
path: string

Path of the file to create relative to the workspace root.

minLength1
type: "create_file"

The operation type. Always create_file.

DeleteFile { path, type }

Instruction for deleting an existing file via the apply_patch tool.

path: string

Path of the file to delete relative to the workspace root.

minLength1
type: "delete_file"

The operation type. Always delete_file.

UpdateFile { diff, path, type }

Instruction for updating an existing file via the apply_patch tool.

diff: string

Unified diff content to apply to the existing file.

maxLength10485760
path: string

Path of the file to update relative to the workspace root.

minLength1
type: "update_file"

The operation type. Always update_file.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

id?: string | null

The unique ID of the apply patch tool call. Populated when this item is returned via API.

ApplyPatchCallOutput { call_id, status, type, 2 more }

The streamed output emitted by an apply patch tool call.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength64
minLength1
status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

id?: string | null

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

output?: string | null

Optional human-readable log text from the apply patch tool (e.g., patch results or errors).

maxLength10485760
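An apply_patch_call carries one create, delete, or update operation, and the matching output reports completed or failed. A toy in-memory sketch of handling two of the operation types; the diff handling below only keeps added lines and is a deliberate simplification, not the real tool's unified-diff parser:

```typescript
// Toy apply_patch handler over an in-memory file map (simplified diff handling).
type PatchOperation =
  | { type: "create_file"; path: string; diff: string }
  | { type: "delete_file"; path: string };

const files = new Map<string, string>();

function applyPatch(op: PatchOperation): "completed" | "failed" {
  if (op.type === "create_file") {
    if (files.has(op.path)) return "failed"; // cannot create an existing file
    // Simplification: keep only the added ("+") lines of the unified diff.
    const added = op.diff
      .split("\n")
      .filter((l) => l.startsWith("+"))
      .map((l) => l.slice(1))
      .join("\n");
    files.set(op.path, added);
    return "completed";
  }
  return files.delete(op.path) ? "completed" : "failed";
}

const patchStatus = applyPatch({ type: "create_file", path: "notes.txt", diff: "+hello\n+world" });
```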
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

McpApprovalResponse { approval_request_id, approve, type, 2 more }

A response to an MCP approval request.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

id?: string | null

The unique ID of the approval response.

reason?: string | null

Optional reason for the decision.
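Answering an mcp_approval_request means inspecting the requested tool and replying with an mcp_approval_response keyed by the request's id. A sketch where the server label, tool name, and allowlist are all illustrative:

```typescript
// Deciding an MCP approval request against a local tool allowlist (values illustrative).
const approvalRequest = {
  type: "mcp_approval_request" as const,
  id: "mcpr_1",
  server_label: "deepwiki",
  name: "read_wiki",
  arguments: '{"repo":"openai/openai-node"}',
};

const allowedTools = new Set(["read_wiki"]);
const approved = allowedTools.has(approvalRequest.name);

const approvalResponse = {
  type: "mcp_approval_response" as const,
  approval_request_id: approvalRequest.id, // keys the response to the request
  approve: approved,
  reason: approved ? null : "tool not on allowlist",
};
```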

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
ResponseCustomToolCallOutput { call_id, output, type, id }

The output of a custom tool call from your code, being sent back to the model.

call_id: string

The call ID, used to map this custom tool call output to a custom tool call.

output: string | Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >

The output from the custom tool call generated by your code. Can be a string or a list of output content.

Accepts one of the following:
string
Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

type: "custom_tool_call_output"

The type of the custom tool call output. Always custom_tool_call_output.

id?: string

The unique ID of the custom tool call output in the OpenAI platform.
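A `custom_tool_call_output` is mapped back to its `custom_tool_call` through the shared `call_id`. A minimal sketch of that round trip, using reduced type aliases; the `runCustomTool` helper is hypothetical:

```typescript
// Shapes reduced from the items documented above.
type ResponseCustomToolCall = {
  type: "custom_tool_call";
  call_id: string;
  name: string;
  input: string;
};

type ResponseCustomToolCallOutput = {
  type: "custom_tool_call_output";
  call_id: string;
  output: string;
};

// Hypothetical helper: run your own code for the call and wrap the result,
// echoing call_id so the API can map the output back to the call.
function runCustomTool(
  call: ResponseCustomToolCall,
  handler: (name: string, input: string) => string,
): ResponseCustomToolCallOutput {
  return {
    type: "custom_tool_call_output",
    call_id: call.call_id,
    output: handler(call.name, call.input),
  };
}
```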

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

ItemReference { id, type }

An internal identifier for an item to reference.

id: string

The ID of the item to reference.

type?: "item_reference" | null

The type of item to reference. Always item_reference.

metadata: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model: (string & {}) | ChatModel | 14 more

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

Accepts one of the following:
(string & {})
ChatModel = "gpt-5.2" | "gpt-5.2-2025-12-11" | "gpt-5.2-chat-latest" | 69 more
Accepts one of the following:
"gpt-5.2"
"gpt-5.2-2025-12-11"
"gpt-5.2-chat-latest"
"gpt-5.2-pro"
"gpt-5.2-pro-2025-12-11"
"gpt-5.1"
"gpt-5.1-2025-11-13"
"gpt-5.1-codex"
"gpt-5.1-mini"
"gpt-5.1-chat-latest"
"gpt-5"
"gpt-5-mini"
"gpt-5-nano"
"gpt-5-2025-08-07"
"gpt-5-mini-2025-08-07"
"gpt-5-nano-2025-08-07"
"gpt-5-chat-latest"
"gpt-4.1"
"gpt-4.1-mini"
"gpt-4.1-nano"
"gpt-4.1-2025-04-14"
"gpt-4.1-mini-2025-04-14"
"gpt-4.1-nano-2025-04-14"
"o4-mini"
"o4-mini-2025-04-16"
"o3"
"o3-2025-04-16"
"o3-mini"
"o3-mini-2025-01-31"
"o1"
"o1-2024-12-17"
"o1-preview"
"o1-preview-2024-09-12"
"o1-mini"
"o1-mini-2024-09-12"
"gpt-4o"
"gpt-4o-2024-11-20"
"gpt-4o-2024-08-06"
"gpt-4o-2024-05-13"
"gpt-4o-audio-preview"
"gpt-4o-audio-preview-2024-10-01"
"gpt-4o-audio-preview-2024-12-17"
"gpt-4o-audio-preview-2025-06-03"
"gpt-4o-mini-audio-preview"
"gpt-4o-mini-audio-preview-2024-12-17"
"gpt-4o-search-preview"
"gpt-4o-mini-search-preview"
"gpt-4o-search-preview-2025-03-11"
"gpt-4o-mini-search-preview-2025-03-11"
"chatgpt-4o-latest"
"codex-mini-latest"
"gpt-4o-mini"
"gpt-4o-mini-2024-07-18"
"gpt-4-turbo"
"gpt-4-turbo-2024-04-09"
"gpt-4-0125-preview"
"gpt-4-turbo-preview"
"gpt-4-1106-preview"
"gpt-4-vision-preview"
"gpt-4"
"gpt-4-0314"
"gpt-4-0613"
"gpt-4-32k"
"gpt-4-32k-0314"
"gpt-4-32k-0613"
"gpt-3.5-turbo"
"gpt-3.5-turbo-16k"
"gpt-3.5-turbo-0301"
"gpt-3.5-turbo-0613"
"gpt-3.5-turbo-1106"
"gpt-3.5-turbo-0125"
"gpt-3.5-turbo-16k-0613"
"o1-pro" | "o1-pro-2025-03-19" | "o3-pro" | 11 more
"o1-pro"
"o1-pro-2025-03-19"
"o3-pro"
"o3-pro-2025-06-10"
"o3-deep-research"
"o3-deep-research-2025-06-26"
"o4-mini-deep-research"
"o4-mini-deep-research-2025-06-26"
"computer-use-preview"
"computer-use-preview-2025-03-11"
"gpt-5-codex"
"gpt-5-pro"
"gpt-5-pro-2025-10-06"
"gpt-5.1-codex-max"
object: "response"

The object type of this resource - always set to response.

output: Array<ResponseOutputItem>

An array of content items generated by the model.

  • The length and order of items in the output array is dependent on the model's response.
  • Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
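When an SDK-level `output_text` helper is not available, the safe pattern is to scan the whole `output` array for `message` items and concatenate their `output_text` parts, rather than indexing `output[0]`. A sketch under that assumption, with reduced type aliases; `collectOutputText` is a hypothetical helper mirroring the SDK convenience:

```typescript
// Shapes reduced from the output items documented below.
type OutputText = { type: "output_text"; text: string; annotations: unknown[] };
type Refusal = { type: "refusal"; refusal: string };
type Message = {
  type: "message";
  role: "assistant";
  content: Array<OutputText | Refusal>;
};
type OtherItem = { type: string }; // reasoning, tool calls, etc.

// Concatenate every output_text part across all message items, skipping
// reasoning items, tool calls, and refusals.
function collectOutputText(output: Array<Message | OtherItem>): string {
  return output
    .filter((item): item is Message => item.type === "message")
    .flatMap((m) => m.content)
    .filter((c): c is OutputText => c.type === "output_text")
    .map((c) => c.text)
    .join("");
}
```

This stays correct even when the model emits a reasoning item or tool call before the assistant message.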
Accepts one of the following:
ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>
token: string
bytes: Array<number>
logprob: number
top_logprobs: Array<TopLogprob>
token: string
bytes: Array<number>
logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to be performed by the computer use tool, such as a click, scroll, keypress, or screenshot.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.
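A computer call is handled by dispatching on the discriminated `action.type` field documented above. A sketch over three of the variants (`drag`, `keypress`, `move`, `screenshot`, and `wait` omitted for brevity); the dispatcher is hypothetical and a real handler would drive a browser or VM rather than return strings:

```typescript
// Action variants reduced from the documentation above.
type Click = {
  type: "click";
  button: "left" | "right" | "wheel" | "back" | "forward";
  x: number;
  y: number;
};
type Scroll = {
  type: "scroll";
  x: number;
  y: number;
  scroll_x: number;
  scroll_y: number;
};
type TypeAction = { type: "type"; text: string };
type ComputerAction = Click | Scroll | TypeAction;

// Render each action as a command string for a hypothetical automation
// backend; the switch narrows the union on the `type` discriminant.
function describeAction(action: ComputerAction): string {
  switch (action.type) {
    case "click":
      return `click ${action.button} at (${action.x}, ${action.y})`;
    case "scroll":
      return `scroll by (${action.scroll_x}, ${action.scroll_y}) at (${action.x}, ${action.y})`;
    case "type":
      return `type ${JSON.stringify(action.text)}`;
  }
}
```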

ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItem { id, encrypted_content, type, created_by }

A compaction item generated by the v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by?: string

The identifier of the actor that created the item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
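The `outputs` array above mixes `logs` and `image` variants and can also be `null`. A sketch of folding it into a transcript string, with reduced type aliases; the formatter is hypothetical:

```typescript
// Output variants reduced from the documentation above.
type Logs = { type: "logs"; logs: string };
type Image = { type: "image"; url: string };

// Fold the documented output variants into one transcript string,
// handling the documented null case for missing outputs.
function summarizeOutputs(outputs: Array<Logs | Image> | null): string {
  if (outputs === null) return "(no outputs)";
  return outputs
    .map((o) => (o.type === "logs" ? o.logs : `[image: ${o.url}]`))
    .join("\n");
}
```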

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

ResponseFunctionShellToolCall { id, action, call_id, 3 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

The shell commands to run.

max_output_length: number | null

Optional maximum number of characters to return from each command.

timeout_ms: number | null

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseFunctionShellToolCallOutput { id, call_id, max_output_length, 4 more }

The output emitted by a shell tool call.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number | null

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: Array<Output>

An array of shell call output contents.

outcome: Timeout { type } | Exit { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by?: string

The identifier of the actor that created the item.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by?: string

The identifier of the actor that created the item.

ResponseApplyPatchToolCall { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by?: string

The ID of the entity that created this tool call.
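The three `apply_patch` operations above form a discriminated union on `type`. A sketch of executing them against an in-memory file map; a real implementation would parse and apply the diff to disk, so here the diff text merely stands in for file content:

```typescript
// Operation variants reduced from the documentation above.
type CreateFile = { type: "create_file"; path: string; diff: string };
type DeleteFile = { type: "delete_file"; path: string };
type UpdateFile = { type: "update_file"; path: string; diff: string };
type Operation = CreateFile | DeleteFile | UpdateFile;

// Hypothetical executor over an in-memory file map. The diff is treated as
// literal content rather than applied as a real patch.
function applyOperation(files: Map<string, string>, op: Operation): void {
  switch (op.type) {
    case "create_file":
      files.set(op.path, op.diff);
      break;
    case "update_file":
      files.set(op.path, (files.get(op.path) ?? "") + op.diff);
      break;
    case "delete_file":
      files.delete(op.path);
      break;
  }
}
```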

ResponseApplyPatchToolCallOutput { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by?: string

The ID of the entity that created this tool call output.

output?: string | null

Optional textual output returned by the apply patch tool.

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

parallel_tool_calls: boolean

Whether to allow the model to run tool calls in parallel.

temperature: number | null

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

minimum: 0
maximum: 2
tool_choice: ToolChoiceOptions | ToolChoiceAllowed { mode, tools, type } | ToolChoiceTypes { type } | 5 more

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.

Accepts one of the following:
ToolChoiceOptions = "none" | "auto" | "required"

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

Accepts one of the following:
"none"
"auto"
"required"
ToolChoiceAllowed { mode, tools, type }

Constrains the tools available to the model to a pre-defined set.

mode: "auto" | "required"

Constrains the tools available to the model to a pre-defined set.

auto allows the model to pick from among the allowed tools and generate a message.

required requires the model to call one or more of the allowed tools.

Accepts one of the following:
"auto"
"required"
tools: Array<Record<string, unknown>>

A list of tool definitions that the model should be allowed to call.

For the Responses API, the list of tool definitions might look like:

[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]
type: "allowed_tools"

Allowed tool configuration type. Always allowed_tools.
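An `allowed_tools` choice is just the object described above: a mode plus a list of partial tool specs. A sketch of constructing one; the `allowOnly` helper is hypothetical, and the tool entries mirror the documented example:

```typescript
// Shape reduced from the ToolChoiceAllowed documentation above.
type ToolChoiceAllowed = {
  type: "allowed_tools";
  mode: "auto" | "required";
  tools: Array<Record<string, unknown>>;
};

// Hypothetical helper constraining the model to a pre-defined tool set.
function allowOnly(
  mode: "auto" | "required",
  tools: Array<Record<string, unknown>>,
): ToolChoiceAllowed {
  return { type: "allowed_tools", mode, tools };
}

// Matches the documented example: a function, an MCP server, and a
// built-in image generation tool.
const choice = allowOnly("required", [
  { type: "function", name: "get_weather" },
  { type: "mcp", server_label: "deepwiki" },
  { type: "image_generation" },
]);
```

The resulting object is passed as the `tool_choice` parameter of a create-response request.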

ToolChoiceTypes { type }

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.

type: "file_search" | "web_search_preview" | "computer_use_preview" | 3 more

The type of hosted tool the model should use. Learn more about built-in tools.

Allowed values are:

  • file_search
  • web_search_preview
  • computer_use_preview
  • code_interpreter
  • image_generation
Accepts one of the following:
"file_search"
"web_search_preview"
"computer_use_preview"
"web_search_preview_2025_03_11"
"image_generation"
"code_interpreter"
ToolChoiceFunction { name, type }

Use this option to force the model to call a specific function.

name: string

The name of the function to call.

type: "function"

For function calling, the type is always function.

ToolChoiceMcp { server_label, type, name }

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: string

The label of the MCP server to use.

type: "mcp"

For MCP tools, the type is always mcp.

name?: string | null

The name of the tool to call on the server.

ToolChoiceCustom { name, type }

Use this option to force the model to call a specific custom tool.

name: string

The name of the custom tool to call.

type: "custom"

For custom tool calling, the type is always custom.

ToolChoiceApplyPatch { type }

Forces the model to call the apply_patch tool when executing a tool call.

type: "apply_patch"

The tool to call. Always apply_patch.

ToolChoiceShell { type }

Forces the model to call the shell tool when a tool call is required.

type: "shell"

The tool to call. Always shell.

tools: Array<Tool>

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

We support the following categories of tools:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
  • MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Accepts one of the following:
FunctionTool { name, parameters, strict, 2 more }

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: string

The name of the function to call.

parameters: Record<string, unknown> | null

A JSON schema object describing the parameters of the function.

strict: boolean | null

Whether to enforce strict parameter validation. Default true.

type: "function"

The type of the function tool. Always function.

description?: string | null

A description of the function. Used by the model to determine whether or not to call the function.
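A concrete function tool definition using the fields documented above. The `get_weather` tool and its JSON schema are illustrative, not part of the API:

```typescript
// Shape reduced from the FunctionTool documentation above.
type FunctionTool = {
  type: "function";
  name: string;
  description?: string | null;
  parameters: Record<string, unknown> | null;
  strict: boolean | null;
};

// Example definition; the schema is a plain JSON Schema object.
const getWeatherTool: FunctionTool = {
  type: "function",
  name: "get_weather",
  description: "Look up the current weather for a city.",
  strict: true, // enforce strict parameter validation (the documented default)
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
    additionalProperties: false,
  },
};
```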

FileSearchTool { type, vector_store_ids, filters, 2 more }

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: "file_search"

The type of the file search tool. Always file_search.

vector_store_ids: Array<string>

The IDs of the vector stores to search.

filters?: ComparisonFilter { key, type, value } | CompoundFilter { filters, type } | null

A filter to apply.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 3 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
CompoundFilter { filters, type }

Combine multiple filters using and or or.

filters: Array<ComparisonFilter { key, type, value } | unknown>

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 3 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
unknown
type: "and" | "or"

Type of operation: and or or.

Accepts one of the following:
"and"
"or"
max_num_results?: number

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options?: RankingOptions { hybrid_search, ranker, score_threshold }

Ranking options for search.

ranker?: "auto" | "default-2024-11-15"

The ranker to use for the file search.

Accepts one of the following:
"auto"
"default-2024-11-15"
score_threshold?: number

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.
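
Putting the file search fields together, a configuration with a compound metadata filter might look like the sketch below; the vector store ID and attribute keys are placeholders:

```typescript
// Hypothetical file_search tool: restrict results to documents whose
// `department` attribute equals "legal" AND whose `year` is at least 2023.
const fileSearchTool = {
  type: "file_search",
  vector_store_ids: ["vs_example123"], // placeholder vector store ID
  max_num_results: 10, // must be between 1 and 50 inclusive
  filters: {
    type: "and", // CompoundFilter joining two ComparisonFilters
    filters: [
      { key: "department", type: "eq", value: "legal" },
      { key: "year", type: "gte", value: 2023 },
    ],
  },
  ranking_options: { ranker: "auto", score_threshold: 0.5 },
};
```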

ComputerTool { display_height, display_width, environment, type }

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: number

The height of the computer display.

display_width: number

The width of the computer display.

environment: "windows" | "mac" | "linux" | 2 more

The type of computer environment to control.

Accepts one of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: "computer_use_preview"

The type of the computer use tool. Always computer_use_preview.

WebSearchTool { type, filters, search_context_size, user_location }

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: "web_search" | "web_search_2025_08_26"

The type of the web search tool. One of web_search or web_search_2025_08_26.

Accepts one of the following:
"web_search"
"web_search_2025_08_26"
filters?: Filters | null

Filters for the search.

allowed_domains?: Array<string> | null

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The approximate location of the user.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

type?: "approximate"

The type of location approximation. Always approximate.
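
A web_search configuration combining the fields above might look like this sketch; the domain list and location values are illustrative:

```typescript
// Hypothetical web_search tool: restrict sources to one domain (subdomains
// are allowed implicitly) and bias results toward a San Francisco user.
const webSearchTool = {
  type: "web_search",
  filters: { allowed_domains: ["pubmed.ncbi.nlm.nih.gov"] },
  search_context_size: "medium", // the default
  user_location: {
    type: "approximate", // always "approximate"
    city: "San Francisco",
    country: "US", // two-letter ISO country code
    region: "California",
    timezone: "America/Los_Angeles", // IANA timezone
  },
};
```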

Mcp { server_label, type, allowed_tools, 6 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

Accepts one of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

Accepts one of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.
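
Combining the MCP fields above, a custom-server configuration might be sketched as follows; the label, URL, and tool names are placeholders (remember that exactly one of server_url or connector_id is required):

```typescript
// Hypothetical MCP tool entry pointing at a custom server.
const mcpTool = {
  type: "mcp",
  server_label: "internal_docs", // label used to identify the server in tool calls
  server_url: "https://example.com/mcp", // placeholder; or use connector_id instead
  allowed_tools: { read_only: true }, // only tools annotated with readOnlyHint
  require_approval: {
    never: { tool_names: ["search_docs"] }, // hypothetical tool that skips approval
  },
};
```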

CodeInterpreter { container, type }

A tool that runs Python code to help generate a response to a prompt.

container: string | CodeInterpreterToolAuto { type, file_ids, memory_limit }

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

Accepts one of the following:
string
CodeInterpreterToolAuto { type, file_ids, memory_limit }

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: "auto"

Always auto.

file_ids?: Array<string>

An optional list of uploaded files to make available to your code.

memory_limit?: "1g" | "4g" | "16g" | "64g" | null

The memory limit for the code interpreter container.

Accepts one of the following:
"1g"
"4g"
"16g"
"64g"
type: "code_interpreter"

The type of the code interpreter tool. Always code_interpreter.
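
As a sketch, an auto container with uploaded files and a raised memory limit might be configured like this; the file IDs are placeholders:

```typescript
// Hypothetical code_interpreter tool using an auto container.
const codeInterpreterTool = {
  type: "code_interpreter",
  container: {
    type: "auto", // always "auto" for this object form
    file_ids: ["file-abc123", "file-def456"], // placeholder uploaded file IDs
    memory_limit: "4g", // one of "1g" | "4g" | "16g" | "64g"
  },
};
```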

ImageGeneration { type, action, background, 9 more }

A tool that generates images using the GPT image models.

type: "image_generation"

The type of the image generation tool. Always image_generation.

action?: "generate" | "edit" | "auto"

Whether to generate a new image or edit an existing image. Default: auto.

Accepts one of the following:
"generate"
"edit"
"auto"
background?: "transparent" | "opaque" | "auto"

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

Accepts one of the following:
"transparent"
"opaque"
"auto"
input_fidelity?: "high" | "low" | null

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1, gpt-image-1.5, and later models; it is not supported for gpt-image-1-mini. Supports high and low. Defaults to low.

Accepts one of the following:
"high"
"low"
input_image_mask?: InputImageMask { file_id, image_url }

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id?: string

File ID for the mask image.

image_url?: string

Base64-encoded mask image.

model?: (string & {}) | "gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
(string & {})
"gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation?: "auto" | "low"

Moderation level for the generated image. Default: auto.

Accepts one of the following:
"auto"
"low"
output_compression?: number

Compression level for the output image. Default: 100.

minimum: 0
maximum: 100
output_format?: "png" | "webp" | "jpeg"

The output format of the generated image. One of png, webp, or jpeg. Default: png.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_images?: number

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum: 0
maximum: 3
quality?: "low" | "medium" | "high" | "auto"

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size?: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
LocalShell { type }

A tool that allows the model to execute shell commands in a local environment.

type: "local_shell"

The type of the local shell tool. Always local_shell.

FunctionShellTool { type }

A tool that allows the model to execute shell commands.

type: "shell"

The type of the shell tool. Always shell.

CustomTool { name, type, description, format }

A custom tool that processes input using a specified format. Learn more about custom tools

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

description?: string

Optional description of the custom tool, used to provide more context.

format?: Text { type } | Grammar { definition, syntax, type }

The input format for the custom tool. Default is unconstrained text.

Accepts one of the following:
Text { type }

Unconstrained free-form text.

type: "text"

Unconstrained text format. Always text.

Grammar { definition, syntax, type }

A grammar defined by the user.

definition: string

The grammar definition.

syntax: "lark" | "regex"

The syntax of the grammar definition. One of lark or regex.

Accepts one of the following:
"lark"
"regex"
type: "grammar"

Grammar format. Always grammar.
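
The grammar format above can constrain what the model passes to a custom tool. A sketch using a regex grammar (the tool name and pattern are illustrative):

```typescript
// Hypothetical custom tool whose input must be a semantic-version string.
const customTool = {
  type: "custom",
  name: "set_version", // illustrative tool name
  description: "Pin the build to a specific version.",
  format: {
    type: "grammar",
    syntax: "regex", // or "lark"
    definition: String.raw`^\d+\.\d+\.\d+$`, // matches e.g. "1.2.3"
  },
};

// The same pattern can be sanity-checked locally:
const accepts = new RegExp(customTool.format.definition).test("1.2.3"); // true
```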

WebSearchPreviewTool { type, search_context_size, user_location }

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: "web_search_preview" | "web_search_preview_2025_03_11"

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

Accepts one of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The user's location.

type: "approximate"

The type of location approximation. Always approximate.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

ApplyPatchTool { type }

Allows the assistant to create, delete, or update files using unified diffs.

type: "apply_patch"

The type of the tool. Always apply_patch.

top_p: number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

minimum: 0
maximum: 1
background?: boolean | null

Whether to run the model response in the background. Learn more.

completed_at?: number | null

Unix timestamp (in seconds) of when this Response was completed. Only present when the status is completed.

conversation?: Conversation | null

The conversation that this response belonged to. Input items and output items from this response were automatically added to this conversation.

id: string

The unique ID of the conversation that this response was associated with.

max_output_tokens?: number | null

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls?: number | null

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

previous_response_id?: string | null

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.

prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

id: string

The unique identifier of the prompt template to use.

variables?: Record<string, string | ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } > | null

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

Accepts one of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

version?: string | null

Optional version of the prompt template.
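
Tying the prompt fields together, a template reference whose variables mix a plain string with an image input might be sketched as follows; the template ID, variable names, and file ID are placeholders:

```typescript
// Hypothetical prompt template reference with mixed variable types.
const prompt = {
  id: "pmpt_example123", // placeholder template ID
  version: "2", // optional; omit to use the current version
  variables: {
    customer_name: "Ada", // plain string substitution
    id_photo: {
      type: "input_image",
      detail: "auto", // "low" | "high" | "auto"
      file_id: "file-xyz789", // placeholder uploaded file ID
      image_url: null,
    },
  },
};
```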

prompt_cache_key?: string

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.

prompt_cache_retention?: "in-memory" | "24h" | null

The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.

Accepts one of the following:
"in-memory"
"24h"
reasoning?: Reasoning { effort, generate_summary, summary } | null

gpt-5 and o-series models only

Configuration options for reasoning models.

effort?: ReasoningEffort | null

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
generate_summary?: "auto" | "concise" | "detailed" | null

Deprecated: use summary instead.

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

Accepts one of the following:
"auto"
"concise"
"detailed"
summary?: "auto" | "concise" | "detailed" | null

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

Accepts one of the following:
"auto"
"concise"
"detailed"
safety_identifier?: string

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.

service_tier?: "auto" | "default" | "flex" | 2 more | null

Specifies the processing type used for serving the request.

  • If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
  • If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
  • If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
  • When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.

Accepts one of the following:
"auto"
"default"
"flex"
"scale"
"priority"

status?: "completed" | "failed" | "in_progress" | 3 more

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.

Accepts one of the following:
"completed"
"failed"
"in_progress"
"cancelled"
"queued"
"incomplete"
text?: ResponseTextConfig { format, verbosity }

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more.

format?: ResponseFormatText { type } | ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more } | ResponseFormatJSONObject { type }

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
ResponseFormatText { type }

Default response format. Used to generate text responses.

type: "text"

The type of response format being defined. Always text.

ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more }

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

name: string

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Record<string, unknown>

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

type: "json_schema"

The type of response format being defined. Always json_schema.

description?: string

A description of what the response format is for, used by the model to determine how to respond in the format.

strict?: boolean | null

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

ResponseFormatJSONObject { type }

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: "json_object"

The type of response format being defined. Always json_object.
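
A Structured Outputs configuration for the text format above might be sketched like this; the format name and schema are illustrative:

```typescript
// Hypothetical json_schema response format forcing structured output.
const textConfig = {
  format: {
    type: "json_schema",
    name: "ticket_triage", // a-z, A-Z, 0-9, underscores, dashes; max 64 chars
    strict: true, // model must follow the schema exactly
    schema: {
      type: "object",
      properties: {
        severity: { type: "string", enum: ["low", "medium", "high"] },
        summary: { type: "string" },
      },
      required: ["severity", "summary"],
      additionalProperties: false, // required when strict is true
    },
  },
};
```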

verbosity?: "low" | "medium" | "high" | null

Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.

Accepts one of the following:
"low"
"medium"
"high"
top_logprobs?: number | null

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

minimum: 0
maximum: 20
truncation?: "auto" | "disabled" | null

The truncation strategy to use for the model response.

  • auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
  • disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Accepts one of the following:
"auto"
"disabled"
usage?: ResponseUsage { input_tokens, input_tokens_details, output_tokens, 2 more }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

total_tokens: number

The total number of tokens used.
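
With illustrative numbers, the bookkeeping works like this: total_tokens spans both sides, and reasoning tokens are counted inside output_tokens rather than on top of it:

```typescript
// Sample usage payload with made-up but self-consistent numbers.
const usage = {
  input_tokens: 120,
  input_tokens_details: { cached_tokens: 80 },
  output_tokens: 45, // includes the 30 reasoning tokens below
  output_tokens_details: { reasoning_tokens: 30 },
  total_tokens: 165, // input_tokens + output_tokens
};

// Tokens that were NOT served from the prompt cache:
const uncachedInput = usage.input_tokens - usage.input_tokens_details.cached_tokens;
```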

user?: string

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.

sequence_number: number

The sequence number of this event.

type: "response.incomplete"

The type of the event. Always response.incomplete.

ResponseOutputItemAddedEvent { item, output_index, sequence_number, type }

Emitted when a new output item is added.

The output item that was added.

Accepts one of the following:
ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>

token: string

bytes: Array<number>

logprob: number

top_logprobs: Array<TopLogprob>

token: string

bytes: Array<number>

logprob: number

ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to perform.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

A code identifying the type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.
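
As a sketch of how a client might dispatch on this action union, the TypeScript below mirrors the documented shapes with local stand-in types (nothing here is imported from the openai SDK) and produces a trace line per variant:

```typescript
// Local stand-in types mirroring the documented computer_call action union.
type ComputerAction =
  | { type: "click"; button: "left" | "right" | "wheel" | "back" | "forward"; x: number; y: number }
  | { type: "double_click"; x: number; y: number }
  | { type: "drag"; path: Array<{ x: number; y: number }> }
  | { type: "keypress"; keys: string[] }
  | { type: "move"; x: number; y: number }
  | { type: "screenshot" }
  | { type: "scroll"; scroll_x: number; scroll_y: number; x: number; y: number }
  | { type: "type"; text: string }
  | { type: "wait" };

// Produce a human-readable trace line for each action variant. The switch is
// exhaustive, so adding a variant without a case is a compile error.
function describeAction(action: ComputerAction): string {
  switch (action.type) {
    case "click":
      return `${action.button} click at (${action.x}, ${action.y})`;
    case "double_click":
      return `double click at (${action.x}, ${action.y})`;
    case "drag":
      return `drag through ${action.path.length} points`;
    case "keypress":
      return `press ${action.keys.join("+")}`;
    case "move":
      return `move to (${action.x}, ${action.y})`;
    case "screenshot":
      return "take screenshot";
    case "scroll":
      return `scroll by (${action.scroll_x}, ${action.scroll_y}) at (${action.x}, ${action.y})`;
    case "type":
      return `type ${JSON.stringify(action.text)}`;
    case "wait":
      return "wait";
  }
}
```

A real integration would execute each action against its environment rather than format strings, but the discriminated-union narrowing is the same.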

ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItem { id, encrypted_content, type, created_by }

A compaction item generated by the v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by?: string

The identifier of the actor that created the item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
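
Since outputs is a nullable union of logs and images, consumers typically filter for the variant they need; a minimal sketch with local stand-in types:

```typescript
// Local stand-in for the documented code_interpreter_call outputs union.
type CodeInterpreterOutput =
  | { type: "logs"; logs: string }
  | { type: "image"; url: string };

// Concatenate all log outputs; `outputs` can be null when none are available.
function collectLogs(outputs: CodeInterpreterOutput[] | null): string {
  if (!outputs) return "";
  return outputs
    .filter((o): o is { type: "logs"; logs: string } => o.type === "logs")
    .map((o) => o.logs)
    .join("\n");
}
```
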

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

ResponseFunctionShellToolCall { id, action, call_id, 3 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

The shell commands to run.

max_output_length: number | null

Optional maximum number of characters to return from each command.

timeout_ms: number | null

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseFunctionShellToolCallOutput { id, call_id, max_output_length, 4 more }

The output of a shell tool call that was emitted.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number | null

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: Array<Output>

An array of shell call output contents.

outcome: Timeout { type } | Exit { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by?: string

The identifier of the actor that created the item.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by?: string

The identifier of the actor that created the item.
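The outcome union distinguishes a timeout from a normal exit; a small sketch of interpreting it, using local stand-in types:

```typescript
// Local stand-in for the documented shell_call_output outcome union.
type ShellOutcome =
  | { type: "timeout" }
  | { type: "exit"; exit_code: number };

// A timeout never counts as success; an exit succeeds only with code 0.
function outcomeSucceeded(outcome: ShellOutcome): boolean {
  return outcome.type === "exit" && outcome.exit_code === 0;
}
```
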

ResponseApplyPatchToolCall { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by?: string

The ID of the entity that created this tool call.
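A client applying these operations switches on the operation union; the sketch below only formats a description per variant (local stand-in types, no real file I/O):

```typescript
// Local stand-in for the documented apply_patch_call operation union.
type ApplyPatchOperation =
  | { type: "create_file"; path: string; diff: string }
  | { type: "delete_file"; path: string }
  | { type: "update_file"; path: string; diff: string };

// Summarize what the operation would do; a real handler would apply the diff.
function describeOperation(op: ApplyPatchOperation): string {
  switch (op.type) {
    case "create_file":
      return `create ${op.path} (${op.diff.length} chars of diff)`;
    case "delete_file":
      return `delete ${op.path}`;
    case "update_file":
      return `update ${op.path} (${op.diff.length} chars of diff)`;
  }
}
```
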

ResponseApplyPatchToolCallOutput { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by?: string

The ID of the entity that created this tool call output.

output?: string | null

Optional textual output returned by the apply patch tool.

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.
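
To answer an approval request, the client sends back an mcp_approval_response input item referencing the request's ID. The response item's exact shape is not defined in this section, so the fields assumed below (approval_request_id, approve) are illustrative; verify them against the input-items reference:

```typescript
// Local stand-in for the documented mcp_approval_request item.
type McpApprovalRequest = {
  type: "mcp_approval_request";
  id: string;
  server_label: string;
  name: string;
  arguments: string;
};

// Assumed shape of the approval-response input item (not defined in this
// section): approval_request_id echoes the request's ID, approve grants it.
function buildApprovalResponse(request: McpApprovalRequest, approved: boolean) {
  return {
    type: "mcp_approval_response" as const,
    approval_request_id: request.id,
    approve: approved,
  };
}
```
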

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

output_index: number

The index of the output item that was added.

sequence_number: number

The sequence number of this event.

type: "response.output_item.added"

The type of the event. Always response.output_item.added.

ResponseOutputItemDoneEvent { item, output_index, sequence_number, type }

Emitted when an output item is marked done.

The output item that was marked done.

Accepts one of the following:
ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>

token: string

bytes: Array<number>

logprob: number

top_logprobs: Array<TopLogprob>

token: string

bytes: Array<number>

logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.
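
Reading an output message means walking its content union (output_text vs. refusal) and, optionally, its annotations; a sketch with local stand-in types covering only the url_citation annotation:

```typescript
// Local stand-ins for the documented output message content and annotations.
type Annotation =
  | { type: "file_citation"; file_id: string; filename: string; index: number }
  | { type: "url_citation"; url: string; title: string; start_index: number; end_index: number }
  | { type: "file_path"; file_id: string; index: number };

type ContentPart =
  | { type: "output_text"; text: string; annotations: Annotation[] }
  | { type: "refusal"; refusal: string };

// Concatenate the text parts and collect cited URLs; refusal parts are skipped.
function extractText(content: ContentPart[]): { text: string; urls: string[] } {
  let text = "";
  const urls: string[] = [];
  for (const part of content) {
    if (part.type !== "output_text") continue;
    text += part.text;
    for (const a of part.annotations) {
      if (a.type === "url_citation") urls.push(a.url);
    }
  }
  return { text, urls };
}
```
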

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.
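
Since every field of a result is optional, consumers usually defend against missing values; a sketch ranking results by score with local stand-in types:

```typescript
// Local stand-in for a documented file_search_call result entry.
type FileSearchResult = {
  file_id?: string;
  filename?: string;
  score?: number; // relevance between 0 and 1
  text?: string;
  attributes?: Record<string, string | number | boolean> | null;
};

// Return the n best filenames, treating a missing score as 0 and falling
// back to file_id when filename is absent.
function topFilenames(results: FileSearchResult[] | null, n: number): string[] {
  if (!results) return [];
  return [...results]
    .sort((a, b) => (b.score ?? 0) - (a.score ?? 0))
    .slice(0, n)
    .map((r) => r.filename ?? r.file_id ?? "<unknown>");
}
```
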

ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to perform, one of the variants below.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

A code identifying the type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.

ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItem { id, encrypted_content, type, created_by }

A compaction item generated by the v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by?: string

The identifier of the actor that created the item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

ResponseFunctionShellToolCall { id, action, call_id, 3 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

The shell commands to run.

max_output_length: number | null

Optional maximum number of characters to return from each command.

timeout_ms: number | null

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseFunctionShellToolCallOutput { id, call_id, max_output_length, 4 more }

The output of a shell tool call that was emitted.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number | null

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: Array<Output>

An array of shell call output contents.

outcome: Timeout { type } | Exit { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by?: string

The identifier of the actor that created the item.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by?: string

The identifier of the actor that created the item.

ResponseApplyPatchToolCall { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseApplyPatchToolCallOutput { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by?: string

The ID of the entity that created this tool call output.

output?: string | null

Optional textual output returned by the apply patch tool.

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

output_index: number

The index of the output item that was marked done.

sequence_number: number

The sequence number of this event.

type: "response.output_item.done"

The type of the event. Always response.output_item.done.

ResponseReasoningSummaryPartAddedEvent { item_id, output_index, part, 3 more }

Emitted when a new reasoning summary part is added.

item_id: string

The ID of the item this summary part is associated with.

output_index: number

The index of the output item this summary part is associated with.

part: Part { text, type }

The summary part that was added.

text: string

The text of the summary part.

type: "summary_text"

The type of the summary part. Always summary_text.

sequence_number: number

The sequence number of this event.

summary_index: number

The index of the summary part within the reasoning summary.

type: "response.reasoning_summary_part.added"

The type of the event. Always response.reasoning_summary_part.added.

ResponseReasoningSummaryPartDoneEvent { item_id, output_index, part, 3 more }

Emitted when a reasoning summary part is completed.

item_id: string

The ID of the item this summary part is associated with.

output_index: number

The index of the output item this summary part is associated with.

part: Part { text, type }

The completed summary part.

text: string

The text of the summary part.

type: "summary_text"

The type of the summary part. Always summary_text.

sequence_number: number

The sequence number of this event.

summary_index: number

The index of the summary part within the reasoning summary.

type: "response.reasoning_summary_part.done"

The type of the event. Always response.reasoning_summary_part.done.

ResponseReasoningSummaryTextDeltaEvent { delta, item_id, output_index, 3 more }

Emitted when a delta is added to a reasoning summary text.

delta: string

The text delta that was added to the summary.

item_id: string

The ID of the item this summary text delta is associated with.

output_index: number

The index of the output item this summary text delta is associated with.

sequence_number: number

The sequence number of this event.

summary_index: number

The index of the summary part within the reasoning summary.

type: "response.reasoning_summary_text.delta"

The type of the event. Always response.reasoning_summary_text.delta.

ResponseReasoningSummaryTextDoneEvent { item_id, output_index, sequence_number, 3 more }

Emitted when a reasoning summary text is completed.

item_id: string

The ID of the item this summary text is associated with.

output_index: number

The index of the output item this summary text is associated with.

sequence_number: number

The sequence number of this event.

summary_index: number

The index of the summary part within the reasoning summary.

text: string

The full text of the completed reasoning summary.

type: "response.reasoning_summary_text.done"

The type of the event. Always response.reasoning_summary_text.done.
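The summary delta events above can be folded into full summary texts by keying on `item_id` plus `summary_index`; the matching `response.reasoning_summary_text.done` event then carries the same full text. A minimal sketch (local event shape, illustrative only):

```typescript
// Local sketch of the delta event fields documented above.
interface SummaryDelta {
  item_id: string;
  summary_index: number;
  delta: string;
}

// Accumulates reasoning summary text deltas per (item, summary part).
class SummaryAccumulator {
  private parts = new Map<string, string>();

  add(ev: SummaryDelta): void {
    const key = `${ev.item_id}:${ev.summary_index}`;
    this.parts.set(key, (this.parts.get(key) ?? "") + ev.delta);
  }

  // Text accumulated so far for one summary part.
  text(itemId: string, summaryIndex: number): string {
    return this.parts.get(`${itemId}:${summaryIndex}`) ?? "";
  }
}
```
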

ResponseReasoningTextDeltaEvent { content_index, delta, item_id, 3 more }

Emitted when a delta is added to a reasoning text.

content_index: number

The index of the reasoning content part this delta is associated with.

delta: string

The text delta that was added to the reasoning content.

item_id: string

The ID of the item this reasoning text delta is associated with.

output_index: number

The index of the output item this reasoning text delta is associated with.

sequence_number: number

The sequence number of this event.

type: "response.reasoning_text.delta"

The type of the event. Always response.reasoning_text.delta.

ResponseReasoningTextDoneEvent { content_index, item_id, output_index, 3 more }

Emitted when a reasoning text is completed.

content_index: number

The index of the reasoning content part.

item_id: string

The ID of the item this reasoning text is associated with.

output_index: number

The index of the output item this reasoning text is associated with.

sequence_number: number

The sequence number of this event.

text: string

The full text of the completed reasoning content.

type: "response.reasoning_text.done"

The type of the event. Always response.reasoning_text.done.

ResponseRefusalDeltaEvent { content_index, delta, item_id, 3 more }

Emitted when there is a partial refusal text.

content_index: number

The index of the content part that the refusal text is added to.

delta: string

The refusal text that is added.

item_id: string

The ID of the output item that the refusal text is added to.

output_index: number

The index of the output item that the refusal text is added to.

sequence_number: number

The sequence number of this event.

type: "response.refusal.delta"

The type of the event. Always response.refusal.delta.

ResponseRefusalDoneEvent { content_index, item_id, output_index, 3 more }

Emitted when refusal text is finalized.

content_index: number

The index of the content part in which the refusal text is finalized.

item_id: string

The ID of the output item in which the refusal text is finalized.

output_index: number

The index of the output item in which the refusal text is finalized.

refusal: string

The refusal text that is finalized.

sequence_number: number

The sequence number of this event.

type: "response.refusal.done"

The type of the event. Always response.refusal.done.

ResponseTextDeltaEvent { content_index, delta, item_id, 4 more }

Emitted when there is an additional text delta.

content_index: number

The index of the content part that the text delta was added to.

delta: string

The text delta that was added.

item_id: string

The ID of the output item that the text delta was added to.

logprobs: Array<Logprob>

The log probabilities of the tokens in the delta.

token: string

A possible text token.

logprob: number

The log probability of this token.

top_logprobs?: Array<TopLogprob>

The log probabilities of the top 20 most likely tokens.

token?: string

A possible text token.

logprob?: number

The log probability of this token.

output_index: number

The index of the output item that the text delta was added to.

sequence_number: number

The sequence number for this event.

type: "response.output_text.delta"

The type of the event. Always response.output_text.delta.

ResponseTextDoneEvent { content_index, item_id, logprobs, 4 more }

Emitted when text content is finalized.

content_index: number

The index of the content part in which the text content is finalized.

item_id: string

The ID of the output item in which the text content is finalized.

logprobs: Array<Logprob>

The log probabilities of the tokens in the finalized text.

token: string

A possible text token.

logprob: number

The log probability of this token.

top_logprobs?: Array<TopLogprob>

The log probabilities of the top 20 most likely tokens.

token?: string

A possible text token.

logprob?: number

The log probability of this token.

output_index: number

The index of the output item in which the text content is finalized.

sequence_number: number

The sequence number for this event.

text: string

The text content that is finalized.

type: "response.output_text.done"

The type of the event. Always response.output_text.done.
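Since each delta event carries per-token logprobs, concatenating the deltas and summing the logprobs yields the streamed text together with its joint log probability. A sketch with a local event shape (only the two fields used here):

```typescript
// Local sketch of the response.output_text.delta fields used below.
interface TextDelta {
  delta: string;
  logprobs: Array<{ token: string; logprob: number }>;
}

// Fold a sequence of text delta events into the full text and the sum of
// token log probabilities (the joint log-prob of the streamed tokens).
function foldDeltas(events: TextDelta[]): { text: string; logprob: number } {
  let text = "";
  let logprob = 0;
  for (const ev of events) {
    text += ev.delta;
    for (const lp of ev.logprobs) logprob += lp.logprob;
  }
  return { text, logprob };
}
```
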

ResponseWebSearchCallCompletedEvent { item_id, output_index, sequence_number, type }

Emitted when a web search call is completed.

item_id: string

Unique ID for the output item associated with the web search call.

output_index: number

The index of the output item that the web search call is associated with.

sequence_number: number

The sequence number of the web search call being processed.

type: "response.web_search_call.completed"

The type of the event. Always response.web_search_call.completed.

ResponseWebSearchCallInProgressEvent { item_id, output_index, sequence_number, type }

Emitted when a web search call is initiated.

item_id: string

Unique ID for the output item associated with the web search call.

output_index: number

The index of the output item that the web search call is associated with.

sequence_number: number

The sequence number of the web search call being processed.

type: "response.web_search_call.in_progress"

The type of the event. Always response.web_search_call.in_progress.

ResponseWebSearchCallSearchingEvent { item_id, output_index, sequence_number, type }

Emitted when a web search call is executing.

item_id: string

Unique ID for the output item associated with the web search call.

output_index: number

The index of the output item that the web search call is associated with.

sequence_number: number

The sequence number of the web search call being processed.

type: "response.web_search_call.searching"

The type of the event. Always response.web_search_call.searching.
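The three web search events above form a lifecycle (in_progress, then searching, then completed) keyed by `item_id`, so a UI can track each call's latest state by simply overwriting earlier events. A minimal sketch (the overwrite policy is an assumption about how consumers typically render these):

```typescript
// The three documented web search lifecycle event types.
type WebSearchEventType =
  | "response.web_search_call.in_progress"
  | "response.web_search_call.searching"
  | "response.web_search_call.completed";

// Latest known state per web search call item.
const statusByItem = new Map<string, WebSearchEventType>();

// Later lifecycle events overwrite earlier ones for the same item.
function onWebSearchEvent(ev: { type: WebSearchEventType; item_id: string }): void {
  statusByItem.set(ev.item_id, ev.type);
}
```
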

ResponseImageGenCallCompletedEvent { item_id, output_index, sequence_number, type }

Emitted when an image generation tool call has completed and the final image is available.

item_id: string

The unique identifier of the image generation item being processed.

output_index: number

The index of the output item in the response's output array.

sequence_number: number

The sequence number of this event.

type: "response.image_generation_call.completed"

The type of the event. Always 'response.image_generation_call.completed'.

ResponseImageGenCallGeneratingEvent { item_id, output_index, sequence_number, type }

Emitted when an image generation tool call is actively generating an image (intermediate state).

item_id: string

The unique identifier of the image generation item being processed.

output_index: number

The index of the output item in the response's output array.

sequence_number: number

The sequence number of the image generation item being processed.

type: "response.image_generation_call.generating"

The type of the event. Always 'response.image_generation_call.generating'.

ResponseImageGenCallInProgressEvent { item_id, output_index, sequence_number, type }

Emitted when an image generation tool call is in progress.

item_id: string

The unique identifier of the image generation item being processed.

output_index: number

The index of the output item in the response's output array.

sequence_number: number

The sequence number of the image generation item being processed.

type: "response.image_generation_call.in_progress"

The type of the event. Always 'response.image_generation_call.in_progress'.

ResponseImageGenCallPartialImageEvent { item_id, output_index, partial_image_b64, 3 more }

Emitted when a partial image is available during image generation streaming.

item_id: string

The unique identifier of the image generation item being processed.

output_index: number

The index of the output item in the response's output array.

partial_image_b64: string

Base64-encoded partial image data, suitable for rendering as an image.

partial_image_index: number

0-based index for the partial image (backend is 1-based, but this is 0-based for the user).

sequence_number: number

The sequence number of the image generation item being processed.

type: "response.image_generation_call.partial_image"

The type of the event. Always 'response.image_generation_call.partial_image'.
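Each partial image event carries a standalone base64 image, so for progressive rendering it is enough to keep the highest `partial_image_index` seen per item (the keep-latest policy here is an assumption about rendering, not part of the API contract):

```typescript
// Local sketch of the partial image event fields documented above.
interface PartialImageEvent {
  item_id: string;
  partial_image_index: number;
  partial_image_b64: string;
}

// Most complete frame seen so far per image generation item.
const latestFrame = new Map<string, PartialImageEvent>();

// Record an incoming frame and return the best frame for its item;
// frames with a lower partial_image_index than one already seen are ignored.
function onPartialImage(ev: PartialImageEvent): PartialImageEvent {
  const prev = latestFrame.get(ev.item_id);
  if (!prev || ev.partial_image_index > prev.partial_image_index) {
    latestFrame.set(ev.item_id, ev);
  }
  return latestFrame.get(ev.item_id)!;
}
```
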

ResponseMcpCallArgumentsDeltaEvent { delta, item_id, output_index, 2 more }

Emitted when there is a delta (partial update) to the arguments of an MCP tool call.

delta: string

A JSON string containing the partial update to the arguments for the MCP tool call.

item_id: string

The unique identifier of the MCP tool call item being processed.

output_index: number

The index of the output item in the response's output array.

sequence_number: number

The sequence number of this event.

type: "response.mcp_call_arguments.delta"

The type of the event. Always 'response.mcp_call_arguments.delta'.

ResponseMcpCallArgumentsDoneEvent { arguments, item_id, output_index, 2 more }

Emitted when the arguments for an MCP tool call are finalized.

arguments: string

A JSON string containing the finalized arguments for the MCP tool call.

item_id: string

The unique identifier of the MCP tool call item being processed.

output_index: number

The index of the output item in the response's output array.

sequence_number: number

The sequence number of this event.

type: "response.mcp_call_arguments.done"

The type of the event. Always 'response.mcp_call_arguments.done'.
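The argument deltas above are fragments of a single JSON string, so a consumer buffers them per item and parses only once the `.done` event arrives. A minimal sketch:

```typescript
// Partial JSON argument strings, keyed by MCP tool call item ID.
const argBuffers = new Map<string, string>();

// Append one response.mcp_call_arguments.delta fragment.
function onArgsDelta(itemId: string, delta: string): void {
  argBuffers.set(itemId, (argBuffers.get(itemId) ?? "") + delta);
}

// On response.mcp_call_arguments.done, parse the assembled JSON string.
// JSON.parse throws if the stream was truncated mid-document.
function onArgsDone(itemId: string): unknown {
  const raw = argBuffers.get(itemId) ?? "";
  argBuffers.delete(itemId);
  return JSON.parse(raw);
}
```
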

ResponseMcpCallCompletedEvent { item_id, output_index, sequence_number, type }

Emitted when an MCP tool call has completed successfully.

item_id: string

The ID of the MCP tool call item that completed.

output_index: number

The index of the output item that completed.

sequence_number: number

The sequence number of this event.

type: "response.mcp_call.completed"

The type of the event. Always 'response.mcp_call.completed'.

ResponseMcpCallFailedEvent { item_id, output_index, sequence_number, type }

Emitted when an MCP tool call has failed.

item_id: string

The ID of the MCP tool call item that failed.

output_index: number

The index of the output item that failed.

sequence_number: number

The sequence number of this event.

type: "response.mcp_call.failed"

The type of the event. Always 'response.mcp_call.failed'.

ResponseMcpCallInProgressEvent { item_id, output_index, sequence_number, type }

Emitted when an MCP tool call is in progress.

item_id: string

The unique identifier of the MCP tool call item being processed.

output_index: number

The index of the output item in the response's output array.

sequence_number: number

The sequence number of this event.

type: "response.mcp_call.in_progress"

The type of the event. Always 'response.mcp_call.in_progress'.

ResponseMcpListToolsCompletedEvent { item_id, output_index, sequence_number, type }

Emitted when the list of available MCP tools has been successfully retrieved.

item_id: string

The ID of the MCP tool call item that produced this output.

output_index: number

The index of the output item that was processed.

sequence_number: number

The sequence number of this event.

type: "response.mcp_list_tools.completed"

The type of the event. Always 'response.mcp_list_tools.completed'.

ResponseMcpListToolsFailedEvent { item_id, output_index, sequence_number, type }

Emitted when the attempt to list available MCP tools has failed.

item_id: string

The ID of the MCP tool call item that failed.

output_index: number

The index of the output item that failed.

sequence_number: number

The sequence number of this event.

type: "response.mcp_list_tools.failed"

The type of the event. Always 'response.mcp_list_tools.failed'.

ResponseMcpListToolsInProgressEvent { item_id, output_index, sequence_number, type }

Emitted when the system is in the process of retrieving the list of available MCP tools.

item_id: string

The ID of the MCP tool call item that is being processed.

output_index: number

The index of the output item that is being processed.

sequence_number: number

The sequence number of this event.

type: "response.mcp_list_tools.in_progress"

The type of the event. Always 'response.mcp_list_tools.in_progress'.

ResponseOutputTextAnnotationAddedEvent { annotation, annotation_index, content_index, 4 more }

Emitted when an annotation is added to output text content.

annotation: unknown

The annotation object being added. (See annotation schema for details.)

annotation_index: number

The index of the annotation within the content part.

content_index: number

The index of the content part within the output item.

item_id: string

The unique identifier of the item to which the annotation is being added.

output_index: number

The index of the output item in the response's output array.

sequence_number: number

The sequence number of this event.

type: "response.output_text.annotation.added"

The type of the event. Always 'response.output_text.annotation.added'.

ResponseQueuedEvent { response, sequence_number, type }

Emitted when a response is queued and waiting to be processed.

response: Response { id, created_at, error, 29 more }

The full response object that is queued.

id: string

Unique identifier for this Response.

created_at: number

Unix timestamp (in seconds) of when this Response was created.

error: ResponseError { code, message } | null

An error object returned when the model fails to generate a Response.

code: "server_error" | "rate_limit_exceeded" | "invalid_prompt" | 15 more

The error code for the response.

Accepts one of the following:
"server_error"
"rate_limit_exceeded"
"invalid_prompt"
"vector_store_timeout"
"invalid_image"
"invalid_image_format"
"invalid_base64_image"
"invalid_image_url"
"image_too_large"
"image_too_small"
"image_parse_error"
"image_content_policy_violation"
"invalid_image_mode"
"image_file_too_large"
"unsupported_image_media_type"
"empty_image_file"
"failed_to_download_image"
"image_file_not_found"
message: string

A human-readable description of the error.

incomplete_details: IncompleteDetails | null

Details about why the response is incomplete.

reason?: "max_output_tokens" | "content_filter"

The reason why the response is incomplete.

Accepts one of the following:
"max_output_tokens"
"content_filter"
instructions: string | Array<ResponseInputItem> | null

A system (or developer) message inserted into the model's context.

When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.

Accepts one of the following:
string
EasyInputMessage { content, role, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: string | ResponseInputMessageContentList

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
string
ResponseInputMessageContentList = Array<ResponseInputContent>

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "assistant" | "system" | "developer"

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type?: "message"

The type of the message input. Always message.

Message { content, role, status, type }

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role.

content: ResponseInputMessageContentList = Array<ResponseInputContent>

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

role: "user" | "system" | "developer"

The role of the message input. One of user, system, or developer.

Accepts one of the following:
"user"
"system"
"developer"
status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type?: "message"

The type of the message input. Always set to message.

ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>
token: string
bytes: Array<number>
logprob: number
top_logprobs: Array<TopLogprob>
token: string
bytes: Array<number>
logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.
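Because `start_index` and `end_index` on url_citation annotations are character offsets into the message text, markers can be spliced in safely by working from the end of the string backwards, which keeps earlier offsets valid. A sketch with a local annotation shape (marker format is illustrative):

```typescript
// Local sketch of the url_citation annotation fields documented above.
interface UrlCitation {
  type: "url_citation";
  start_index: number;
  end_index: number;
  title: string;
  url: string;
}

// Insert bracketed source titles after each cited span. Sorting by
// descending end_index means earlier offsets stay valid as we splice.
function annotate(text: string, citations: UrlCitation[]): string {
  const sorted = [...citations].sort((a, b) => b.end_index - a.end_index);
  let out = text;
  for (const c of sorted) {
    out = out.slice(0, c.end_index) + ` [${c.title}]` + out.slice(c.end_index);
  }
  return out;
}
```
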

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

format: float
text?: string

The text that was retrieved from the file.

ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to be performed by the computer.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.
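Computer-use actions also carry a `type` discriminator, so an executor can switch over the variants. A sketch covering a subset of the documented actions, with local types defined here for illustration:

```typescript
// Local types for a subset of the documented computer-use actions.
type Action =
  | { type: "click"; button: "left" | "right" | "wheel" | "back" | "forward"; x: number; y: number }
  | { type: "scroll"; x: number; y: number; scroll_x: number; scroll_y: number }
  | { type: "type"; text: string }
  | { type: "wait" };

// Render a human-readable description of an action; a real executor
// would drive a browser or OS automation layer in each branch instead.
function describeAction(action: Action): string {
  switch (action.type) {
    case "click":
      return `${action.button} click at (${action.x}, ${action.y})`;
    case "scroll":
      return `scroll by (${action.scroll_x}, ${action.scroll_y}) at (${action.x}, ${action.y})`;
    case "type":
      return `type ${JSON.stringify(action.text)}`;
    case "wait":
      return "wait";
  }
}
```
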

ComputerCallOutput { call_id, output, type, 3 more }

The output of a computer tool call.

call_id: string

The ID of the computer tool call that produced the output.

maxLength64
minLength1
output: ResponseComputerToolCallOutputScreenshot { type, file_id, image_url }

A computer screenshot image used with the computer use tool.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

file_id?: string

The identifier of an uploaded file that contains the screenshot.

image_url?: string

The URL of the screenshot image.

type: "computer_call_output"

The type of the computer tool call output. Always computer_call_output.

id?: string | null

The ID of the computer tool call output.

acknowledged_safety_checks?: Array<AcknowledgedSafetyCheck> | null

The safety checks reported by the API that have been acknowledged by the developer.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when input items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
Accepts one of the following:
ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
FunctionCallOutput { call_id, output, type, 2 more }

The output of a function tool call.

call_id: string

The unique ID of the function tool call generated by the model.

maxLength64
minLength1
output: string | ResponseFunctionCallOutputItemList

Text, image, or file output of the function tool call.

Accepts one of the following:
string
ResponseFunctionCallOutputItemList = Array<ResponseFunctionCallOutputItem>

An array of content outputs (text, image, file) for the function tool call.

Accepts one of the following:
ResponseInputTextContent { text, type }

A text input to the model.

text: string

The text input to the model.

maxLength10485760
type: "input_text"

The type of the input item. Always input_text.

ResponseInputImageContent { type, detail, file_id, image_url }

An image input to the model. Learn about image inputs

type: "input_image"

The type of the input item. Always input_image.

detail?: "low" | "high" | "auto" | null

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

maxLength20971520
ResponseInputFileContent { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string | null

The base64-encoded data of the file to be sent to the model.

maxLength33554432
file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string | null

The URL of the file to be sent to the model.

filename?: string | null

The name of the file to be sent to the model.

type: "function_call_output"

The type of the function tool call output. Always function_call_output.

id?: string | null

The unique ID of the function tool call output. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItemParam { encrypted_content, type, id }

A compaction item generated by the v1/responses/compact API.

encrypted_content: string

The encrypted content of the compaction summary.

maxLength10485760
type: "compaction"

The type of the item. Always compaction.

id?: string | null

The ID of the compaction item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.
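Since `result` is base64, decode it to raw bytes before writing it to disk. A small sketch, assuming a Node environment with the `Buffer` global:

```typescript
// An image_generation_call item as described above.
interface ImageGenerationCall {
  type: "image_generation_call";
  id: string;
  status: "in_progress" | "completed" | "generating" | "failed";
  result: string | null; // base64-encoded image, or null
}

// Decode the base64 result once the call has completed; otherwise null.
function decodeImageResult(call: ImageGenerationCall): Buffer | null {
  if (call.status !== "completed" || call.result === null) return null;
  return Buffer.from(call.result, "base64");
}

const bytes = decodeImageResult({
  type: "image_generation_call",
  id: "ig_1",
  status: "completed",
  result: Buffer.from("PNG").toString("base64"),
});
```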

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
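Because `outputs` is a union of logs and images (and can be null), consumers typically narrow on the `type` tag. A minimal sketch:

```typescript
// The two output variants described above.
type CodeInterpreterOutput =
  | { type: "logs"; logs: string }
  | { type: "image"; url: string };

// Collect log text and image URLs separately, narrowing on the type tag.
// A null outputs array yields two empty lists.
function splitOutputs(outputs: CodeInterpreterOutput[] | null) {
  const logs: string[] = [];
  const images: string[] = [];
  for (const out of outputs ?? []) {
    if (out.type === "logs") logs.push(out.logs);
    else images.push(out.url);
  }
  return { logs, images };
}

const { logs, images } = splitOutputs([
  { type: "logs", logs: "hello\n" },
  { type: "image", url: "https://example.com/plot.png" },
]);
```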

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

LocalShellCallOutput { id, output, type, status }

The output of a local shell tool call.

id: string

The unique ID of the local shell tool call generated by the model.

output: string

A JSON string of the output of the local shell tool call.

type: "local_shell_call_output"

The type of the local shell tool call output. Always local_shell_call_output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the item. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCall { action, call_id, type, 2 more }

A tool representing a request to execute one or more shell commands.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>

Ordered shell commands for the execution environment to run.

max_output_length?: number | null

Maximum number of UTF-8 characters to capture from combined stdout and stderr output.

timeout_ms?: number | null

Maximum wall-clock time in milliseconds to allow the shell commands to run.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength64
minLength1
type: "shell_call"

The type of the item. Always shell_call.

id?: string | null

The unique ID of the shell tool call. Populated when this item is returned via API.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ShellCallOutput { call_id, output, type, 3 more }

The streamed output items emitted by a shell tool call.

call_id: string

The unique ID of the shell tool call generated by the model.

maxLength64
minLength1
output: Array<ResponseFunctionShellCallOutputContent { outcome, stderr, stdout } >

Captured chunks of stdout and stderr output, along with their associated outcomes.

outcome: Timeout { type } | Exit { exit_code, type }

The exit or timeout outcome associated with this shell call.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

The exit code returned by the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

Captured stderr output for the shell call.

maxLength10485760
stdout: string

Captured stdout output for the shell call.

maxLength10485760
type: "shell_call_output"

The type of the item. Always shell_call_output.

id?: string | null

The unique ID of the shell tool call output. Populated when this item is returned via API.

max_output_length?: number | null

The maximum number of UTF-8 characters captured for this shell call's combined output.

status?: "in_progress" | "completed" | "incomplete" | null

The status of the shell call output.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ApplyPatchCall { call_id, operation, status, 2 more }

A tool call representing a request to create, delete, or update files using diff patches.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength64
minLength1
operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

The specific create, delete, or update instruction for the apply_patch tool call.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction for creating a new file via the apply_patch tool.

diff: string

Unified diff content to apply when creating the file.

maxLength10485760
path: string

Path of the file to create relative to the workspace root.

minLength1
type: "create_file"

The operation type. Always create_file.

DeleteFile { path, type }

Instruction for deleting an existing file via the apply_patch tool.

path: string

Path of the file to delete relative to the workspace root.

minLength1
type: "delete_file"

The operation type. Always delete_file.

UpdateFile { diff, path, type }

Instruction for updating an existing file via the apply_patch tool.

diff: string

Unified diff content to apply to the existing file.

maxLength10485760
path: string

Path of the file to update relative to the workspace root.

minLength1
type: "update_file"

The operation type. Always update_file.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

id?: string | null

The unique ID of the apply patch tool call. Populated when this item is returned via API.

ApplyPatchCallOutput { call_id, status, type, 2 more }

The streamed output emitted by an apply patch tool call.

call_id: string

The unique ID of the apply patch tool call generated by the model.

maxLength64
minLength1
status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

id?: string | null

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

output?: string | null

Optional human-readable log text from the apply patch tool (e.g., patch results or errors).

maxLength10485760
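The apply_patch_call_output item above can be built from the result of applying the patch locally. A minimal sketch (`buildApplyPatchOutput` is a hypothetical helper name):

```typescript
// Build an apply_patch_call_output item: status is "completed" on success,
// "failed" otherwise, with optional human-readable log text in output.
function buildApplyPatchOutput(callId: string, ok: boolean, log: string) {
  return {
    type: "apply_patch_call_output" as const,
    call_id: callId, // must match the originating apply_patch_call
    status: ok ? ("completed" as const) : ("failed" as const),
    output: log,
  };
}

const patchOut = buildApplyPatchOutput("call_p1", true, "Updated src/app.ts");
```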
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

McpApprovalResponse { approval_request_id, approve, type, 2 more }

A response to an MCP approval request.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

id?: string | null

The unique ID of the approval response.

reason?: string | null

Optional reason for the decision.
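The approval flow pairs an mcp_approval_request with an mcp_approval_response via `approval_request_id`. A minimal sketch, assuming a simple allow-list policy of your own (the policy is illustrative, not part of the API):

```typescript
// An mcp_approval_request item as described above.
interface McpApprovalRequest {
  type: "mcp_approval_request";
  id: string;
  name: string;
  arguments: string;
  server_label: string;
}

// Answer the request: approve if the tool name is on the allow list,
// otherwise decline with a reason.
function answerApprovalRequest(req: McpApprovalRequest, allowTools: Set<string>) {
  const ok = allowTools.has(req.name);
  return {
    type: "mcp_approval_response" as const,
    approval_request_id: req.id, // pairs the response to the request
    approve: ok,
    reason: ok ? null : `Tool ${req.name} is not on the allow list.`,
  };
}

const resp = answerApprovalRequest(
  {
    type: "mcp_approval_request",
    id: "apr_1",
    name: "delete_repo",
    arguments: "{}",
    server_label: "github",
  },
  new Set(["list_issues"]),
);
```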

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
ResponseCustomToolCallOutput { call_id, output, type, id }

The output of a custom tool call from your code, being sent back to the model.

call_id: string

The call ID, used to map this custom tool call output to a custom tool call.

output: string | Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >

The output from the custom tool call generated by your code. Can be a string or a list of output content.

Accepts one of the following:
string
Array<ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } >
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

type: "custom_tool_call_output"

The type of the custom tool call output. Always custom_tool_call_output.

id?: string

The unique ID of the custom tool call output in the OpenAI platform.

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

ItemReference { id, type }

An internal identifier for an item to reference.

id: string

The ID of the item to reference.

type?: "item_reference" | null

The type of item to reference. Always item_reference.

metadata: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

Accepts one of the following:
(string & {})
ChatModel = "gpt-5.2" | "gpt-5.2-2025-12-11" | "gpt-5.2-chat-latest" | 69 more
Accepts one of the following:
"gpt-5.2"
"gpt-5.2-2025-12-11"
"gpt-5.2-chat-latest"
"gpt-5.2-pro"
"gpt-5.2-pro-2025-12-11"
"gpt-5.1"
"gpt-5.1-2025-11-13"
"gpt-5.1-codex"
"gpt-5.1-mini"
"gpt-5.1-chat-latest"
"gpt-5"
"gpt-5-mini"
"gpt-5-nano"
"gpt-5-2025-08-07"
"gpt-5-mini-2025-08-07"
"gpt-5-nano-2025-08-07"
"gpt-5-chat-latest"
"gpt-4.1"
"gpt-4.1-mini"
"gpt-4.1-nano"
"gpt-4.1-2025-04-14"
"gpt-4.1-mini-2025-04-14"
"gpt-4.1-nano-2025-04-14"
"o4-mini"
"o4-mini-2025-04-16"
"o3"
"o3-2025-04-16"
"o3-mini"
"o3-mini-2025-01-31"
"o1"
"o1-2024-12-17"
"o1-preview"
"o1-preview-2024-09-12"
"o1-mini"
"o1-mini-2024-09-12"
"gpt-4o"
"gpt-4o-2024-11-20"
"gpt-4o-2024-08-06"
"gpt-4o-2024-05-13"
"gpt-4o-audio-preview"
"gpt-4o-audio-preview-2024-10-01"
"gpt-4o-audio-preview-2024-12-17"
"gpt-4o-audio-preview-2025-06-03"
"gpt-4o-mini-audio-preview"
"gpt-4o-mini-audio-preview-2024-12-17"
"gpt-4o-search-preview"
"gpt-4o-mini-search-preview"
"gpt-4o-search-preview-2025-03-11"
"gpt-4o-mini-search-preview-2025-03-11"
"chatgpt-4o-latest"
"codex-mini-latest"
"gpt-4o-mini"
"gpt-4o-mini-2024-07-18"
"gpt-4-turbo"
"gpt-4-turbo-2024-04-09"
"gpt-4-0125-preview"
"gpt-4-turbo-preview"
"gpt-4-1106-preview"
"gpt-4-vision-preview"
"gpt-4"
"gpt-4-0314"
"gpt-4-0613"
"gpt-4-32k"
"gpt-4-32k-0314"
"gpt-4-32k-0613"
"gpt-3.5-turbo"
"gpt-3.5-turbo-16k"
"gpt-3.5-turbo-0301"
"gpt-3.5-turbo-0613"
"gpt-3.5-turbo-1106"
"gpt-3.5-turbo-0125"
"gpt-3.5-turbo-16k-0613"
"o1-pro" | "o1-pro-2025-03-19" | "o3-pro" | 11 more
"o1-pro"
"o1-pro-2025-03-19"
"o3-pro"
"o3-pro-2025-06-10"
"o3-deep-research"
"o3-deep-research-2025-06-26"
"o4-mini-deep-research"
"o4-mini-deep-research-2025-06-26"
"computer-use-preview"
"computer-use-preview-2025-03-11"
"gpt-5-codex"
"gpt-5-pro"
"gpt-5-pro-2025-10-06"
"gpt-5.1-codex-max"
object: "response"

The object type of this resource - always set to response.

output: Array<ResponseOutputItem>

An array of content items generated by the model.

  • The length and order of items in the output array is dependent on the model's response.
  • Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
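The advice in the bullets above can be sketched directly: walk the output array, keep only assistant messages, and join their output_text parts (this mirrors what the `output_text` convenience property does in SDKs that provide it):

```typescript
// Simplified item and content shapes from the reference above.
type ContentPart =
  | { type: "output_text"; text: string }
  | { type: "refusal"; refusal: string };

type OutputItem =
  | { type: "message"; role: "assistant"; content: ContentPart[] }
  | { type: string }; // other item types (tool calls, reasoning, ...)

// Concatenate all output_text parts across assistant messages, skipping
// non-message items and refusal parts.
function collectOutputText(output: OutputItem[]): string {
  const parts: string[] = [];
  for (const item of output) {
    if (item.type !== "message") continue;
    for (const part of (item as { content: ContentPart[] }).content) {
      if (part.type === "output_text") parts.push(part.text);
    }
  }
  return parts.join("");
}

const text = collectOutputText([
  { type: "reasoning" },
  {
    type: "message",
    role: "assistant",
    content: [{ type: "output_text", text: "Hello" }],
  },
]);
```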
Accepts one of the following:
ResponseOutputMessage { id, content, role, 2 more }

An output message from the model.

id: string

The unique ID of the output message.

content: Array<ResponseOutputText { annotations, text, type, logprobs } | ResponseOutputRefusal { refusal, type } >

The content of the output message.

Accepts one of the following:
ResponseOutputText { annotations, text, type, logprobs }

A text output from the model.

annotations: Array<FileCitation { file_id, filename, index, type } | URLCitation { end_index, start_index, title, 2 more } | ContainerFileCitation { container_id, end_index, file_id, 3 more } | FilePath { file_id, index, type } >

The annotations of the text output.

Accepts one of the following:
FileCitation { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

logprobs?: Array<Logprob>

token: string

bytes: Array<number>

logprob: number

top_logprobs: Array<TopLogprob>

token: string

bytes: Array<number>

logprob: number
ResponseOutputRefusal { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

role: "assistant"

The role of the output message. Always assistant.

status: "in_progress" | "completed" | "incomplete"

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the output message. Always message.

ResponseFileSearchToolCall { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: Array<string>

The queries used to search for files.

status: "in_progress" | "searching" | "completed" | 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results?: Array<Result> | null

The results of the file search tool call.

attributes?: Record<string, string | number | boolean> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
string
number
boolean
file_id?: string

The unique ID of the file.

filename?: string

The name of the file.

score?: number

The relevance score of the file - a value between 0 and 1.

formatfloat
text?: string

The text that was retrieved from the file.

ResponseFunctionToolCall { arguments, call_id, name, 3 more }

A tool call to run a function. See the function calling guide for more information.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

type: "function_call"

The type of the function tool call. Always function_call.

id?: string

The unique ID of the function tool call.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
Accepts one of the following:
Accepts one of the following:
ResponseComputerToolCall { id, action, call_id, 3 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

action: Click { button, type, x, y } | DoubleClick { type, x, y } | Drag { path, type } | 6 more

The action to perform, such as a click, scroll, or keypress.

Accepts one of the following:
Click { button, type, x, y }

A click action.

button: "left" | "right" | "wheel" | 2 more

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: "click"

Specifies the event type. For a click action, this property is always click.

x: number

The x-coordinate where the click occurred.

y: number

The y-coordinate where the click occurred.

DoubleClick { type, x, y }

A double click action.

type: "double_click"

Specifies the event type. For a double click action, this property is always set to double_click.

x: number

The x-coordinate where the double click occurred.

y: number

The y-coordinate where the double click occurred.

Drag { path, type }

A drag action.

path: Array<Path>

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: number

The x-coordinate.

y: number

The y-coordinate.

type: "drag"

Specifies the event type. For a drag action, this property is always set to drag.

Keypress { keys, type }

A collection of keypresses the model would like to perform.

keys: Array<string>

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: "keypress"

Specifies the event type. For a keypress action, this property is always set to keypress.

Move { type, x, y }

A mouse move action.

type: "move"

Specifies the event type. For a move action, this property is always set to move.

x: number

The x-coordinate to move to.

y: number

The y-coordinate to move to.

Screenshot { type }

A screenshot action.

type: "screenshot"

Specifies the event type. For a screenshot action, this property is always set to screenshot.

Scroll { scroll_x, scroll_y, type, 2 more }

A scroll action.

scroll_x: number

The horizontal scroll distance.

scroll_y: number

The vertical scroll distance.

type: "scroll"

Specifies the event type. For a scroll action, this property is always set to scroll.

x: number

The x-coordinate where the scroll occurred.

y: number

The y-coordinate where the scroll occurred.

Type { text, type }

An action to type in text.

text: string

The text to type.

type: "type"

Specifies the event type. For a type action, this property is always set to type.

Wait { type }

A wait action.

type: "wait"

Specifies the event type. For a wait action, this property is always set to wait.
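Because every action variant above carries a distinct `type` tag, a handler can dispatch with an exhaustive switch. A minimal sketch that only describes each action; in a real integration the branches would drive your own mouse/keyboard automation layer:

```typescript
// The action union from the reference above (subset of fields).
type ComputerAction =
  | { type: "click"; button: "left" | "right" | "wheel" | "back" | "forward"; x: number; y: number }
  | { type: "double_click"; x: number; y: number }
  | { type: "drag"; path: { x: number; y: number }[] }
  | { type: "keypress"; keys: string[] }
  | { type: "move"; x: number; y: number }
  | { type: "screenshot" }
  | { type: "scroll"; x: number; y: number; scroll_x: number; scroll_y: number }
  | { type: "type"; text: string }
  | { type: "wait" };

// Exhaustive dispatch on the type tag; TypeScript narrows each branch.
function describeAction(action: ComputerAction): string {
  switch (action.type) {
    case "click":
      return `click ${action.button} at (${action.x}, ${action.y})`;
    case "double_click":
      return `double click at (${action.x}, ${action.y})`;
    case "drag":
      return `drag through ${action.path.length} points`;
    case "keypress":
      return `press ${action.keys.join("+")}`;
    case "move":
      return `move to (${action.x}, ${action.y})`;
    case "screenshot":
      return "take screenshot";
    case "scroll":
      return `scroll by (${action.scroll_x}, ${action.scroll_y}) at (${action.x}, ${action.y})`;
    case "type":
      return `type ${JSON.stringify(action.text)}`;
    case "wait":
      return "wait";
  }
}

const desc = describeAction({ type: "click", button: "left", x: 100, y: 200 });
```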

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: Array<PendingSafetyCheck>

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code?: string | null

The type of the pending safety check.

message?: string | null

Details about the pending safety check.

status: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.

ResponseReasoningItem { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: Array<Summary>

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content?: Array<Content>

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content?: string | null

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status?: "in_progress" | "completed" | "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
ResponseCompactionItem { id, encrypted_content, type, created_by }

A compaction item generated by the v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by?: string

The identifier of the actor that created the item.

ImageGenerationCall { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string | null

The generated image encoded in base64.

status: "in_progress" | "completed" | "generating" | "failed"

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.
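Since `result` is a base64-encoded image, a completed call can be decoded and written to disk. A minimal sketch with a hand-built call object (in practice it comes from `response.output`; the `result` here is a 1x1 placeholder PNG):

```typescript
import { writeFileSync } from "node:fs";

// Hand-built example of an image_generation_call output item.
const call = {
  id: "ig_123",
  type: "image_generation_call" as const,
  status: "completed" as const,
  // Placeholder: a 1x1 transparent PNG standing in for a real generated image.
  result:
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==",
};

// Decode the base64 payload into raw bytes.
const bytes = call.result !== null ? Buffer.from(call.result, "base64") : null;

if (call.status === "completed" && bytes) {
  writeFileSync("generated.png", bytes);
}
```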

ResponseCodeInterpreterToolCall { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string | null

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: Array<Logs { logs, type } | Image { type, url } > | null

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
Logs { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" | "completed" | "incomplete" | 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
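Because `outputs` is a union of log and image entries (and may be null), consuming code typically narrows on `type`. A minimal sketch with made-up values:

```typescript
// The Logs | Image union documented above for code interpreter outputs.
type InterpreterOutput =
  | { type: "logs"; logs: string }
  | { type: "image"; url: string };

// Flatten the outputs array into a printable string, handling the null case.
function renderOutputs(outputs: InterpreterOutput[] | null): string {
  if (!outputs) return "(no outputs)";
  return outputs
    .map((o) => (o.type === "logs" ? o.logs : `[image: ${o.url}]`))
    .join("\n");
}

const rendered = renderOutputs([
  { type: "logs", logs: "sum = 42" },
  { type: "image", url: "https://example.com/plot.png" },
]);
```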

LocalShellCall { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: Action { command, env, type, 3 more }

Execute a shell command on the server.

command: Array<string>

The command to run.

env: Record<string, string>

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms?: number | null

Optional timeout in milliseconds for the command.

user?: string | null

Optional user to run the command as.

working_directory?: string | null

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.
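Handling a local shell call means executing the `exec` action on your machine and returning the captured output. A minimal sketch with a hand-built action (the timeout and env handling follow the optional fields documented above):

```typescript
import { execFileSync } from "node:child_process";

// Hand-built example of a local_shell exec action.
const action = {
  type: "exec" as const,
  command: ["echo", "hello"],
  env: {} as Record<string, string>,
  timeout_ms: 5_000 as number | null,
};

// Run the command with the requested environment and timeout, capturing stdout
// to send back as the tool call output.
const [bin, ...args] = action.command;
const stdout = execFileSync(bin, args, {
  env: { ...process.env, ...action.env },
  timeout: action.timeout_ms ?? undefined,
  encoding: "utf8",
});
```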

ResponseFunctionShellToolCall { id, action, call_id, 3 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: Action { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: Array<string>
max_output_length: number | null

Optional maximum number of characters to return from each command.

timeout_ms: number | null

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by?: string

The ID of the entity that created this tool call.

ResponseFunctionShellToolCallOutput { id, call_id, max_output_length, 4 more }

The output of a shell tool call that was emitted.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number | null

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: Array<Output>

An array of shell call output contents.

outcome: Timeout { type } | Exit { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

Accepts one of the following:
Timeout { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by?: string

The identifier of the actor that created the item.

status: "in_progress" | "completed" | "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by?: string

The identifier of the actor that created the item.

ResponseApplyPatchToolCall { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: CreateFile { diff, path, type } | DeleteFile { path, type } | UpdateFile { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

Accepts one of the following:
CreateFile { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" | "completed"

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by?: string

The ID of the entity that created this tool call.
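The `operation` union is a three-way discriminated union, so handlers typically switch on `type`. A minimal routing sketch (applying the actual diff is elided; the paths and diff are placeholders):

```typescript
// The create_file | delete_file | update_file union documented above.
type PatchOp =
  | { type: "create_file"; path: string; diff: string }
  | { type: "delete_file"; path: string }
  | { type: "update_file"; path: string; diff: string };

// Dispatch on the operation variant; a real handler would apply the diff here.
function describeOperation(op: PatchOp): string {
  switch (op.type) {
    case "create_file":
      return `create ${op.path}`;
    case "delete_file":
      return `delete ${op.path}`;
    case "update_file":
      return `update ${op.path}`;
  }
}

const summary = describeOperation({
  type: "update_file",
  path: "src/app.ts",
  diff: "@@ -1 +1 @@",
});
```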

ResponseApplyPatchToolCallOutput { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" | "failed"

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by?: string

The ID of the entity that created this tool call output.

output?: string | null

Optional textual output returned by the apply patch tool.

McpCall { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error?: string | null

The error from the tool call, if any.

output?: string | null

The output from the tool call.

status?: "in_progress" | "completed" | "incomplete" | 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
McpListTools { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error?: string | null

Error message if the server could not list tools.

McpApprovalRequest { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.
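To answer an approval request, you send back an input item referencing its ID, as described under `approval_request_id` on McpCall above. A minimal sketch, assuming the `mcp_approval_response` input shape with `approval_request_id` and `approve` fields:

```typescript
// Hand-built example of an mcp_approval_request output item.
const request = {
  id: "mcpr_123",
  type: "mcp_approval_request" as const,
  server_label: "deepwiki",
  name: "search",
  arguments: JSON.stringify({ query: "MCP" }),
};

// The response item to include in the next request's input, approving the call.
const approvalResponse = {
  type: "mcp_approval_response" as const,
  approval_request_id: request.id,
  approve: true,
};
```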

ResponseCustomToolCall { call_id, input, name, 2 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id?: string

The unique ID of the custom tool call in the OpenAI platform.

parallel_tool_calls: boolean

Whether to allow the model to run tool calls in parallel.

temperature: number | null

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

minimum: 0
maximum: 2
tool_choice: ToolChoiceOptions | ToolChoiceAllowed { mode, tools, type } | ToolChoiceTypes { type } | 5 more

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.

Accepts one of the following:
ToolChoiceOptions = "none" | "auto" | "required"

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

Accepts one of the following:
"none"
"auto"
"required"
ToolChoiceAllowed { mode, tools, type }

Constrains the tools available to the model to a pre-defined set.

mode: "auto" | "required"

Constrains the tools available to the model to a pre-defined set.

auto allows the model to pick from among the allowed tools and generate a message.

required requires the model to call one or more of the allowed tools.

Accepts one of the following:
"auto"
"required"
tools: Array<Record<string, unknown>>

A list of tool definitions that the model should be allowed to call.

For the Responses API, the list of tool definitions might look like:

[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]
type: "allowed_tools"

Allowed tool configuration type. Always allowed_tools.

ToolChoiceTypes { type }

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.

type: "file_search" | "web_search_preview" | "computer_use_preview" | 3 more

The type of hosted tool the model should use. Learn more about built-in tools.

Allowed values are:

  • file_search
  • web_search_preview
  • computer_use_preview
  • code_interpreter
  • image_generation
Accepts one of the following:
"file_search"
"web_search_preview"
"computer_use_preview"
"web_search_preview_2025_03_11"
"image_generation"
"code_interpreter"
ToolChoiceFunction { name, type }

Use this option to force the model to call a specific function.

name: string

The name of the function to call.

type: "function"

For function calling, the type is always function.

ToolChoiceMcp { server_label, type, name }

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: string

The label of the MCP server to use.

type: "mcp"

For MCP tools, the type is always mcp.

name?: string | null

The name of the tool to call on the server.

ToolChoiceCustom { name, type }

Use this option to force the model to call a specific custom tool.

name: string

The name of the custom tool to call.

type: "custom"

For custom tool calling, the type is always custom.

ToolChoiceApplyPatch { type }

Forces the model to call the apply_patch tool when executing a tool call.

type: "apply_patch"

The tool to call. Always apply_patch.

ToolChoiceShell { type }

Forces the model to call the shell tool when a tool call is required.

type: "shell"

The tool to call. Always shell.
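A few hand-built `tool_choice` values covering the main variants documented above (the tool names are placeholders):

```typescript
// Simplest form: one of the ToolChoiceOptions strings.
const choiceAuto = "auto" as const;

// Constrain the model to a pre-defined set of tools (ToolChoiceAllowed).
const choiceAllowed = {
  type: "allowed_tools" as const,
  mode: "auto" as const,
  tools: [
    { type: "function", name: "get_weather" },
    { type: "image_generation" },
  ],
};

// Force a specific function call (ToolChoiceFunction).
const choiceFunction = {
  type: "function" as const,
  name: "get_weather",
};
```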

tools: Array<Tool>

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

We support the following categories of tools:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
  • MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Accepts one of the following:
FunctionTool { name, parameters, strict, 2 more }

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: string

The name of the function to call.

parameters: Record<string, unknown> | null

A JSON schema object describing the parameters of the function.

strict: boolean | null

Whether to enforce strict parameter validation. Default true.

type: "function"

The type of the function tool. Always function.

description?: string | null

A description of the function. Used by the model to determine whether or not to call the function.
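A minimal function tool definition following the FunctionTool shape above; the weather function and its parameter schema are made-up examples:

```typescript
// A strict function tool: parameters is a JSON Schema object, and strict mode
// typically requires required fields plus additionalProperties: false.
const getWeatherTool = {
  type: "function" as const,
  name: "get_weather",
  description: "Get the current weather for a city.",
  strict: true,
  parameters: {
    type: "object",
    properties: {
      city: { type: "string", description: "City name, e.g. Paris" },
    },
    required: ["city"],
    additionalProperties: false,
  },
};
```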

FileSearchTool { type, vector_store_ids, filters, 2 more }

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: "file_search"

The type of the file search tool. Always file_search.

vector_store_ids: Array<string>

The IDs of the vector stores to search.

filters?: ComparisonFilter { key, type, value } | CompoundFilter { filters, type } | null

A filter to apply.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 3 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
CompoundFilter { filters, type }

Combine multiple filters using and or or.

filters: Array<ComparisonFilter { key, type, value } | unknown>

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

Accepts one of the following:
ComparisonFilter { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" | "ne" | "gt" | 3 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: string | number | boolean | Array<string | number>

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
string
number
boolean
Array<string | number>
string
number
unknown
type: "and" | "or"

Type of operation: and or or.

Accepts one of the following:
"and"
"or"
max_num_results?: number

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options?: RankingOptions { hybrid_search, ranker, score_threshold }

Ranking options for search.

ranker?: "auto" | "default-2024-11-15"

The ranker to use for the file search.

Accepts one of the following:
"auto"
"default-2024-11-15"
score_threshold?: number

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.
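Putting the pieces together, a file_search tool with a compound metadata filter combining two comparison filters with `and` (the vector store ID and attribute keys are placeholders):

```typescript
// A file_search tool entry per the FileSearchTool schema above.
const fileSearchTool = {
  type: "file_search" as const,
  vector_store_ids: ["vs_123"],
  max_num_results: 10,
  // CompoundFilter combining two ComparisonFilters.
  filters: {
    type: "and" as const,
    filters: [
      { key: "category", type: "eq" as const, value: "support" },
      { key: "year", type: "gte" as const, value: 2023 },
    ],
  },
};
```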

ComputerTool { display_height, display_width, environment, type }

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: number

The height of the computer display.

display_width: number

The width of the computer display.

environment: "windows" | "mac" | "linux" | 2 more

The type of computer environment to control.

Accepts one of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: "computer_use_preview"

The type of the computer use tool. Always computer_use_preview.

WebSearchTool { type, filters, search_context_size, user_location }

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: "web_search" | "web_search_2025_08_26"

The type of the web search tool. One of web_search or web_search_2025_08_26.

Accepts one of the following:
"web_search"
"web_search_2025_08_26"
filters?: Filters | null

Filters for the search.

allowed_domains?: Array<string> | null

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The approximate location of the user.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

type?: "approximate"

The type of location approximation. Always approximate.
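A hand-built web_search tool configuration exercising the fields above (the domain and location values are illustrative):

```typescript
// A web_search tool entry per the WebSearchTool schema above.
const webSearchTool = {
  type: "web_search" as const,
  search_context_size: "medium" as const,
  // Restrict results to one domain; subdomains are allowed as well.
  filters: { allowed_domains: ["pubmed.ncbi.nlm.nih.gov"] },
  user_location: {
    type: "approximate" as const,
    country: "US",
    city: "San Francisco",
    timezone: "America/Los_Angeles",
  },
};
```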

Mcp { server_label, type, allowed_tools, 6 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

Accepts one of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

Accepts one of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.
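A sketch of an MCP tool entry using a custom server URL, a tool filter, and no approval gating; the label and URL are placeholders:

```typescript
// An mcp tool entry per the Mcp schema above.
const mcpTool = {
  type: "mcp" as const,
  server_label: "deepwiki",
  // One of server_url or connector_id must be provided.
  server_url: "https://mcp.deepwiki.example/mcp",
  // Restrict the model to named, read-only tools.
  allowed_tools: { tool_names: ["search"], read_only: true },
  require_approval: "never" as const,
};
```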

CodeInterpreter { container, type }

A tool that runs Python code to help generate a response to a prompt.

container: string | CodeInterpreterToolAuto { type, file_ids, memory_limit }

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

Accepts one of the following:
string
CodeInterpreterToolAuto { type, file_ids, memory_limit }

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: "auto"

Always auto.

file_ids?: Array<string>

An optional list of uploaded files to make available to your code.

memory_limit?: "1g" | "4g" | "16g" | "64g" | null

The memory limit for the code interpreter container.

Accepts one of the following:
"1g"
"4g"
"16g"
"64g"
type: "code_interpreter"

The type of the code interpreter tool. Always code_interpreter.
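A code_interpreter tool using the auto container form with uploaded files and a memory limit (the file ID is a placeholder):

```typescript
// A code_interpreter tool entry per the CodeInterpreter schema above;
// container may alternatively be a bare container ID string.
const codeInterpreterTool = {
  type: "code_interpreter" as const,
  container: {
    type: "auto" as const,
    file_ids: ["file_abc"],
    memory_limit: "4g" as const,
  },
};
```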

ImageGeneration { type, action, background, 9 more }

A tool that generates images using the GPT image models.

type: "image_generation"

The type of the image generation tool. Always image_generation.

action?: "generate" | "edit" | "auto"

Whether to generate a new image or edit an existing image. Default: auto.

Accepts one of the following:
"generate"
"edit"
"auto"
background?: "transparent" | "opaque" | "auto"

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

Accepts one of the following:
"transparent"
"opaque"
"auto"
input_fidelity?: "high" | "low" | null

Control how much effort the model will exert to match the style and features, especially facial features, of input images. Supported for gpt-image-1, gpt-image-1.5, and later models; not supported for gpt-image-1-mini. Supports high and low. Defaults to low.

Accepts one of the following:
"high"
"low"
input_image_mask?: InputImageMask { file_id, image_url }

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id?: string

File ID for the mask image.

image_url?: string

Base64-encoded mask image.

model?: (string & {}) | "gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
(string & {})
"gpt-image-1" | "gpt-image-1-mini" | "gpt-image-1.5"
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation?: "auto" | "low"

Moderation level for the generated image. Default: auto.

Accepts one of the following:
"auto"
"low"
output_compression?: number

Compression level for the output image. Default: 100.

minimum: 0
maximum: 100
output_format?: "png" | "webp" | "jpeg"

The output format of the generated image. One of png, webp, or jpeg. Default: png.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_images?: number

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum: 0
maximum: 3
quality?: "low" | "medium" | "high" | "auto"

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size?: "1024x1024" | "1024x1536" | "1536x1024" | "auto"

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
LocalShell { type }

A tool that allows the model to execute shell commands in a local environment.

type: "local_shell"

The type of the local shell tool. Always local_shell.

FunctionShellTool { type }

A tool that allows the model to execute shell commands.

type: "shell"

The type of the shell tool. Always shell.

CustomTool { name, type, description, format }

A custom tool that processes input using a specified format. Learn more about custom tools.

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

description?: string

Optional description of the custom tool, used to provide more context.

format?: Text { type } | Grammar { definition, syntax, type }

The input format for the custom tool. Default is unconstrained text.

Accepts one of the following:
Text { type }

Unconstrained free-form text.

type: "text"

Unconstrained text format. Always text.

Grammar { definition, syntax, type }

A grammar defined by the user.

definition: string

The grammar definition.

syntax: "lark" | "regex"

The syntax of the grammar definition. One of lark or regex.

Accepts one of the following:
"lark"
"regex"
type: "grammar"

Grammar format. Always grammar.
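A custom tool constrained by a regex grammar, following the CustomTool and Grammar shapes above; the tool name and pattern are illustrative:

```typescript
// A custom tool whose input must match a regex grammar.
const versionTool = {
  type: "custom" as const,
  name: "pick_version",
  description: "Reply with a semantic version string.",
  format: {
    type: "grammar" as const,
    syntax: "regex" as const,
    definition: String.raw`^\d+\.\d+\.\d+$`,
  },
};
```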

WebSearchPreviewTool { type, search_context_size, user_location }

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: "web_search_preview" | "web_search_preview_2025_03_11"

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

Accepts one of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_context_size?: "low" | "medium" | "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location?: UserLocation | null

The user's location.

type: "approximate"

The type of location approximation. Always approximate.

city?: string | null

Free text input for the city of the user, e.g. San Francisco.

country?: string | null

The two-letter ISO country code of the user, e.g. US.

region?: string | null

Free text input for the region of the user, e.g. California.

timezone?: string | null

The IANA timezone of the user, e.g. America/Los_Angeles.

ApplyPatchTool { type }

Allows the assistant to create, delete, or update files using unified diffs.

type: "apply_patch"

The type of the tool. Always apply_patch.

top_p: number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

minimum: 0
maximum: 1
background?: boolean | null

Whether to run the model response in the background. Learn more.

completed_at?: number | null

Unix timestamp (in seconds) of when this Response was completed. Only present when the status is completed.

conversation?: Conversation | null

The conversation that this response belonged to. Input items and output items from this response were automatically added to this conversation.

id: string

The unique ID of the conversation that this response was associated with.

max_output_tokens?: number | null

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls?: number | null

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

previous_response_id?: string | null

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.

prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

id: string

The unique identifier of the prompt template to use.

variables?: Record<string, string | ResponseInputText { text, type } | ResponseInputImage { detail, type, file_id, image_url } | ResponseInputFile { type, file_data, file_id, 2 more } > | null

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

Accepts one of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" | "high" | "auto"

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: "input_image"

The type of the input item. Always input_image.

file_id?: string | null

The ID of the file to be sent to the model.

image_url?: string | null

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data?: string

The content of the file to be sent to the model.

file_id?: string | null

The ID of the file to be sent to the model.

file_url?: string

The URL of the file to be sent to the model.

filename?: string

The name of the file to be sent to the model.

version?: string | null

Optional version of the prompt template.
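A prompt reference combining the fields above might look like the following sketch. The template ID, version, and variable values are invented for illustration; the variable map mixes a plain string with a `ResponseInputImage`-shaped object, as the `variables` description allows.

```typescript
// Sketch: a ResponsePrompt-shaped object with mixed variable types.
// All ids and values here are hypothetical, not from the API reference.
const prompt = {
  id: 'pmpt_example123',   // hypothetical prompt template id
  version: '2',            // optional template version
  variables: {
    customer_name: 'Ada',  // plain string substitution
    photo: {               // ResponseInputImage-shaped substitution
      type: 'input_image' as const,
      detail: 'auto' as const,
      file_id: 'file_example456',
      image_url: null,
    },
  },
};

console.log(prompt.variables.customer_name);
```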

prompt_cache_key?: string

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.

prompt_cache_retention?: "in-memory" | "24h" | null

The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.

Accepts one of the following:
"in-memory"
"24h"
reasoning?: Reasoning { effort, generate_summary, summary } | null

gpt-5 and o-series models only

Configuration options for reasoning models.

effort?: ReasoningEffort | null

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
Deprecatedgenerate_summary?: "auto" | "concise" | "detailed" | null

Deprecated: use summary instead.

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

Accepts one of the following:
"auto"
"concise"
"detailed"
summary?: "auto" | "concise" | "detailed" | null

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

Accepts one of the following:
"auto"
"concise"
"detailed"
safety_identifier?: string

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.

service_tier?: "auto" | "default" | "flex" | 2 more | null

Specifies the processing type used for serving the request.

  • If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
  • If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
  • If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
  • When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.

Accepts one of the following:
"auto"
"default"
"flex"
"scale"
"priority"

status?: "completed" | "failed" | "in_progress" | 3 more

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.

Accepts one of the following:
"completed"
"failed"
"in_progress"
"cancelled"
"queued"
"incomplete"
text?: ResponseTextConfig { format, verbosity }

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more.

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
ResponseFormatText { type }

Default response format. Used to generate text responses.

type: "text"

The type of response format being defined. Always text.

ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more }

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

name: string

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Record<string, unknown>

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

type: "json_schema"

The type of response format being defined. Always json_schema.

description?: string

A description of what the response format is for, used by the model to determine how to respond in the format.

strict?: boolean | null

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

ResponseFormatJSONObject { type }

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: "json_object"

The type of response format being defined. Always json_object.

verbosity?: "low" | "medium" | "high" | null

Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.

Accepts one of the following:
"low"
"medium"
"high"
top_logprobs?: number | null

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

minimum0
maximum20
truncation?: "auto" | "disabled" | null

The truncation strategy to use for the model response.

  • auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
  • disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Accepts one of the following:
"auto"
"disabled"
usage?: ResponseUsage { input_tokens, input_tokens_details, output_tokens, 2 more }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

total_tokens: number

The total number of tokens used.
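The breakdown above makes it easy to derive metrics such as the prompt cache hit rate. The sketch below defines a minimal `Usage` interface mirroring the ResponseUsage fields documented above and computes the fraction of input tokens served from the cache; the sample values are invented.

```typescript
// Sketch: summarizing token usage from a retrieved response.
// The Usage shape mirrors ResponseUsage above; sample values are invented.
interface Usage {
  input_tokens: number;
  input_tokens_details: { cached_tokens: number };
  output_tokens: number;
  output_tokens_details: { reasoning_tokens: number };
  total_tokens: number;
}

// Fraction of input tokens retrieved from the prompt cache.
function cacheHitRate(u: Usage): number {
  return u.input_tokens > 0
    ? u.input_tokens_details.cached_tokens / u.input_tokens
    : 0;
}

const sample: Usage = {
  input_tokens: 200,
  input_tokens_details: { cached_tokens: 150 },
  output_tokens: 80,
  output_tokens_details: { reasoning_tokens: 32 },
  total_tokens: 280,
};

console.log(cacheHitRate(sample)); // 0.75
```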

Deprecateduser?: string

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.

sequence_number: number

The sequence number for this event.

type: "response.queued"

The type of the event. Always 'response.queued'.

ResponseCustomToolCallInputDeltaEvent { delta, item_id, output_index, 2 more }

Event representing a delta (partial update) to the input of a custom tool call.

delta: string

The incremental input data (delta) for the custom tool call.

item_id: string

Unique identifier for the API item associated with this event.

output_index: number

The index of the output this delta applies to.

sequence_number: number

The sequence number of this event.

type: "response.custom_tool_call_input.delta"

The event type identifier.

ResponseCustomToolCallInputDoneEvent { input, item_id, output_index, 2 more }

Event indicating that input for a custom tool call is complete.

input: string

The complete input data for the custom tool call.

item_id: string

Unique identifier for the API item associated with this event.

output_index: number

The index of the output this event applies to.

sequence_number: number

The sequence number of this event.

type: "response.custom_tool_call_input.done"

The event type identifier.
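A consumer of these two event types typically concatenates `delta` chunks and treats the `done` event's `input` as authoritative. The sketch below models that logic over hand-written event literals (the IDs and payloads are invented) rather than a live stream, so the accumulation logic stands on its own.

```typescript
// Sketch: accumulating a custom tool call's input from streamed events.
// Event shapes follow the delta/done events above; the literals are invented.
type CustomToolCallEvent =
  | {
      type: 'response.custom_tool_call_input.delta';
      item_id: string;
      output_index: number;
      sequence_number: number;
      delta: string;
    }
  | {
      type: 'response.custom_tool_call_input.done';
      item_id: string;
      output_index: number;
      sequence_number: number;
      input: string;
    };

function accumulateInput(events: CustomToolCallEvent[]): string {
  let buffer = '';
  for (const event of events) {
    if (event.type === 'response.custom_tool_call_input.delta') {
      buffer += event.delta;   // append each partial chunk
    } else {
      buffer = event.input;    // done event carries the complete input
    }
  }
  return buffer;
}

const events: CustomToolCallEvent[] = [
  { type: 'response.custom_tool_call_input.delta', item_id: 'item_1', output_index: 0, sequence_number: 1, delta: '{"query":' },
  { type: 'response.custom_tool_call_input.delta', item_id: 'item_1', output_index: 0, sequence_number: 2, delta: ' "weather"}' },
  { type: 'response.custom_tool_call_input.done', item_id: 'item_1', output_index: 0, sequence_number: 3, input: '{"query": "weather"}' },
];

console.log(accumulateInput(events));
```

Replacing the buffer with the `done` event's `input` guards against any dropped delta chunks.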

Get a model response

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

const response = await client.responses.retrieve('resp_677efb5139a88190b512bc3fef8e535d');

console.log(response.id);
{
  "id": "id",
  "created_at": 0,
  "error": {
    "code": "server_error",
    "message": "message"
  },
  "incomplete_details": {
    "reason": "max_output_tokens"
  },
  "instructions": "string",
  "metadata": {
    "foo": "string"
  },
  "model": "gpt-5.1",
  "object": "response",
  "output": [
    {
      "id": "id",
      "content": [
        {
          "annotations": [
            {
              "file_id": "file_id",
              "filename": "filename",
              "index": 0,
              "type": "file_citation"
            }
          ],
          "text": "text",
          "type": "output_text",
          "logprobs": [
            {
              "token": "token",
              "bytes": [
                0
              ],
              "logprob": 0,
              "top_logprobs": [
                {
                  "token": "token",
                  "bytes": [
                    0
                  ],
                  "logprob": 0
                }
              ]
            }
          ]
        }
      ],
      "role": "assistant",
      "status": "in_progress",
      "type": "message"
    }
  ],
  "parallel_tool_calls": true,
  "temperature": 1,
  "tool_choice": "none",
  "tools": [
    {
      "name": "name",
      "parameters": {
        "foo": "bar"
      },
      "strict": true,
      "type": "function",
      "description": "description"
    }
  ],
  "top_p": 1,
  "background": true,
  "completed_at": 0,
  "conversation": {
    "id": "id"
  },
  "max_output_tokens": 0,
  "max_tool_calls": 0,
  "output_text": "output_text",
  "previous_response_id": "previous_response_id",
  "prompt": {
    "id": "id",
    "variables": {
      "foo": "string"
    },
    "version": "version"
  },
  "prompt_cache_key": "prompt-cache-key-1234",
  "prompt_cache_retention": "in-memory",
  "reasoning": {
    "effort": "none",
    "generate_summary": "auto",
    "summary": "auto"
  },
  "safety_identifier": "safety-identifier-1234",
  "service_tier": "auto",
  "status": "completed",
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "low"
  },
  "top_logprobs": 0,
  "truncation": "auto",
  "usage": {
    "input_tokens": 0,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 0,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 0
  },
  "user": "user-1234"
}