Responses
Create a model response
Get a model response
Delete a model response
Cancel a response
Compact a response
Models
CompactedResponse = object { id, created_at, object, 2 more }
output: array of Message { id, content, role, 2 more } or object { arguments, call_id, name, 4 more } or object { id, arguments, call_id, 4 more } or 22 more
The compacted list of output items.
Message = object { id, content, role, 2 more }
A message to or from the model.
content: array of ResponseInputText { text, type } or ResponseOutputText { annotations, logprobs, text, type } or TextContent { text, type } or 6 more
The content of the message.
ResponseOutputText = object { annotations, logprobs, text, type }
A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type }
The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more }
A citation for a web resource used to generate a model response.
ResponseInputImage = object { detail, type, file_id, image_url }
An image input to the model. Learn about image inputs.
ComputerScreenshotContent = object { detail, file_id, image_url, type }
A screenshot of a computer.
role: "unknown" or "user" or "assistant" or 5 more
The role of the message. One of unknown, user, assistant, system, critic, discriminator, developer, or tool.
FunctionCall = object { arguments, call_id, name, 4 more }
A tool call to run a function. See the function calling guide for more information.
ToolSearchCall = object { id, arguments, call_id, 4 more }
ToolSearchOutput = object { id, call_id, execution, 4 more }
status: "in_progress" or "completed" or "incomplete"
The status of the tool search output item that was recorded.
tools: array of object { name, parameters, strict, 3 more } or object { type, vector_store_ids, filters, 2 more } or object { type } or 12 more
The loaded tool definitions returned by tool search.
Function = object { name, parameters, strict, 3 more }
Defines a function in your own code the model can choose to call. Learn more about function calling.
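To make the Function shape concrete, here is a sketch of a function tool definition as a request payload. The weather function, its name, and its parameter names are invented for illustration; only the top-level fields (type, name, parameters, strict) come from the schema above.

```python
# Hypothetical function tool definition following the Function schema:
# a name, a JSON Schema for parameters, and strict mode enabled.
function_tool = {
    "type": "function",
    "name": "get_weather",  # illustrative function name
    "description": "Return the current weather for a city.",
    "strict": True,  # ask the model to follow the schema exactly
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city", "unit"],
        "additionalProperties": False,
    },
}
```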
FileSearch = object { type, vector_store_ids, filters, 2 more }
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
ComparisonFilter = object { key, type, value }
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
CompoundFilter = object { filters, type }
Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
ComparisonFilter = object { key, type, value }
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
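A sketch of how these filters compose: a CompoundFilter whose `filters` array holds two ComparisonFilters joined with "and". The attribute keys ("author", "year") and the comparison operators shown are assumptions for illustration, not values taken from this reference.

```python
# Hypothetical file search filters: two ComparisonFilters combined
# with a CompoundFilter of type "and".
comparison = {"key": "author", "type": "eq", "value": "Jane Doe"}
compound_filter = {
    "type": "and",
    "filters": [
        comparison,
        {"key": "year", "type": "gte", "value": 2023},  # operator assumed
    ],
}
```

Because CompoundFilter items may themselves be CompoundFilters, arbitrarily nested boolean conditions can be expressed this way.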
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: optional object { hybrid_search, ranker, score_threshold }
Ranking options for search.
Computer = object { type }
A tool that controls a virtual computer. Learn more about the computer tool.
ComputerUsePreview = object { display_height, display_width, environment, type }
A tool that controls a virtual computer. Learn more about the computer tool.
WebSearch = object { type, filters, search_context_size, user_location }
Search the Internet for sources related to the prompt. Learn more about the web search tool.
type: "web_search" or "web_search_2025_08_26"
The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: optional "low" or "medium" or "high"
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { city, country, region, 2 more }
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
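Putting the fields above together, a WebSearch tool configuration might look like the sketch below. The location values are illustrative, and `timezone` is assumed to be one of the two undisclosed fields in the user_location object.

```python
# Hypothetical WebSearch tool configuration using the documented fields.
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",  # default; "low" and "high" also allowed
    "user_location": {
        "city": "San Francisco",
        "region": "California",
        "country": "US",  # two-letter ISO country code
        "timezone": "America/Los_Angeles",  # IANA timezone (assumed field)
    },
}
```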
Mcp = object { server_label, type, allowed_tools, 7 more }
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
allowed_tools: optional array of string or object { read_only, tool_names }
List of allowed tool names or a filter object.
McpToolFilter = object { read_only, tool_names }
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 more
Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"
Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never }
Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.
always: optional object { read_only, tool_names }
A filter object specifying the tools that always require approval.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
never: optional object { read_only, tool_names }
A filter object specifying the tools that never require approval.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
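As a sketch of how allowed_tools and require_approval combine on one Mcp tool entry: read-only tools skip approval while a specific named tool always requires it. The server label, URL, and tool names are placeholders, not values from this reference.

```python
# Hypothetical Mcp tool entry: an McpToolFilter for allowed_tools plus an
# McpToolApprovalFilter for require_approval. Label/URL/names are placeholders.
mcp_tool = {
    "type": "mcp",
    "server_label": "docs_server",            # placeholder label
    "server_url": "https://example.com/mcp",  # placeholder URL
    "allowed_tools": {"read_only": True, "tool_names": ["search", "fetch"]},
    "require_approval": {
        "never": {"read_only": True},          # read-only tools skip approval
        "always": {"tool_names": ["fetch"]},   # this tool always needs approval
    },
}
```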
CodeInterpreter = object { container, type }
A tool that runs Python code to help generate a response to a prompt.
container: string or object { type, file_ids, memory_limit, network_policy }
The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.
CodeInterpreterToolAuto = object { type, file_ids, memory_limit, network_policy }
Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
An optional list of uploaded files to make available to your code.
memory_limit: optional "1g" or "4g" or "16g" or "64g"
The memory limit for the code interpreter container.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets }
Network access policy for the container.
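The two container forms described above can be sketched as follows. The container ID, file ID, domain, and the "auto"/"allowlist" type strings are assumptions for illustration.

```python
# Hypothetical CodeInterpreter tool configs: a bare container ID string,
# or a CodeInterpreterToolAuto object with files, memory, and network policy.
code_interpreter_by_id = {"type": "code_interpreter", "container": "cntr_abc123"}
code_interpreter_auto = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",                   # assumed discriminator value
        "file_ids": ["file-abc123"],      # uploaded files available to the code
        "memory_limit": "4g",             # one of "1g", "4g", "16g", "64g"
        "network_policy": {
            "type": "allowlist",          # assumed discriminator value
            "allowed_domains": ["pypi.org"],
        },
    },
}
```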
ImageGeneration = object { type, action, background, 9 more }
A tool that generates images using the GPT image models.
action: optional "generate" or "edit" or "auto"
Whether to generate a new image or edit an existing image. Default: auto.
background: optional "transparent" or "opaque" or "auto"
Background type for the generated image. One of transparent, opaque, or auto. Default: auto.
input_fidelity: optional "high" or "low"
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1, gpt-image-1.5, and later models, and is unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: optional object { file_id, image_url }
Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).
model: optional string or "gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"
The image generation model to use. Default: gpt-image-1.
Compression level for the output image. Default: 100.
output_format: optional "png" or "webp" or "jpeg"
The output format of the generated image. One of png, webp, or jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
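A sketch of an ImageGeneration tool entry combining the parameters above. The `partial_images` field name is an assumption (the reference describes the streaming count but the field name is not shown here); the other keys follow the documented parameters.

```python
# Hypothetical ImageGeneration tool config; values are documented defaults
# except output_format and the streaming partial-image count.
image_tool = {
    "type": "image_generation",
    "action": "auto",          # generate vs. edit, chosen automatically
    "background": "auto",
    "input_fidelity": "low",   # "high" only on gpt-image-1 / gpt-image-1.5+
    "model": "gpt-image-1",
    "output_format": "webp",   # "png" (default), "webp", or "jpeg"
    "partial_images": 2,       # streaming: 0 (default) to 3; field name assumed
}
```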
LocalShell = object { type }
A tool that allows the model to execute shell commands in a local environment.
Shell = object { type, environment }
A tool that allows the model to execute shell commands.
environment: optional ContainerAuto { type, file_ids, memory_limit, 2 more } or LocalEnvironment { type, skills } or ContainerReference { container_id, type }
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
An optional list of uploaded files to make available to your code.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets }
Network access policy for the container.
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type }
An optional list of skills referenced by id or inline data.
Custom = object { name, type, defer_loading, 2 more }
A custom tool that processes input using a specified format. Learn more about custom tools.
Namespace = object { description, name, tools, type }
Groups function/custom tools under a shared namespace.
tools: array of object { name, type, defer_loading, 3 more } or object { name, type, defer_loading, 2 more }
The function/custom tools available inside this namespace.
Custom = object { name, type, defer_loading, 2 more }
A custom tool that processes input using a specified format. Learn more about custom tools.
ToolSearch = object { type, description, execution, parameters }
Hosted or BYOT tool search configuration for deferred tools.
WebSearchPreview = object { type, search_content_types, search_context_size, user_location }
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: "web_search_preview" or "web_search_preview_2025_03_11"
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: optional "low" or "medium" or "high"
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { type, city, country, 2 more }
The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
FunctionCallOutput = object { call_id, output, type, 2 more }
The output of a function tool call.
output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }
The output from the function call generated by your code. Can be a string or a list of output content.
OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }
Text, image, or file output of the function call.
ResponseInputImage = object { detail, type, file_id, image_url }
An image input to the model. Learn about image inputs.
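The two documented shapes of the `output` field can be sketched as follows. The call ID, JSON string, and file ID are placeholders; call_id must match the FunctionCall item the output answers.

```python
# Hypothetical function_call_output items: output as a plain string, or as
# an OutputContentList mixing text and file content.
output_as_string = {
    "type": "function_call_output",
    "call_id": "call_abc123",          # must match the originating FunctionCall
    "output": '{"temperature_c": 18}', # any string your code produced
}
output_as_list = {
    "type": "function_call_output",
    "call_id": "call_abc123",
    "output": [
        {"type": "input_text", "text": "Forecast chart attached."},
        {"type": "input_file", "file_id": "file-abc123"},
    ],
}
```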
FileSearchCall = object { id, queries, status, 2 more }
The results of a file search tool call. See the file search guide for more information.
status: "in_progress" or "searching" or "completed" or 2 more
The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.
results: optional array of object { attributes, file_id, filename, 2 more }
The results of the file search tool call.
attributes: optional map[string or number or boolean]
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
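The attribute limits above (at most 16 pairs, keys up to 64 characters, string values up to 512 characters, values restricted to strings, booleans, or numbers) can be expressed as a small client-side check. This helper is illustrative, not an official validator.

```python
# Illustrative client-side validation of the documented attribute limits.
def attributes_valid(attrs: dict) -> bool:
    if len(attrs) > 16:                     # at most 16 key-value pairs
        return False
    for key, value in attrs.items():
        if not isinstance(key, str) or len(key) > 64:
            return False                    # keys: strings, max 64 chars
        if isinstance(value, str):
            if len(value) > 512:            # string values: max 512 chars
                return False
        elif not isinstance(value, (bool, int, float)):
            return False                    # otherwise booleans or numbers
    return True
```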
WebSearchCall = object { id, action, status, type }
The results of a web search tool call. See the web search guide for more information.
action: object { query, type, queries, sources } or object { type, url } or object { pattern, type, url }
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find_in_page).
ImageGenerationCall = object { id, result, status, type }
An image generation request made by the model.
ComputerCall = object { id, call_id, pending_safety_checks, 4 more }
A tool call to a computer use tool. See the computer use guide for more information.
pending_safety_checks: array of object { id, code, message }
The pending safety checks for the computer call.
status: "in_progress" or "completed" or "incomplete"
The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
ComputerCallOutput = object { id, call_id, output, 4 more }
status: "completed" or "incomplete" or "failed" or "in_progress"
The status of the item. One of in_progress, completed, incomplete, or failed. Populated when input items are returned via API.
Reasoning = object { id, summary, type, 3 more }
A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.
Compaction = object { id, encrypted_content, type, created_by }
A compaction item generated by the v1/responses/compact API.
CodeInterpreterCall = object { id, code, container_id, 3 more }
A tool call to run code.
outputs: array of object { logs, type } or object { type, url }
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
LocalShellCall = object { id, action, call_id, 2 more }
A tool call to run a command on the local shell.
LocalShellCallOutput = object { id, output, type, status }
The output of a local shell tool call.
ShellCall = object { id, action, call_id, 4 more }
A tool call that executes one or more shell commands in a managed environment.
action: object { commands, max_output_length, timeout_ms }
The shell commands and limits that describe how to run the tool call.
Represents the use of a local environment to perform shell actions.
ShellCallOutput = object { id, call_id, max_output_length, 4 more }
The output of a shell tool call that was emitted.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
output: array of object { outcome, stderr, stdout, created_by }
An array of shell call output contents.
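A sketch of a shell call output item echoing back max_output_length as the description above requires. The call ID and the exact shape of `outcome` are assumptions; only the field names listed in the schema above are taken from the reference.

```python
# Hypothetical shell_call_output item; call_id and outcome value are
# placeholders. max_output_length is echoed back from the model's request.
shell_call_output = {
    "type": "shell_call_output",
    "call_id": "call_abc123",
    "max_output_length": 4096,  # generated by the model; pass back unchanged
    "output": [
        {"outcome": "success", "stdout": "hello\n", "stderr": ""},
    ],
}
```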
ApplyPatchCall = object { id, call_id, operation, 3 more }
A tool call that applies file diffs by creating, deleting, or updating files.
operation: object { diff, path, type } or object { path, type } or object { diff, path, type }
One of the create_file, delete_file, or update_file operations applied via apply_patch.
ApplyPatchCallOutput = object { id, call_id, status, 3 more }
The output emitted by an apply patch tool call.
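The three operation shapes listed above can be sketched as follows; note that delete_file carries no diff. The paths and diff contents are placeholders.

```python
# Hypothetical apply_patch operations, one per documented type. delete_file
# is the variant without a diff field.
create_op = {"type": "create_file", "path": "src/new.py", "diff": "+print('hi')\n"}
delete_op = {"type": "delete_file", "path": "src/old.py"}
update_op = {"type": "update_file", "path": "src/app.py", "diff": "-a\n+b\n"}
```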
McpListTools = object { id, server_label, tools, 2 more }
A list of tools available on an MCP server.
McpApprovalRequest = object { id, arguments, name, 2 more }
A request for human approval of a tool invocation.
McpApprovalResponse = object { id, approval_request_id, approve, 2 more }
A response to an MCP approval request.
McpCall = object { id, arguments, name, 6 more }
An invocation of a tool on an MCP server.
CustomToolCall = object { call_id, input, name, 3 more }
A call to a custom tool created by the model.
CustomToolCallOutput = object { call_id, output, type, id }
The output of a custom tool call from your code, being sent back to the model.
output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }
The output from the custom tool call generated by your code. Can be a string or a list of output content.
OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }
Text, image, or file output of the custom tool call.
ResponseInputImage = object { detail, type, file_id, image_url }
An image input to the model. Learn about image inputs.
ComputerAction = object { button, type, x, 2 more } or object { keys, type, x, y } or object { path, type, keys } or 6 more
A click action.
Click = object { button, type, x, 2 more }
A click action.
Drag = object { path, type, keys }
A drag action.
Flattened batched actions for computer_use. Each action includes a type discriminator and action-specific fields.
Click = object { button, type, x, 2 more }
A click action.
Drag = object { path, type, keys }
A drag action.
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
An optional list of uploaded files to make available to your code.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets }
Network access policy for the container.
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type }
An optional list of skills referenced by id or inline data.
EasyInputMessage = object { content, role, phase, type }
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
A list of one or many input items to the model, containing different content types.
ResponseInputImage = object { detail, type, file_id, image_url }
An image input to the model. Learn about image inputs.
role: "user" or "assistant" or "system" or "developer"
The role of the message input. One of user, assistant, system, or developer.
phase: optional "commentary" or "final_answer"
Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages; dropping it can degrade performance. Not used for user messages.
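An input item list using the roles and the phase label described above might look like the sketch below. The message texts are illustrative; the point is that on follow-up turns the assistant message keeps its phase field.

```python
# Hypothetical input item list: developer and user messages, plus a prior
# assistant message resent with its phase label intact.
input_items = [
    {"type": "message", "role": "developer", "content": "Answer briefly."},
    {"type": "message", "role": "user", "content": "What is 2 + 2?"},
    # On follow-up requests, preserve phase on assistant messages:
    {"type": "message", "role": "assistant", "phase": "final_answer",
     "content": "4"},
]
```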
Response = object { id, created_at, error, 30 more }
instructions: string or array of EasyInputMessage { content, role, phase, type } or object { content, role, status, type } or ResponseOutputMessage { id, content, role, 3 more } or 25 more
A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
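The carry-over behavior described above can be sketched as two chained request payloads: because instructions are not inherited via previous_response_id, the follow-up simply supplies a new one. The model name, response ID, and prompts are placeholders.

```python
# Hypothetical chained request payloads illustrating that instructions are
# replaced, not appended, across previous_response_id.
first_request = {
    "model": "gpt-5",                      # placeholder model name
    "instructions": "Answer in formal English.",
    "input": "Summarize the report.",
}
follow_up = {
    "model": "gpt-5",
    "previous_response_id": "resp_abc123", # placeholder ID from first response
    "instructions": "Answer in French.",   # swapped in; the old one is dropped
    "input": "Now translate the summary.",
}
```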
InputItemList = array of EasyInputMessage { content, role, phase, type } or object { content, role, status, type } or ResponseOutputMessage { id, content, role, 3 more } or 25 moreA list of one or many input items to the model, containing
different content types.
A list of one or many input items to the model, containing different content types.
EasyInputMessage = object { content, role, phase, type } A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response.
Can also contain previous assistant responses.
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
A list of one or many input items to the model, containing different content
types.
A list of one or many input items to the model, containing different content types.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
An image input to the model. Learn about image inputs.
role: "user" or "assistant" or "system" or "developer"The role of the message input. One of user, assistant, system, or
developer.
The role of the message input. One of user, assistant, system, or
developer.
phase: optional "commentary" or "final_answer"Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
Message = object { content, role, status, type } A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role.
role: "user" or "system" or "developer"The role of the message input. One of user, system, or developer.
The role of the message input. One of user, system, or developer.
ResponseOutputMessage = object { id, content, role, 3 more } An output message from the model.
content: array of ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } The content of the output message.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
status: "in_progress" or "completed" or "incomplete"The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.
phase: optional "commentary" or "final_answer"Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
FileSearchCall = object { id, queries, status, 2 more } The results of a file search tool call. See the file search guide for more information.
status: "in_progress" or "searching" or "completed" or 2 moreThe status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.
results: optional array of object { attributes, file_id, filename, 2 more } The results of the file search tool call.
attributes: optional map[string or number or boolean]Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
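The attributes limits (at most 16 pairs, 64-character keys, 512-character string values, with booleans and numbers also allowed) can be checked client-side before upload. A sketch; the helper name is ours, not part of the API:

```python
def validate_attributes(attributes: dict) -> None:
    """Raise ValueError if `attributes` violates the documented limits."""
    if len(attributes) > 16:
        raise ValueError("at most 16 key-value pairs allowed")
    for key, value in attributes.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"key {key!r} must be a string of <= 64 chars")
        if isinstance(value, str):
            if len(value) > 512:
                raise ValueError(f"value for {key!r} exceeds 512 chars")
        elif not isinstance(value, (bool, int, float)):
            raise ValueError(f"value for {key!r} must be a string, boolean, or number")

# Strings, numbers, and booleans are all valid value types.
validate_attributes({"author": "jane", "year": 2024, "draft": False})
```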
ComputerCall = object { id, call_id, pending_safety_checks, 4 more } A tool call to a computer use tool. See the computer use guide for more information.
pending_safety_checks: array of object { id, code, message } The pending safety checks for the computer call.
status: "in_progress" or "completed" or "incomplete"The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
ComputerCallOutput = object { call_id, output, type, 3 more } The output of a computer tool call.
WebSearchCall = object { id, action, status, type } The results of a web search tool call. See the web search guide for more information.
action: object { query, type, queries, sources } or object { type, url } or object { pattern, type, url } An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find_in_page).
FunctionCall = object { arguments, call_id, name, 4 more } A tool call to run a function. See the function calling guide for more information.
FunctionCallOutput = object { call_id, output, type, 2 more } The output of a function tool call.
The unique ID of the function tool call generated by the model.
output: string or array of ResponseInputTextContent { text, type } or ResponseInputImageContent { type, detail, file_id, image_url } or ResponseInputFileContent { type, file_data, file_id, 2 more } Text, image, or file output of the function tool call.
array of ResponseInputTextContent { text, type } or ResponseInputImageContent { type, detail, file_id, image_url } or ResponseInputFileContent { type, file_data, file_id, 2 more } An array of content outputs (text, image, file) for the function tool call.
ResponseInputImageContent = object { type, detail, file_id, image_url } An image input to the model. Learn about image inputs.
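Putting the two output shapes together, a function tool call output can carry either a plain string or a content array. A sketch under the schema above; the call_id, file_id, and content type strings ("input_text", "input_image") are assumptions based on the content types listed here:

```python
# String form: the simplest way to return a tool result.
string_output = {
    "type": "function_call_output",
    "call_id": "call_abc123",  # illustrative; echo the call_id the model generated
    "output": '{"temperature_c": 21}',
}

# Content-array form: mix text and image content in one output.
array_output = {
    "type": "function_call_output",
    "call_id": "call_abc123",
    "output": [
        {"type": "input_text", "text": "Here is the chart you asked for."},
        {"type": "input_image", "file_id": "file_xyz789", "detail": "auto"},
    ],
}
```

The string form is usually enough for JSON results; the array form is for tools whose results include images or files.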
ToolSearchCall = object { arguments, type, id, 3 more }
ToolSearchOutput = object { tools, type, id, 3 more }
tools: array of object { name, parameters, strict, 3 more } or object { type, vector_store_ids, filters, 2 more } or object { type } or 12 moreThe loaded tool definitions returned by the tool search output.
Function = object { name, parameters, strict, 3 more } Defines a function in your own code the model can choose to call. Learn more about function calling.
FileSearch = object { type, vector_store_ids, filters, 2 more } A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
CompoundFilter = object { filters, type } Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
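Because a CompoundFilter's filters array can itself contain CompoundFilter items, conditions nest. A sketch; the attribute keys are illustrative, and the comparison type strings (eq, gt) are assumptions about the ComparisonFilter operators:

```python
# Match files where region == "emea" AND (year > 2020 OR draft == False).
filters = {
    "type": "and",
    "filters": [
        {"type": "eq", "key": "region", "value": "emea"},
        {
            "type": "or",  # nested CompoundFilter
            "filters": [
                {"type": "gt", "key": "year", "value": 2020},
                {"type": "eq", "key": "draft", "value": False},
            ],
        },
    ],
}
```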
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: optional object { hybrid_search, ranker, score_threshold } Ranking options for search.
Computer = object { type } A tool that controls a virtual computer. Learn more about the computer tool.
ComputerUsePreview = object { display_height, display_width, environment, type } A tool that controls a virtual computer. Learn more about the computer tool.
WebSearch = object { type, filters, search_context_size, user_location } Search the Internet for sources related to the prompt. Learn more about the web search tool.
type: "web_search" or "web_search_2025_08_26"The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { city, country, region, 2 more } The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
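A web_search tool entry combining these fields might look like the sketch below. Only type is required; the location values are illustrative, and timezone as a user_location field name is an assumption based on the IANA timezone note above:

```python
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",  # "medium" is the default
    "user_location": {
        "city": "San Francisco",
        "country": "US",                      # two-letter ISO country code
        "timezone": "America/Los_Angeles",    # IANA timezone
    },
}
```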
Mcp = object { server_label, type, allowed_tools, 7 more } Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
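So a connector-backed MCP tool supplies a connector_id plus an OAuth access token instead of a server_url. A sketch; the token is a placeholder, and authorization as the token field name is an assumption based on the OAuth access token field described above:

```python
mcp_connector_tool = {
    "type": "mcp",
    "server_label": "gmail",                  # illustrative label
    "connector_id": "connector_gmail",        # one of the eight supported IDs
    "authorization": "<oauth-access-token>",  # placeholder; your app handles the OAuth flow
}

# The eight connector IDs documented above.
SUPPORTED_CONNECTORS = {
    "connector_dropbox", "connector_gmail", "connector_googlecalendar",
    "connector_googledrive", "connector_microsoftteams",
    "connector_outlookcalendar", "connector_outlookemail",
    "connector_sharepoint",
}
```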
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
never: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
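A common policy combines the two filters: mutating tools always need approval, read-only tools never do. A sketch under the shapes above; the server label, URL, and tool names are illustrative:

```python
mcp_tool = {
    "type": "mcp",
    "server_label": "files",                   # illustrative label
    "server_url": "https://example.com/mcp",   # illustrative URL
    "require_approval": {
        "always": {"tool_names": ["write_file", "delete_file"]},
        "never": {"read_only": True},  # matches tools annotated with readOnlyHint
    },
}
```

Passing the string "always" or "never" instead of the object applies the same policy to every tool on the server.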
CodeInterpreter = object { container, type } A tool that runs Python code to help generate a response to a prompt.
container: string or object { type, file_ids, memory_limit, network_policy } The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.
CodeInterpreterToolAuto = object { type, file_ids, memory_limit, network_policy } Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
An optional list of uploaded files to make available to your code.
memory_limit: optional "1g" or "4g" or "16g" or "64g"The memory limit for the code interpreter container.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
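Both container forms can be sketched side by side. The container and file IDs are illustrative, and the "auto" and "disabled" type strings are assumptions inferred from the CodeInterpreterToolAuto and ContainerNetworkPolicyDisabled names above:

```python
# Container ID form: reuse an existing container.
tool_by_id = {"type": "code_interpreter", "container": "cntr_abc123"}  # illustrative ID

# Auto-container form: uploaded files, a memory limit, and network disabled.
tool_auto = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file_1", "file_2"],    # illustrative file IDs
        "memory_limit": "4g",                # one of "1g", "4g", "16g", "64g"
        "network_policy": {"type": "disabled"},
    },
}
```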
ImageGeneration = object { type, action, background, 9 more } A tool that generates images using the GPT image models.
action: optional "generate" or "edit" or "auto"Whether to generate a new image or edit an existing image. Default: auto.
background: optional "transparent" or "opaque" or "auto"Background type for the generated image. One of transparent, opaque, or auto. Default: auto.
input_fidelity: optional "high" or "low"Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: optional object { file_id, image_url } Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).
model: optional string or "gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"The image generation model to use. Default: gpt-image-1.
Compression level for the output image. Default: 100.
output_format: optional "png" or "webp" or "jpeg"The output format of the generated image. One of png, webp, or jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
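An image generation tool entry that edits an existing image with a mask might look like this. The file ID is a placeholder, and partial_images as the field name for the streaming parameter is an assumption, since the schema excerpt above shows only its description:

```python
image_tool = {
    "type": "image_generation",
    "model": "gpt-image-1",        # the documented default
    "action": "edit",              # edit an existing image rather than generate
    "background": "transparent",
    "input_fidelity": "high",      # match input style/faces; unsupported on gpt-image-1-mini
    "input_image_mask": {"file_id": "file_mask123"},  # placeholder mask file
    "output_format": "webp",
    "partial_images": 2,           # streaming partial images, 0 (default) to 3
}
```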
LocalShell = object { type } A tool that allows the model to execute shell commands in a local environment.
Shell = object { type, environment } A tool that allows the model to execute shell commands.
environment: optional ContainerAuto { type, file_ids, memory_limit, 2 more } or LocalEnvironment { type, skills } or ContainerReference { container_id, type }
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
An optional list of uploaded files to make available to your code.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type } An optional list of skills referenced by id or inline data.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools.
Namespace = object { description, name, tools, type } Groups function/custom tools under a shared namespace.
tools: array of object { name, type, defer_loading, 3 more } or object { name, type, defer_loading, 2 more } The function/custom tools available inside this namespace.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools.
ToolSearch = object { type, description, execution, parameters } Hosted or BYOT tool search configuration for deferred tools.
WebSearchPreview = object { type, search_content_types, search_context_size, user_location } This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: "web_search_preview" or "web_search_preview_2025_03_11"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { type, city, country, 2 more } The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
The unique ID of the tool search call generated by the model.
Reasoning = object { id, summary, type, 3 more } A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.
Compaction = object { encrypted_content, type, id } A compaction item generated by the v1/responses/compact API.
ImageGenerationCall = object { id, result, status, type } An image generation request made by the model.
CodeInterpreterCall = object { id, code, container_id, 3 more } A tool call to run code.
outputs: array of object { logs, type } or object { type, url } The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
LocalShellCall = object { id, action, call_id, 2 more } A tool call to run a command on the local shell.
LocalShellCallOutput = object { id, output, type, status } The output of a local shell tool call.
ShellCall = object { action, call_id, type, 3 more } A tool call representing a request to execute one or more shell commands.
action: object { commands, max_output_length, timeout_ms } The shell commands and limits that describe how to run the tool call.
The unique ID of the shell tool call. Populated when this item is returned via API.
environment: optional LocalEnvironment { type, skills } or ContainerReference { container_id, type } The environment to execute the shell commands in.
ShellCallOutput = object { call_id, output, type, 3 more } The streamed output items emitted by a shell tool call.
Captured chunks of stdout and stderr output, along with their associated outcomes.
The unique ID of the shell tool call output. Populated when this item is returned via API.
ApplyPatchCall = object { call_id, operation, status, 2 more } A tool call representing a request to create, delete, or update files using diff patches.
The unique ID of the apply patch tool call generated by the model.
operation: object { diff, path, type } or object { path, type } or object { diff, path, type } The specific create, delete, or update instruction for the apply_patch tool call.
CreateFile = object { diff, path, type } Instruction for creating a new file via the apply_patch tool.
DeleteFile = object { path, type } Instruction for deleting an existing file via the apply_patch tool.
ApplyPatchCallOutput = object { call_id, status, type, 2 more } The streamed output emitted by an apply patch tool call.
The unique ID of the apply patch tool call generated by the model.
status: "completed" or "failed"The status of the apply patch tool call output. One of completed or failed.
McpListTools = object { id, server_label, tools, 2 more } A list of tools available on an MCP server.
McpApprovalRequest = object { id, arguments, name, 2 more } A request for human approval of a tool invocation.
McpApprovalResponse = object { approval_request_id, approve, type, 2 more } A response to an MCP approval request.
McpCall = object { id, arguments, name, 6 more } An invocation of a tool on an MCP server.
CustomToolCallOutput = object { call_id, output, type, id } The output of a custom tool call from your code, being sent back to the model.
output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } The output from the custom tool call generated by your code. Can be a string or a list of output content.
OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } Text, image, or file output of the custom tool call.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.
An array of content items generated by the model.
- The length and order of items in the output array is dependent on the model's response.
- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
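That guidance can be followed by iterating the output array rather than indexing into it. A sketch over a response dict; the sample response below is hand-written for illustration, not real API output:

```python
def collect_output_text(response: dict) -> str:
    """Concatenate text from all assistant messages in `output`,
    mirroring what SDKs expose as the `output_text` property."""
    parts = []
    for item in response.get("output", []):
        if item.get("type") == "message" and item.get("role") == "assistant":
            for content in item.get("content", []):
                if content.get("type") == "output_text":
                    parts.append(content["text"])
    return "".join(parts)

# Hand-written sample: a reasoning item precedes the message, so output[0]
# is NOT the assistant message, and naive indexing would fail here.
sample = {
    "output": [
        {"type": "reasoning", "id": "rs_1", "summary": []},
        {
            "type": "message",
            "role": "assistant",
            "content": [{"type": "output_text", "text": "Paris.", "annotations": []}],
        },
    ]
}
print(collect_output_text(sample))  # → Paris.
```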
ResponseOutputMessage = object { id, content, role, 3 more } An output message from the model.
content: array of ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } The content of the output message.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
status: "in_progress" or "completed" or "incomplete"The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.
phase: optional "commentary" or "final_answer"Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
FileSearchCall = object { id, queries, status, 2 more } The results of a file search tool call. See the file search guide for more information.
status: "in_progress" or "searching" or "completed" or 2 moreThe status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.
results: optional array of object { attributes, file_id, filename, 2 more } The results of the file search tool call.
attributes: optional map[string or number or boolean]Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
FunctionCall = object { arguments, call_id, name, 4 more } A tool call to run a function. See the function calling guide for more information.
FunctionCallOutput = object { id, call_id, output, 3 more }
output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } The output from the function call generated by your code. Can be a string or a list of output content.
OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } Text, image, or file output of the function call.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
WebSearchCall = object { id, action, status, type } The results of a web search tool call. See the web search guide for more information.
action: object { query, type, queries, sources } or object { type, url } or object { pattern, type, url } An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find_in_page).
ComputerCall = object { id, call_id, pending_safety_checks, 4 more } A tool call to a computer use tool. See the computer use guide for more information.
pending_safety_checks: array of object { id, code, message } The pending safety checks for the computer call.
status: "in_progress" or "completed" or "incomplete"The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
ComputerCallOutput = object { id, call_id, output, 4 more }
status: "completed" or "incomplete" or "failed" or "in_progress"The status of the message input. One of in_progress, completed, incomplete, or failed. Populated when input items are returned via API.
Reasoning = object { id, summary, type, 3 more } A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.
ToolSearchCall = object { id, arguments, call_id, 4 more }
ToolSearchOutput = object { id, call_id, execution, 4 more }
status: "in_progress" or "completed" or "incomplete"The status of the tool search output item that was recorded.
tools: array of object { name, parameters, strict, 3 more } or object { type, vector_store_ids, filters, 2 more } or object { type } or 12 moreThe loaded tool definitions returned by tool search.
Function = object { name, parameters, strict, 3 more } Defines a function in your own code the model can choose to call. Learn more about function calling.
FileSearch = object { type, vector_store_ids, filters, 2 more } A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
CompoundFilter = object { filters, type } Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
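As a sketch, a file_search filter combining ComparisonFilter and CompoundFilter might look like the following Python dict; the attribute keys ("author", "year") and the comparison operator names ("eq", "gte") are illustrative assumptions, not values taken from this reference.

```python
# Hypothetical CompoundFilter: match documents where author == "Jane"
# AND year >= 2020. ComparisonFilter fields are { key, type, value };
# "type" here carries the comparison operation (assumed names).
author_filter = {"type": "eq", "key": "author", "value": "Jane"}
year_filter = {"type": "gte", "key": "year", "value": 2020}

compound_filter = {
    "type": "and",  # CompoundFilter combines with "and" or "or"
    "filters": [author_filter, year_filter],
}
```

Nested CompoundFilters are also possible, since the filters array accepts either ComparisonFilter or CompoundFilter items.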
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: optional object { hybrid_search, ranker, score_threshold } Ranking options for search.
Computer = object { type } A tool that controls a virtual computer. Learn more about the computer tool.
ComputerUsePreview = object { display_height, display_width, environment, type } A tool that controls a virtual computer. Learn more about the computer tool.
WebSearch = object { type, filters, search_context_size, user_location } Search the Internet for sources related to the prompt. Learn more about the web search tool.
type: "web_search" or "web_search_2025_08_26"The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { city, country, region, 2 more } The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
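Putting these fields together, a minimal web_search tool entry might be sketched as a Python dict. The user_location sub-fields beyond city, country, and region are abbreviated in this reference, so treat the timezone key name here as an assumption.

```python
# Hypothetical web_search tool configuration. search_context_size is
# optional and defaults to "medium"; user_location is approximate.
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",
    "user_location": {
        "city": "San Francisco",
        "region": "California",
        "country": "US",                     # two-letter ISO country code
        "timezone": "America/Los_Angeles",   # IANA timezone (assumed key name)
    },
}
```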
Mcp = object { server_label, type, allowed_tools, 7 more } Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
never: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
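A require_approval value can therefore be the string "always", the string "never", or an McpToolApprovalFilter object. A sketch of the object form, with hypothetical tool names:

```python
# Hypothetical McpToolApprovalFilter.
# "always": tools matching this filter always require human approval.
# "never": tools matching this filter never require approval; here,
# anything the MCP server annotates as read-only (readOnlyHint).
require_approval = {
    "always": {"tool_names": ["delete_file", "send_email"]},  # made-up names
    "never": {"read_only": True},
}
```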
CodeInterpreter = object { container, type } A tool that runs Python code to help generate a response to a prompt.
container: string or object { type, file_ids, memory_limit, network_policy } The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.
CodeInterpreterToolAuto = object { type, file_ids, memory_limit, network_policy } Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
An optional list of uploaded files to make available to your code.
memory_limit: optional "1g" or "4g" or "16g" or "64g"The memory limit for the code interpreter container.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
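The container value can thus be either a plain container ID string or a CodeInterpreterToolAuto object. A sketch of the object form, with a placeholder file ID; the "disabled" type value for ContainerNetworkPolicyDisabled is an assumption:

```python
# Hypothetical code_interpreter tool using an auto container with one
# uploaded file and a 4 GB memory limit; network access disabled.
code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],             # placeholder uploaded-file ID
        "memory_limit": "4g",                    # one of "1g", "4g", "16g", "64g"
        "network_policy": {"type": "disabled"},  # assumed discriminator value
    },
}
```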
ImageGeneration = object { type, action, background, 9 more } A tool that generates images using the GPT image models.
action: optional "generate" or "edit" or "auto"Whether to generate a new image or edit an existing image. Default: auto.
background: optional "transparent" or "opaque" or "auto"Background type for the generated image. One of transparent, opaque, or auto. Default: auto.
input_fidelity: optional "high" or "low"Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: optional object { file_id, image_url } Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).
model: optional string or "gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"The image generation model to use. Default: gpt-image-1.
Compression level for the output image. Default: 100.
output_format: optional "png" or "webp" or "jpeg"The output format of the generated image. One of png, webp, or jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
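Combining the fields above, an image_generation tool entry might be sketched as follows; every value shown is optional, and the defaults come from the field descriptions above.

```python
# Hypothetical image_generation tool configuration.
image_tool = {
    "type": "image_generation",
    "action": "generate",         # or "edit"; default is "auto"
    "background": "transparent",  # default "auto"
    "model": "gpt-image-1.5",     # default "gpt-image-1"
    "input_fidelity": "high",     # supported for gpt-image-1 / gpt-image-1.5+
    "output_format": "png",       # default "png"
}
```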
LocalShell = object { type } A tool that allows the model to execute shell commands in a local environment.
Shell = object { type, environment } A tool that allows the model to execute shell commands.
environment: optional ContainerAuto { type, file_ids, memory_limit, 2 more } or LocalEnvironment { type, skills } or ContainerReference { container_id, type }
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
An optional list of uploaded files to make available to your code.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type } An optional list of skills referenced by id or inline data.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools.
Namespace = object { description, name, tools, type } Groups function/custom tools under a shared namespace.
tools: array of object { name, type, defer_loading, 3 more } or object { name, type, defer_loading, 2 more } The function/custom tools available inside this namespace.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools.
ToolSearch = object { type, description, execution, parameters } Hosted or BYOT tool search configuration for deferred tools.
WebSearchPreview = object { type, search_content_types, search_context_size, user_location } This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: "web_search_preview" or "web_search_preview_2025_03_11"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { type, city, country, 2 more } The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Compaction = object { id, encrypted_content, type, created_by } A compaction item generated by the v1/responses/compact API.
ImageGenerationCall = object { id, result, status, type } An image generation request made by the model.
CodeInterpreterCall = object { id, code, container_id, 3 more } A tool call to run code.
outputs: array of object { logs, type } or object { type, url } The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
LocalShellCall = object { id, action, call_id, 2 more } A tool call to run a command on the local shell.
LocalShellCallOutput = object { id, output, type, status } The output of a local shell tool call.
ShellCall = object { id, action, call_id, 4 more } A tool call that executes one or more shell commands in a managed environment.
action: object { commands, max_output_length, timeout_ms } The shell commands and limits that describe how to run the tool call.
Represents the use of a local environment to perform shell actions.
ShellCallOutput = object { id, call_id, max_output_length, 4 more } The output of a shell tool call that was emitted.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
output: array of object { outcome, stderr, stdout, created_by } An array of shell call output contents.
ApplyPatchCall = object { id, call_id, operation, 3 more } A tool call that applies file diffs by creating, deleting, or updating files.
operation: object { diff, path, type } or object { path, type } or object { diff, path, type } One of the create_file, delete_file, or update_file operations applied via apply_patch.
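The three operation shapes can be sketched as Python dicts; the paths and diff bodies are made up for illustration.

```python
# Hypothetical apply_patch operations. create_file and update_file
# carry a diff; delete_file needs only the path.
create_op = {"type": "create_file", "path": "src/hello.py", "diff": "+print('hello')\n"}
update_op = {"type": "update_file", "path": "src/app.py", "diff": "-old_line\n+new_line\n"}
delete_op = {"type": "delete_file", "path": "src/obsolete.py"}

operations = [create_op, update_op, delete_op]
```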
ApplyPatchCallOutput = object { id, call_id, status, 3 more } The output emitted by an apply patch tool call.
McpCall = object { id, arguments, name, 6 more } An invocation of a tool on an MCP server.
McpListTools = object { id, server_label, tools, 2 more } A list of tools available on an MCP server.
McpApprovalRequest = object { id, arguments, name, 2 more } A request for human approval of a tool invocation.
McpApprovalResponse = object { id, approval_request_id, approve, 2 more } A response to an MCP approval request.
CustomToolCall = object { call_id, input, name, 3 more } A call to a custom tool created by the model.
CustomToolCallOutput = object { id, call_id, output, 3 more }
output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } The output from the custom tool call generated by your code. Can be a string or a list of output content.
OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } Text, image, or file output of the custom tool call.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
status: "in_progress" or "completed" or "incomplete"The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
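For example, a request body that lowers temperature for more deterministic output while leaving top_p at its default; the model name is a placeholder.

```python
# Hypothetical request parameters: adjust temperature OR top_p, not both.
request_params = {
    "model": "gpt-4.1",  # placeholder model name
    "input": "Summarize the attached report in two sentences.",
    "temperature": 0.2,  # range 0-2; lower = more focused and deterministic
}
```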
tool_choice: ToolChoiceOptions or ToolChoiceAllowed { mode, tools, type } or ToolChoiceTypes { type } or 5 moreHow the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.
ToolChoiceOptions = "none" or "auto" or "required"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools.
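The three string options drop straight into a request's tool_choice field; a sketch with a hypothetical function tool:

```python
# Hypothetical requests differing only in tool_choice.
weather_tool = {
    "type": "function",
    "name": "get_weather",  # made-up function name
    "parameters": {"type": "object", "properties": {}},
}
base = {"model": "gpt-4.1", "input": "Weather in Paris?", "tools": [weather_tool]}

auto_request = {**base, "tool_choice": "auto"}          # model decides
none_request = {**base, "tool_choice": "none"}          # never calls a tool
required_request = {**base, "tool_choice": "required"}  # must call >= 1 tool
```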
ToolChoiceAllowed = object { mode, tools, type } Constrains the tools available to the model to a pre-defined set.
mode: "auto" or "required"Constrains the tools available to the model to a pre-defined set. auto allows the model to pick from among the allowed tools and generate a message. required requires the model to call one or more of the allowed tools.
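As a sketch, a ToolChoiceAllowed object that forces the model to call one of two named tools; the "allowed_tools" type value and the per-entry shape are assumptions based on the field list above, and the tool names are made up.

```python
# Hypothetical ToolChoiceAllowed value: the model must call either
# get_weather or get_time, and may not answer with a plain message.
tool_choice = {
    "type": "allowed_tools",  # assumed type discriminator
    "mode": "required",       # "auto" would also permit a plain message
    "tools": [
        {"type": "function", "name": "get_weather"},
        {"type": "function", "name": "get_time"},
    ],
}
```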
ToolChoiceTypes = object { type } Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
type: "file_search" or "web_search_preview" or "computer" or 5 moreThe type of hosted tool the model should use. Learn more about built-in tools.
Allowed values are:
file_search
web_search_preview
computer
computer_use_preview
computer_use
code_interpreter
image_generation
ToolChoiceFunction = object { name, type } Use this option to force the model to call a specific function.
ToolChoiceMcp = object { server_label, type, name } Use this option to force the model to call a specific tool on a remote MCP server.
ToolChoiceCustom = object { name, type } Use this option to force the model to call a specific custom tool.
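The three forcing variants differ only in their fields; sketched with hypothetical names (the "function", "mcp", and "custom" type discriminators are assumptions inferred from the object names above):

```python
# Force a specific function, MCP server tool, or custom tool.
force_function = {"type": "function", "name": "get_weather"}
force_mcp = {"type": "mcp", "server_label": "deepwiki", "name": "ask_question"}
force_custom = {"type": "custom", "name": "code_exec"}
```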
tools: array of object { name, parameters, strict, 3 more } or object { type, vector_store_ids, filters, 2 more } or object { type } or 12 moreAn array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.
We support the following categories of tools:
- Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
- MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Function = object { name, parameters, strict, 3 more } Defines a function in your own code the model can choose to call. Learn more about function calling.
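A minimal function definition might look like the following; parameters carries a JSON Schema, and the description field beyond the listed name, parameters, and strict is an assumption.

```python
# Hypothetical function tool: a strict, JSON-Schema-typed weather lookup.
get_weather = {
    "type": "function",
    "name": "get_weather",
    "description": "Look up the current weather for a city.",  # assumed field
    "strict": True,  # enforce the parameter schema exactly
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
}
```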
FileSearch = object { type, vector_store_ids, filters, 2 more } A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
CompoundFilter = object { filters, type } Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: optional object { hybrid_search, ranker, score_threshold } Ranking options for search.
Computer = object { type } A tool that controls a virtual computer. Learn more about the computer tool.
ComputerUsePreview = object { display_height, display_width, environment, type } A tool that controls a virtual computer. Learn more about the computer tool.
WebSearch = object { type, filters, search_context_size, user_location } Search the Internet for sources related to the prompt. Learn more about the web search tool.
type: "web_search" or "web_search_2025_08_26"The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { city, country, region, 2 more } The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Mcp = object { server_label, type, allowed_tools, 7 more } Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
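Tying the connector fields together, an Mcp tool entry backed by a service connector instead of a server_url might be sketched as follows; the authorization field name and the token value are placeholders.

```python
# Hypothetical MCP tool entry using the Google Drive connector.
mcp_tool = {
    "type": "mcp",
    "server_label": "gdrive",                 # label you choose
    "connector_id": "connector_googledrive",  # instead of server_url
    "authorization": "<oauth-access-token>",  # token your app obtained via OAuth
    "require_approval": "never",
}
```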
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
never: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.
CodeInterpreter = object { container, type } A tool that runs Python code to help generate a response to a prompt.
container: string or object { type, file_ids, memory_limit, network_policy } The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.
CodeInterpreterToolAuto = object { type, file_ids, memory_limit, network_policy } Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
An optional list of uploaded files to make available to your code.
memory_limit: optional "1g" or "4g" or "16g" or "64g"The memory limit for the code interpreter container.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
ImageGeneration = object { type, action, background, 9 more } A tool that generates images using the GPT image models.
A tool that generates images using the GPT image models.
action: optional "generate" or "edit" or "auto"Whether to generate a new image or edit an existing image. Default: auto.
Whether to generate a new image or edit an existing image. Default: auto.
background: optional "transparent" or "opaque" or "auto"Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
input_fidelity: optional "high" or "low"Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: optional object { file_id, image_url } Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
model: optional string or "gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"The image generation model to use. Default: gpt-image-1.
Compression level for the output image. Default: 100.
output_format: optional "png" or "webp" or "jpeg"The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
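Combining the fields above, an image_generation tool entry might look like this sketch. The field name `partial_images` for the streaming count is an assumption (the dump omits the field name), and all values are illustrative:

```python
# Illustrative image_generation tool entry; values show the documented options.
# The "partial_images" field name is assumed, since this reference omits it.
image_tool = {
    "type": "image_generation",
    "action": "auto",             # "generate", "edit", or "auto" (default)
    "background": "transparent",  # "transparent", "opaque", or "auto" (default)
    "input_fidelity": "high",     # "high" or "low" (default); unsupported on gpt-image-1-mini
    "model": "gpt-image-1",       # the default model
    "output_format": "png",       # "png" (default), "webp", or "jpeg"
    "partial_images": 2,          # 0 (default) to 3, streaming mode only
}
```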
LocalShell = object { type } A tool that allows the model to execute shell commands in a local environment.
Shell = object { type, environment } A tool that allows the model to execute shell commands.
environment: optional ContainerAuto { type, file_ids, memory_limit, 2 more } or LocalEnvironment { type, skills } or ContainerReference { container_id, type }
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
An optional list of uploaded files to make available to your code.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type } An optional list of skills referenced by id or inline data.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools
Namespace = object { description, name, tools, type } Groups function/custom tools under a shared namespace.
tools: array of object { name, type, defer_loading, 3 more } or object { name, type, defer_loading, 2 more } The function/custom tools available inside this namespace.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools
ToolSearch = object { type, description, execution, parameters } Hosted or BYOT tool search configuration for deferred tools.
WebSearchPreview = object { type, search_content_types, search_context_size, user_location } This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: "web_search_preview" or "web_search_preview_2025_03_11"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { type, city, country, 2 more } The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
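A web_search_preview tool entry with an approximate user location might look like this sketch. The `"approximate"` location type value and the city are illustrative assumptions:

```python
# Illustrative web_search_preview tool entry with an approximate user location.
# The "approximate" type value and the city are assumptions for illustration.
web_search_tool = {
    "type": "web_search_preview",
    "search_context_size": "medium",  # "low", "medium" (default), or "high"
    "user_location": {
        "type": "approximate",
        "city": "San Francisco",
        "country": "US",                    # two-letter ISO country code
        "timezone": "America/Los_Angeles",  # IANA timezone
    },
}
```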
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
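The nucleus-sampling idea behind top_p can be sketched in a few lines: keep the smallest set of highest-probability tokens whose cumulative mass reaches the threshold, then renormalize. This is an illustrative reimplementation, not the API's internal code:

```python
def top_p_filter(probs, top_p=0.1):
    """Keep the smallest set of tokens whose cumulative probability
    mass reaches top_p (nucleus sampling), then renormalize."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for token, p in ranked:
        nucleus.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in nucleus)
    return {token: p / total for token, p in nucleus}

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
# With top_p=0.5, the single most likely token already covers the mass.
```

With top_p=0.1, only tokens in the top 10% probability mass survive, which is why lowering top_p makes sampling more conservative.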
Whether to run the model response in the background. Learn more.
Unix timestamp (in seconds) of when this Response was completed.
Only present when the status is completed.
conversation: optional object { id } The conversation that this response belonged to. Input items and output items from this response were automatically added to this conversation.
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
SDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
The unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
conversation state. Cannot be used in conjunction with conversation.
Reference to a prompt template and its variables. Learn more.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
prompt_cache_retention: optional "in-memory" or "24h"The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
gpt-5 and o-series models only
Configuration options for reasoning models.
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user, with a maximum length of 64 characters. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.
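Following the recommendation above, a safety identifier can be derived by hashing the user's email. A SHA-256 hex digest is exactly 64 characters, which fits the stated maximum; this is one reasonable approach, not a required scheme:

```python
import hashlib

def safety_identifier(email: str) -> str:
    """Hash an email into a stable, non-identifying ID.
    A SHA-256 hex digest is 64 chars, matching the documented maximum."""
    return hashlib.sha256(email.lower().encode("utf-8")).hexdigest()

sid = safety_identifier("user@example.com")
```

Lower-casing before hashing keeps the identifier stable across case variations of the same address.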
service_tier: optional "auto" or "default" or "flex" or 2 moreSpecifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
The status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
truncation: optional "auto" or "disabled"The truncation strategy to use for the model response.
auto: If the input to this Response exceeds
the model's context window size, the model will truncate the
response to fit the context window by dropping items from the beginning of the conversation.
disabled (default): If the input size will exceed the context window
size for a model, the request will fail with a 400 error.
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
ResponseContent = ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } or 3 moreMulti-modal input and output contents.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
ResponseContentPartAddedEvent = object { content_index, item_id, output_index, 3 more } Emitted when a new content part is added.
part: ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } or object { text, type } The content part that was added.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
ResponseContentPartDoneEvent = object { content_index, item_id, output_index, 3 more } Emitted when a content part is done.
part: ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } or object { text, type } The content part that is done.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
ResponseFormatTextConfig = ResponseFormatText { type } or ResponseFormatTextJSONSchemaConfig { name, schema, type, 2 more } or ResponseFormatJSONObject { type } An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
ResponseFormatTextJSONSchemaConfig = object { name, schema, type, 2 more } JSON Schema response format. Used to generate structured JSON responses.
Learn more about Structured Outputs.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
A description of what the response format is for, used by the model to determine how to respond in the format.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
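Putting these fields together, a text format configuration that enables Structured Outputs might look like the following sketch (the schema, name, and nesting under a `format` key are illustrative):

```python
# Illustrative Structured Outputs configuration; the schema and name
# are examples, not a required shape for your data.
text_format = {
    "format": {
        "type": "json_schema",
        "name": "weather_report",  # a-z, A-Z, 0-9, underscores, dashes; max 64 chars
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temp_c": {"type": "number"},
            },
            "required": ["city", "temp_c"],
            "additionalProperties": False,
        },
        "strict": True,  # enforce exact adherence to the schema
    }
}
```

With `strict` set to true, remember that only a subset of JSON Schema is supported.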
ResponseFunctionShellCallOutputContent = object { outcome, stderr, stdout } Captured stdout and stderr for a portion of a shell tool call output.
ResponseIncludable = "file_search_call.results" or "web_search_call.results" or "web_search_call.action.sources" or 5 moreSpecify additional output data to include in the model response. Currently supported values are:
web_search_call.action.sources: Include the sources of the web search tool call.
code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.
computer_call_output.output.image_url: Include image urls from the computer call output.
file_search_call.results: Include the search results of the file search tool call.
message.input_image.image_url: Include image urls from the input message.
message.output_text.logprobs: Include logprobs with assistant messages.
reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).
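A request selecting several of these includable values might look like this sketch (the model name and input are placeholders):

```python
# Illustrative request parameters; model name and input are placeholders.
request_params = {
    "model": "gpt-4.1",
    "input": "Summarize the attached report.",
    "include": [
        "file_search_call.results",
        "message.output_text.logprobs",
        "reasoning.encrypted_content",
    ],
}
```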
ResponseInputContent = ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } A text input to the model.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
ResponseInputImageContent = object { type, detail, file_id, image_url } An image input to the model. Learn about image inputs.
A list of one or many input items to the model, containing different content
types.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
ResponseOutputItem = ResponseOutputMessage { id, content, role, 3 more } or object { id, queries, status, 2 more } or object { arguments, call_id, name, 4 more } or 22 moreAn output message from the model.
ResponseOutputMessage = object { id, content, role, 3 more } An output message from the model.
content: array of ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } The content of the output message.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
status: "in_progress" or "completed" or "incomplete"The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
phase: optional "commentary" or "final_answer"Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
FileSearchCall = object { id, queries, status, 2 more } The results of a file search tool call. See the
file search guide for more information.
status: "in_progress" or "searching" or "completed" or 2 moreThe status of the file search tool call. One of in_progress,
searching, completed, incomplete, or failed.
results: optional array of object { attributes, file_id, filename, 2 more } The results of the file search tool call.
attributes: optional map[string or number or boolean]Set of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard. Keys are strings
with a maximum length of 64 characters. Values are strings with a maximum
length of 512 characters, booleans, or numbers.
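The documented constraints on these attributes can be checked client-side before attaching them. This is an illustrative validator, not part of the API:

```python
def validate_attributes(attrs):
    """Check the documented constraints on attributes:
    at most 16 key-value pairs, keys up to 64 characters, string
    values up to 512 characters, values limited to strings,
    booleans, or numbers."""
    if len(attrs) > 16:
        return False
    for key, value in attrs.items():
        if len(key) > 64:
            return False
        if isinstance(value, str):
            if len(value) > 512:
                return False
        elif not isinstance(value, (bool, int, float)):
            return False
    return True
```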
FunctionCall = object { arguments, call_id, name, 4 more } A tool call to run a function. See the
function calling guide for more information.
FunctionCallOutput = object { id, call_id, output, 3 more }
output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } The output from the function call generated by your code.
Can be a string or a list of output content.
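Both output shapes can be sketched as input items (the call IDs, text, and file ID are placeholders; the content part type names follow the types listed above):

```python
# Illustrative function_call_output items; call_id and file_id are placeholders.
# The string form:
function_call_output = {
    "type": "function_call_output",
    "call_id": "call_abc123",    # must match the FunctionCall's call_id
    "output": "72F and sunny",
}

# The content-list form, mixing text and image parts:
function_call_output_list = {
    "type": "function_call_output",
    "call_id": "call_abc123",
    "output": [
        {"type": "input_text", "text": "Chart generated."},
        {"type": "input_image", "file_id": "file-xyz789", "detail": "auto"},
    ],
}
```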
OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } Text, image, or file output of the function call.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
WebSearchCall = object { id, action, status, type } The results of a web search tool call. See the
web search guide for more information.
action: object { query, type, queries, sources } or object { type, url } or object { pattern, type, url } An object describing the specific action taken in this web search call.
Includes details on how the model used the web (search, open_page, find_in_page).
ComputerCall = object { id, call_id, pending_safety_checks, 4 more } A tool call to a computer use tool. See the
computer use guide for more information.
pending_safety_checks: array of object { id, code, message } The pending safety checks for the computer call.
status: "in_progress" or "completed" or "incomplete"The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
ComputerCallOutput = object { id, call_id, output, 4 more }
status: "completed" or "incomplete" or "failed" or "in_progress"The status of the message input. One of in_progress, completed,
incomplete, or failed. Populated when input items are returned via API.
Reasoning = object { id, summary, type, 3 more } A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
ToolSearchCall = object { id, arguments, call_id, 4 more }
ToolSearchOutput = object { id, call_id, execution, 4 more }
status: "in_progress" or "completed" or "incomplete"The status of the tool search output item that was recorded.
tools: array of object { name, parameters, strict, 3 more } or object { type, vector_store_ids, filters, 2 more } or object { type } or 12 moreThe loaded tool definitions returned by tool search.
Function = object { name, parameters, strict, 3 more } Defines a function in your own code the model can choose to call. Learn more about function calling.
FileSearch = object { type, vector_store_ids, filters, 2 more } A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
CompoundFilter = object { filters, type } Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
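The filter semantics can be sketched as a small evaluator over a file's attributes. The comparison type names (`eq`, `ne`, `gt`, `lt`) are assumptions covering only a few common operators, and the attribute keys are examples:

```python
def matches(filter_obj, attrs):
    """Evaluate a ComparisonFilter or CompoundFilter against a file's
    attributes. Only a few assumed comparison types are sketched."""
    t = filter_obj["type"]
    if t in ("and", "or"):
        results = (matches(f, attrs) for f in filter_obj["filters"])
        return all(results) if t == "and" else any(results)
    value = attrs.get(filter_obj["key"])
    target = filter_obj["value"]
    if t == "eq":
        return value == target
    if t == "ne":
        return value != target
    if t == "gt":
        return value is not None and value > target
    if t == "lt":
        return value is not None and value < target
    raise ValueError(f"unsupported comparison type: {t}")

# An "and" compound filter combining two comparisons:
blog_after_2020 = {"type": "and", "filters": [
    {"type": "eq", "key": "category", "value": "blog"},
    {"type": "gt", "key": "year", "value": 2020},
]}
```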
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: optional object { hybrid_search, ranker, score_threshold } Ranking options for search.
Computer = object { type } A tool that controls a virtual computer. Learn more about the computer tool.
ComputerUsePreview = object { display_height, display_width, environment, type } A tool that controls a virtual computer. Learn more about the computer tool.
WebSearch = object { type, filters, search_context_size, user_location } Search the Internet for sources related to the prompt. Learn more about the
web search tool.
type: "web_search" or "web_search_2025_08_26"The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { city, country, region, 2 more } The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Mcp = object { server_label, type, allowed_tools, 7 more } Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools always require approval.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
never: optional object { read_only, tool_names } A filter object to specify which tools never require approval.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
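A sketch of the filter-object form (tool names are illustrative): the named read-only tools skip approval, while every other tool on the server still requires it.

```python
# require_approval can be the string "always"/"never", or a filter object.
require_approval = {
    "never": {
        "tool_names": ["list_releases", "get_release"],  # illustrative names
        "read_only": True,  # matches tools annotated readOnlyHint
    }
}
```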
CodeInterpreter = object { container, type } A tool that runs Python code to help generate a response to a prompt.
container: string or object { type, file_ids, memory_limit, network_policy } The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
CodeInterpreterToolAuto = object { type, file_ids, memory_limit, network_policy } Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
An optional list of uploaded files to make available to your code.
memory_limit: optional "1g" or "4g" or "16g" or "64g"The memory limit for the code interpreter container.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
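A sketch of the object form of container (the file ID is a placeholder):

```python
# Code interpreter tool with an auto container: the uploaded file is made
# available to the generated Python code, with a 4g memory cap.
code_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],  # placeholder uploaded-file ID
        "memory_limit": "4g",
    },
}
```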
ImageGeneration = object { type, action, background, 9 more } A tool that generates images using the GPT image models.
action: optional "generate" or "edit" or "auto"Whether to generate a new image or edit an existing image. Default: auto.
background: optional "transparent" or "opaque" or "auto"Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
input_fidelity: optional "high" or "low"Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: optional object { file_id, image_url } Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
model: optional string or "gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"The image generation model to use. Default: gpt-image-1.
Compression level for the output image. Default: 100.
output_format: optional "png" or "webp" or "jpeg"The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
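Putting those fields together, an edit-with-mask configuration might look like this (the mask file ID is a placeholder, and partial_images is assumed to be the field name for the streaming count described above):

```python
# Image generation tool configured for an inpainting edit.
image_tool = {
    "type": "image_generation",
    "action": "edit",
    "model": "gpt-image-1.5",
    "input_fidelity": "high",            # match input facial features closely
    "input_image_mask": {"file_id": "file-mask123"},  # placeholder mask file
    "background": "transparent",
    "output_format": "png",
    "partial_images": 2,                 # stream up to 2 partial images
}
```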
LocalShell = object { type } A tool that allows the model to execute shell commands in a local environment.
Shell = object { type, environment } A tool that allows the model to execute shell commands.
environment: optional ContainerAuto { type, file_ids, memory_limit, 2 more } or LocalEnvironment { type, skills } or ContainerReference { container_id, type }
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
An optional list of uploaded files to make available to your code.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type } An optional list of skills referenced by id or inline data.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools
Namespace = object { description, name, tools, type } Groups function/custom tools under a shared namespace.
tools: array of object { name, type, defer_loading, 3 more } or object { name, type, defer_loading, 2 more } The function/custom tools available inside this namespace.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools
ToolSearch = object { type, description, execution, parameters } Hosted or BYOT tool search configuration for deferred tools.
WebSearchPreview = object { type, search_content_types, search_context_size, user_location } This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: "web_search_preview" or "web_search_preview_2025_03_11"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { type, city, country, 2 more } The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
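A sketch of a web search tool with an approximate user location (the location type string "approximate" and the city values are assumptions for illustration):

```python
# Web search tool with location hints to bias result relevance.
web_tool = {
    "type": "web_search_preview",
    "search_context_size": "low",             # spend less context on results
    "user_location": {
        "type": "approximate",                # assumed location type
        "country": "US",                      # two-letter ISO country code
        "city": "San Francisco",
        "timezone": "America/Los_Angeles",    # IANA timezone
    },
}
```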
Compaction = object { id, encrypted_content, type, created_by } A compaction item generated by the v1/responses/compact API.
ImageGenerationCall = object { id, result, status, type } An image generation request made by the model.
CodeInterpreterCall = object { id, code, container_id, 3 more } A tool call to run code.
outputs: array of object { logs, type } or object { type, url } The outputs generated by the code interpreter, such as logs or images.
Can be null if no outputs are available.
LocalShellCall = object { id, action, call_id, 2 more } A tool call to run a command on the local shell.
LocalShellCallOutput = object { id, output, type, status } The output of a local shell tool call.
ShellCall = object { id, action, call_id, 4 more } A tool call that executes one or more shell commands in a managed environment.
action: object { commands, max_output_length, timeout_ms } The shell commands and limits that describe how to run the tool call.
Represents the use of a local environment to perform shell actions.
ShellCallOutput = object { id, call_id, max_output_length, 4 more } The output of a shell tool call that was emitted.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
output: array of object { outcome, stderr, stdout, created_by } An array of shell call output contents.
ApplyPatchCall = object { id, call_id, operation, 3 more } A tool call that applies file diffs by creating, deleting, or updating files.
operation: object { diff, path, type } or object { path, type } or object { diff, path, type } One of the create_file, delete_file, or update_file operations applied via apply_patch.
ApplyPatchCallOutput = object { id, call_id, status, 3 more } The output emitted by an apply patch tool call.
McpCall = object { id, arguments, name, 6 more } An invocation of a tool on an MCP server.
McpListTools = object { id, server_label, tools, 2 more } A list of tools available on an MCP server.
McpApprovalRequest = object { id, arguments, name, 2 more } A request for human approval of a tool invocation.
McpApprovalResponse = object { id, approval_request_id, approve, 2 more } A response to an MCP approval request.
CustomToolCall = object { call_id, input, name, 3 more } A call to a custom tool created by the model.
CustomToolCallOutput = object { id, call_id, output, 3 more }
output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } The output from the custom tool call generated by your code.
Can be a string or a list of output content.
OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } Text, image, or file output of the custom tool call.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
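After your code runs the requested tool, its result goes back as an output item. A sketch using the content-list form (the call ID and text are placeholders, and the item type string is assumed from the object name above):

```python
# Custom tool call output item; call_id must echo the model's call.
tool_output_item = {
    "type": "custom_tool_call_output",   # assumed type string
    "call_id": "call_abc123",            # placeholder ID from the model's call
    "output": [
        {"type": "input_text", "text": "deploy finished: 3 services updated"},
    ],
}
```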
status: "in_progress" or "completed" or "incomplete"The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
ResponseOutputMessage = object { id, content, role, 3 more } An output message from the model.
content: array of ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } The content of the output message.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
status: "in_progress" or "completed" or "incomplete"The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
phase: optional "commentary" or "final_answer"Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
ResponsePrompt = object { id, variables, version } Reference to a prompt template and its variables.
Learn more.
variables: optional map[string or ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } ]Optional map of values to substitute in for variables in your
prompt. The substitution values can either be strings, or other
Response input types like images or files.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
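A sketch of a prompt reference mixing a string variable with an image variable (the template ID and URL are placeholders):

```python
# Reference to a stored prompt template with substitution variables.
prompt_ref = {
    "id": "pmpt_abc123",          # placeholder prompt template ID
    "version": "2",
    "variables": {
        "customer_name": "Ada",   # plain string substitution
        "screenshot": {           # image substitution via an input item
            "type": "input_image",
            "image_url": "https://example.com/shot.png",
            "detail": "auto",
        },
    },
}
```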
ResponseStreamEvent = ResponseAudioDeltaEvent { delta, sequence_number, type } or ResponseAudioDoneEvent { sequence_number, type } or ResponseAudioTranscriptDeltaEvent { delta, sequence_number, type } or 50 moreEmitted when there is a partial audio response.
ResponseAudioDeltaEvent = object { delta, sequence_number, type } Emitted when there is a partial audio response.
ResponseAudioDoneEvent = object { sequence_number, type } Emitted when the audio response is complete.
ResponseAudioTranscriptDeltaEvent = object { delta, sequence_number, type } Emitted when there is a partial transcript of audio.
ResponseAudioTranscriptDoneEvent = object { sequence_number, type } Emitted when the full audio transcript is completed.
ResponseCodeInterpreterCallCodeDeltaEvent = object { delta, item_id, output_index, 2 more } Emitted when a partial code snippet is streamed by the code interpreter.
ResponseCodeInterpreterCallCodeDoneEvent = object { code, item_id, output_index, 2 more } Emitted when the code snippet is finalized by the code interpreter.
ResponseCodeInterpreterCallCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when the code interpreter call is completed.
ResponseCodeInterpreterCallInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when a code interpreter call is in progress.
ResponseCodeInterpreterCallInterpretingEvent = object { item_id, output_index, sequence_number, type } Emitted when the code interpreter is actively interpreting the code snippet.
ResponseCompletedEvent = object { response, sequence_number, type } Emitted when the model response is complete.
ResponseContentPartAddedEvent = object { content_index, item_id, output_index, 3 more } Emitted when a new content part is added.
part: ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } or object { text, type } The content part that was added.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
ResponseContentPartDoneEvent = object { content_index, item_id, output_index, 3 more } Emitted when a content part is done.
part: ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } or object { text, type } The content part that is done.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
ResponseCreatedEvent = object { response, sequence_number, type } An event that is emitted when a response is created.
ResponseFileSearchCallCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when a file search call is completed (results found).
ResponseFileSearchCallInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when a file search call is initiated.
ResponseFileSearchCallSearchingEvent = object { item_id, output_index, sequence_number, type } Emitted when a file search is currently searching.
ResponseFunctionCallArgumentsDeltaEvent = object { delta, item_id, output_index, 2 more } Emitted when there is a partial function-call arguments delta.
ResponseFunctionCallArgumentsDoneEvent = object { arguments, item_id, name, 3 more } Emitted when function-call arguments are finalized.
ResponseInProgressEvent = object { response, sequence_number, type } Emitted when the response is in progress.
ResponseFailedEvent = object { response, sequence_number, type } An event that is emitted when a response fails.
ResponseIncompleteEvent = object { response, sequence_number, type } An event that is emitted when a response finishes as incomplete.
ResponseOutputItemAddedEvent = object { item, output_index, sequence_number, type } Emitted when a new output item is added.
ResponseOutputItemDoneEvent = object { item, output_index, sequence_number, type } Emitted when an output item is marked done.
ResponseReasoningSummaryPartAddedEvent = object { item_id, output_index, part, 3 more } Emitted when a new reasoning summary part is added.
ResponseReasoningSummaryPartDoneEvent = object { item_id, output_index, part, 3 more } Emitted when a reasoning summary part is completed.
ResponseReasoningSummaryTextDeltaEvent = object { delta, item_id, output_index, 3 more } Emitted when a delta is added to a reasoning summary text.
ResponseReasoningSummaryTextDoneEvent = object { item_id, output_index, sequence_number, 3 more } Emitted when a reasoning summary text is completed.
ResponseReasoningTextDeltaEvent = object { content_index, delta, item_id, 3 more } Emitted when a delta is added to a reasoning text.
ResponseReasoningTextDoneEvent = object { content_index, item_id, output_index, 3 more } Emitted when a reasoning text is completed.
ResponseRefusalDeltaEvent = object { content_index, delta, item_id, 3 more } Emitted when there is a partial refusal text.
ResponseRefusalDoneEvent = object { content_index, item_id, output_index, 3 more } Emitted when refusal text is finalized.
ResponseTextDeltaEvent = object { content_index, delta, item_id, 4 more } Emitted when there is an additional text delta.
ResponseTextDoneEvent = object { content_index, item_id, logprobs, 4 more } Emitted when text content is finalized.
ResponseWebSearchCallCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when a web search call is completed.
ResponseWebSearchCallInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when a web search call is initiated.
ResponseWebSearchCallSearchingEvent = object { item_id, output_index, sequence_number, type } Emitted when a web search call is executing.
ResponseImageGenCallCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when an image generation tool call has completed and the final image is available.
ResponseImageGenCallGeneratingEvent = object { item_id, output_index, sequence_number, type } Emitted when an image generation tool call is actively generating an image (intermediate state).
ResponseImageGenCallInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when an image generation tool call is in progress.
ResponseImageGenCallPartialImageEvent = object { item_id, output_index, partial_image_b64, 3 more } Emitted when a partial image is available during image generation streaming.
ResponseMcpCallArgumentsDeltaEvent = object { delta, item_id, output_index, 2 more } Emitted when there is a delta (partial update) to the arguments of an MCP tool call.
ResponseMcpCallArgumentsDoneEvent = object { arguments, item_id, output_index, 2 more } Emitted when the arguments for an MCP tool call are finalized.
ResponseMcpCallCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when an MCP tool call has completed successfully.
ResponseMcpCallFailedEvent = object { item_id, output_index, sequence_number, type } Emitted when an MCP tool call has failed.
ResponseMcpCallInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when an MCP tool call is in progress.
ResponseMcpListToolsCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when the list of available MCP tools has been successfully retrieved.
ResponseMcpListToolsFailedEvent = object { item_id, output_index, sequence_number, type } Emitted when the attempt to list available MCP tools has failed.
ResponseMcpListToolsInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when the system is in the process of retrieving the list of available MCP tools.
ResponseOutputTextAnnotationAddedEvent = object { annotation, annotation_index, content_index, 4 more } Emitted when an annotation is added to output text content.
ResponseQueuedEvent = object { response, sequence_number, type } Emitted when a response is queued and waiting to be processed.
ResponseCustomToolCallInputDeltaEvent = object { delta, item_id, output_index, 2 more } Event representing a delta (partial update) to the input of a custom tool call.
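A consumer typically switches on each event's type string and accumulates deltas until the matching done or completed event arrives. A minimal sketch over a simulated stream (the dotted type strings follow the response.output_text.delta naming convention; treat them as assumptions if your SDK version differs):

```python
def accumulate_text(events):
    """Fold output_text delta events into the final text per output item."""
    texts = {}
    for event in events:
        if event["type"] == "response.output_text.delta":
            item_id = event["item_id"]
            texts[item_id] = texts.get(item_id, "") + event["delta"]
        elif event["type"] == "response.completed":
            break  # terminal event: the response is complete
    return texts

# Simulated stream of events, in sequence order.
stream = [
    {"type": "response.output_text.delta", "item_id": "msg_1", "delta": "Hel"},
    {"type": "response.output_text.delta", "item_id": "msg_1", "delta": "lo"},
    {"type": "response.completed"},
]
```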
ResponseTextConfig = object { format, verbosity } Configuration options for a text response from the model. Can be plain
text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
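A sketch of the json_schema form (the schema name and fields are illustrative; strict enforces exact schema adherence):

```python
# Structured Outputs: a text.format object the model must follow.
text_config = {
    "format": {
        "type": "json_schema",
        "name": "ticket",           # illustrative schema name
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "priority": {"type": "string", "enum": ["low", "high"]},
            },
            "required": ["title", "priority"],
            "additionalProperties": False,
        },
    }
}
```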
ResponseUsage = object { input_tokens, input_tokens_details, output_tokens, 2 more } Represents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
input_tokens_details: object { cached_tokens } A detailed breakdown of the input tokens.
The number of tokens that were retrieved from the cache. More on prompt caching.
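Since cached_tokens is a subset of input_tokens, the uncached portion is a simple subtraction (the numbers below are illustrative):

```python
# Splitting input tokens into cached vs. uncached from a usage payload.
usage = {
    "input_tokens": 1200,
    "input_tokens_details": {"cached_tokens": 1024},
    "output_tokens": 300,
    "total_tokens": 1500,
}
uncached = usage["input_tokens"] - usage["input_tokens_details"]["cached_tokens"]
```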
ResponsesClientEvent = object { type, background, context_management, 27 more }
Whether to run the model response in the background. Learn more.
context_management: optional array of object { type, compact_threshold } Context management configuration for this request.
The conversation that this response belongs to. Items from this conversation are prepended to input_items for this response request.
Input items and output items from this response are automatically added to this conversation after this response completes.
Specify additional output data to include in the model response. Currently supported values are:
web_search_call.action.sources: Include the sources of the web search tool call.
code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.
computer_call_output.output.image_url: Include image urls from the computer call output.
file_search_call.results: Include the search results of the file search tool call.
message.input_image.image_url: Include image urls from the input message.
message.output_text.logprobs: Include logprobs with assistant messages.
reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).
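An include array is just a list of those dotted-path strings, for example:

```python
# Request extra fields in the response output; each entry is one of the
# supported values listed above.
include = [
    "web_search_call.action.sources",
    "message.output_text.logprobs",
    "reasoning.encrypted_content",  # needed for stateless multi-turn use
]
```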
input: optional string or array of EasyInputMessage { content, role, phase, type } or object { content, role, status, type } or ResponseOutputMessage { id, content, role, 3 more } or 25 moreText, image, or file inputs to the model, used to generate a response.
Learn more:
InputItemList = array of EasyInputMessage { content, role, phase, type } or object { content, role, status, type } or ResponseOutputMessage { id, content, role, 3 more } or 25 moreA list of one or many input items to the model, containing
different content types.
EasyInputMessage = object { content, role, phase, type } A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response.
Can also contain previous assistant responses.
A list of one or many input items to the model, containing different content
types.
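To make the two shapes concrete, here is a sketch of `input` as a bare string (shorthand for a single user message) versus an array of typed input items; the image URL is a placeholder:

```python
# A bare string is shorthand for one user message.
input_as_string = "What is in this image?"

# An array mixes typed input items in one message.
input_as_items = [
    {
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What is in this image?"},
            {"type": "input_image", "image_url": "https://example.com/cat.png"},
        ],
    },
]
```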
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
role: "user" or "assistant" or "system" or "developer"The role of the message input. One of user, assistant, system, or
developer.
phase: optional "commentary" or "final_answer"Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
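A sketch of what "preserve and resend" means in practice: the follow-up request reuses the prior assistant messages verbatim, `phase` included (message contents are placeholders):

```python
# Conversation history as returned, with `phase` on assistant messages.
history = [
    {"role": "user", "content": "Refactor this function."},
    {"role": "assistant", "phase": "commentary", "content": "Looking at the code first."},
    {"role": "assistant", "phase": "final_answer", "content": "Here is the refactored version."},
]

# The follow-up request keeps the history unchanged, phases intact.
followup_input = history + [{"role": "user", "content": "Now add type hints."}]
```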
Message = object { content, role, status, type } A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role.
role: "user" or "system" or "developer"The role of the message input. One of user, system, or developer.
ResponseOutputMessage = object { id, content, role, 3 more } An output message from the model.
content: array of ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } The content of the output message.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
status: "in_progress" or "completed" or "incomplete"The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
phase: optional "commentary" or "final_answer"Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
FileSearchCall = object { id, queries, status, 2 more } The results of a file search tool call. See the
file search guide for more information.
status: "in_progress" or "searching" or "completed" or 2 moreThe status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.
results: optional array of object { attributes, file_id, filename, 2 more } The results of the file search tool call.
attributes: optional map[string or number or boolean]Set of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard. Keys are strings
with a maximum length of 64 characters. Values are strings with a maximum
length of 512 characters, booleans, or numbers.
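These limits can be checked client-side before attaching attributes; the helper below is an illustrative sketch, not part of any SDK:

```python
def validate_attributes(attributes: dict) -> None:
    """Check the documented limits: at most 16 pairs, keys <= 64 chars,
    string values <= 512 chars, values must be strings, booleans, or numbers."""
    if len(attributes) > 16:
        raise ValueError("at most 16 key-value pairs are allowed")
    for key, value in attributes.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"key {key!r} must be a string of at most 64 characters")
        if isinstance(value, str):
            if len(value) > 512:
                raise ValueError(f"value for {key!r} exceeds 512 characters")
        elif not isinstance(value, (bool, int, float)):
            raise ValueError(f"value for {key!r} must be a string, boolean, or number")
```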
ComputerCall = object { id, call_id, pending_safety_checks, 4 more } A tool call to a computer use tool. See the
computer use guide for more information.
pending_safety_checks: array of object { id, code, message } The pending safety checks for the computer call.
status: "in_progress" or "completed" or "incomplete"The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
ComputerCallOutput = object { call_id, output, type, 3 more } The output of a computer tool call.
WebSearchCall = object { id, action, status, type } The results of a web search tool call. See the
web search guide for more information.
action: object { query, type, queries, sources } or object { type, url } or object { pattern, type, url } An object describing the specific action taken in this web search call.
Includes details on how the model used the web (search, open_page, find_in_page).
FunctionCall = object { arguments, call_id, name, 4 more } A tool call to run a function. See the
function calling guide for more information.
FunctionCallOutput = object { call_id, output, type, 2 more } The output of a function tool call.
The unique ID of the function tool call generated by the model.
output: string or array of ResponseInputTextContent { text, type } or ResponseInputImageContent { type, detail, file_id, image_url } or ResponseInputFileContent { type, file_data, file_id, 2 more } Text, image, or file output of the function tool call.
array of ResponseInputTextContent { text, type } or ResponseInputImageContent { type, detail, file_id, image_url } or ResponseInputFileContent { type, file_data, file_id, 2 more } An array of content outputs (text, image, file) for the function tool call.
ResponseInputImageContent = object { type, detail, file_id, image_url } An image input to the model. Learn about image inputs
ToolSearchCall = object { arguments, type, id, 3 more }
ToolSearchOutput = object { tools, type, id, 3 more }
tools: array of object { name, parameters, strict, 3 more } or object { type, vector_store_ids, filters, 2 more } or object { type } or 12 moreThe loaded tool definitions returned by the tool search output.
Function = object { name, parameters, strict, 3 more } Defines a function in your own code the model can choose to call. Learn more about function calling.
FileSearch = object { type, vector_store_ids, filters, 2 more } A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
CompoundFilter = object { filters, type } Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: optional object { hybrid_search, ranker, score_threshold } Ranking options for search.
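Putting the filter types together, here is a sketch of a file_search tool definition that combines a comparison filter with a compound filter; the vector store ID and attribute keys are placeholders:

```python
# Hypothetical file_search tool: an "and" of a region comparison with an
# "or" over two document types.
file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_123"],  # placeholder vector store ID
    "max_num_results": 10,           # must be between 1 and 50 inclusive
    "filters": {
        "type": "and",
        "filters": [
            {"type": "eq", "key": "region", "value": "us"},
            {
                "type": "or",
                "filters": [
                    {"type": "eq", "key": "doc_type", "value": "report"},
                    {"type": "eq", "key": "doc_type", "value": "memo"},
                ],
            },
        ],
    },
}
```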
Computer = object { type } A tool that controls a virtual computer. Learn more about the computer tool.
ComputerUsePreview = object { display_height, display_width, environment, type } A tool that controls a virtual computer. Learn more about the computer tool.
WebSearch = object { type, filters, search_context_size, user_location } Search the Internet for sources related to the prompt. Learn more about the
web search tool.
type: "web_search" or "web_search_2025_08_26"The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { city, country, region, 2 more } The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Mcp = object { server_label, type, allowed_tools, 7 more } Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox:
connector_dropbox
- Gmail:
connector_gmail
- Google Calendar:
connector_googlecalendar
- Google Drive:
connector_googledrive
- Microsoft Teams:
connector_microsoftteams
- Outlook Calendar:
connector_outlookcalendar
- Outlook Email:
connector_outlookemail
- SharePoint:
connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
never: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
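Tying these fields together, here is a sketch of an MCP tool entry that uses a service connector, restricts the model to read-only tools, and requires approval for every call; the OAuth token is a placeholder your application must obtain through its own authorization flow:

```python
# Hypothetical MCP tool configuration using the Gmail service connector.
mcp_tool = {
    "type": "mcp",
    "server_label": "gmail",
    "connector_id": "connector_gmail",        # one of the supported connector IDs
    "authorization": "<oauth-access-token>",  # placeholder; app-provided token
    "allowed_tools": {"read_only": True},     # only tools annotated readOnlyHint
    "require_approval": "always",
}
```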
CodeInterpreter = object { container, type } A tool that runs Python code to help generate a response to a prompt.
container: string or object { type, file_ids, memory_limit, network_policy } The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
CodeInterpreterToolAuto = object { type, file_ids, memory_limit, network_policy } Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
An optional list of uploaded files to make available to your code.
memory_limit: optional "1g" or "4g" or "16g" or "64g"The memory limit for the code interpreter container.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
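As a sketch, a code_interpreter tool whose container is configured inline with uploaded files and a larger memory limit might look like this; the file IDs are placeholders:

```python
# Hypothetical code_interpreter tool with an auto-configured container.
code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123", "file-def456"],  # placeholder upload IDs
        "memory_limit": "4g",  # one of "1g", "4g", "16g", "64g"
    },
}
```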
ImageGeneration = object { type, action, background, 9 more } A tool that generates images using the GPT image models.
action: optional "generate" or "edit" or "auto"Whether to generate a new image or edit an existing image. Default: auto.
background: optional "transparent" or "opaque" or "auto"Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
input_fidelity: optional "high" or "low"Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: optional object { file_id, image_url } Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
model: optional string or "gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"The image generation model to use. Default: gpt-image-1.
Compression level for the output image. Default: 100.
output_format: optional "png" or "webp" or "jpeg"The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
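The options above combine into a tool entry like the sketch below. The mask file ID is a placeholder, and the `partial_images` field name for streaming partials is an assumption not spelled out in the text above:

```python
# Hypothetical image_generation tool tuned for a masked edit.
image_tool = {
    "type": "image_generation",
    "action": "edit",
    "background": "transparent",
    "input_fidelity": "high",          # unsupported on gpt-image-1-mini
    "input_image_mask": {"file_id": "file-mask123"},  # placeholder mask upload
    "output_format": "png",
    "partial_images": 2,               # assumed field name; range 0 (default) to 3
}
```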
LocalShell = object { type } A tool that allows the model to execute shell commands in a local environment.
Shell = object { type, environment } A tool that allows the model to execute shell commands.
environment: optional ContainerAuto { type, file_ids, memory_limit, 2 more } or LocalEnvironment { type, skills } or ContainerReference { container_id, type }
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
An optional list of uploaded files to make available to your code.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type } An optional list of skills referenced by id or inline data.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools
Namespace = object { description, name, tools, type } Groups function/custom tools under a shared namespace.
tools: array of object { name, type, defer_loading, 3 more } or object { name, type, defer_loading, 2 more } The function/custom tools available inside this namespace.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools
ToolSearch = object { type, description, execution, parameters } Hosted or BYOT tool search configuration for deferred tools.
WebSearchPreview = object { type, search_content_types, search_context_size, user_location } This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: "web_search_preview" or "web_search_preview_2025_03_11"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { type, city, country, 2 more } The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
The unique ID of the tool search call generated by the model.
Reasoning = object { id, summary, type, 3 more } A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
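A sketch of what manual context management looks like in practice: the previous turn's output items, reasoning items included, are echoed back as part of the next turn's input. Item shapes here are simplified placeholders:

```python
# Output items from the previous turn, including an opaque reasoning item.
previous_output = [
    {"type": "reasoning", "id": "rs_1", "encrypted_content": "<opaque>", "summary": []},
    {"type": "message", "role": "assistant", "content": "The answer is 42."},
]

# The next turn's input replays the conversation, reasoning items and all.
next_input = (
    [{"role": "user", "content": "What is six times seven?"}]
    + previous_output
    + [{"role": "user", "content": "Explain how you got that."}]
)
```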
Compaction = object { encrypted_content, type, id } A compaction item generated by the v1/responses/compact API.
ImageGenerationCall = object { id, result, status, type } An image generation request made by the model.
CodeInterpreterCall = object { id, code, container_id, 3 more } A tool call to run code.
outputs: array of object { logs, type } or object { type, url } The outputs generated by the code interpreter, such as logs or images.
Can be null if no outputs are available.
LocalShellCall = object { id, action, call_id, 2 more } A tool call to run a command on the local shell.
LocalShellCallOutput = object { id, output, type, status } The output of a local shell tool call.
ShellCall = object { action, call_id, type, 3 more } A tool representing a request to execute one or more shell commands.
action: object { commands, max_output_length, timeout_ms } The shell commands and limits that describe how to run the tool call.
The unique ID of the shell tool call. Populated when this item is returned via API.
environment: optional LocalEnvironment { type, skills } or ContainerReference { container_id, type } The environment to execute the shell commands in.
ShellCallOutput = object { call_id, output, type, 3 more } The streamed output items emitted by a shell tool call.
Captured chunks of stdout and stderr output, along with their associated outcomes.
The unique ID of the shell tool call output. Populated when this item is returned via API.
ApplyPatchCall = object { call_id, operation, status, 2 more } A tool call representing a request to create, delete, or update files using diff patches.
The unique ID of the apply patch tool call generated by the model.
operation: object { diff, path, type } or object { path, type } or object { diff, path, type } The specific create, delete, or update instruction for the apply_patch tool call.
CreateFile = object { diff, path, type } Instruction for creating a new file via the apply_patch tool.
DeleteFile = object { path, type } Instruction for deleting an existing file via the apply_patch tool.
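The three operation shapes an apply_patch call can carry might look like the sketch below; the paths and diff bodies are illustrative placeholders, and the snake_case type tags are assumed from the object names:

```python
# Hypothetical apply_patch operations (type tags assumed from object names).
create_op = {"type": "create_file", "path": "src/new_module.py", "diff": "+print('hello')\n"}
delete_op = {"type": "delete_file", "path": "src/old_module.py"}
update_op = {"type": "update_file", "path": "src/app.py", "diff": "-old line\n+new line\n"}
```

Note that only create and update carry a diff; delete needs just a path.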
ApplyPatchCallOutput = object { call_id, status, type, 2 more } The streamed output emitted by an apply patch tool call.
The unique ID of the apply patch tool call generated by the model.
status: "completed" or "failed"The status of the apply patch tool call output. One of completed or failed.
McpListTools = object { id, server_label, tools, 2 more } A list of tools available on an MCP server.
McpApprovalRequest = object { id, arguments, name, 2 more } A request for human approval of a tool invocation.
McpApprovalResponse = object { approval_request_id, approve, type, 2 more } A response to an MCP approval request.
McpCall = object { id, arguments, name, 6 more } An invocation of a tool on an MCP server.
CustomToolCallOutput = object { call_id, output, type, id } The output of a custom tool call from your code, being sent back to the model.
output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } The output from the custom tool call generated by your code. Can be a string or a list of output content.
OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } Text, image, or file output of the custom tool call.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
A system (or developer) message inserted into the model's context.
When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Model ID used to generate the response, like gpt-4o or o3. OpenAI
offers a wide range of models with different capabilities, performance
characteristics, and price points. Refer to the model guide
to browse and compare available models.
The unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
conversation state. Cannot be used in conjunction with conversation.
Reference to a prompt template and its variables. Learn more.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
prompt_cache_retention: optional "in-memory" or "24h"The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
gpt-5 and o-series models only
Configuration options for reasoning models.
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user, with a maximum length of 64 characters. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.
service_tier: optional "auto" or "default" or "flex" or 2 moreSpecifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below for more information.
stream_options: optional object { include_obfuscation } Options for streaming responses. Only set this when you set stream: true.
When true, stream obfuscation will be enabled. Stream obfuscation adds
random characters to an obfuscation field on streaming delta events to
normalize payload sizes as a mitigation to certain side-channel attacks.
These obfuscation fields are included by default, but add a small amount
of overhead to the data stream. You can set include_obfuscation to
false to optimize for bandwidth if you trust the network links between
your application and the OpenAI API.
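A minimal streaming request that disables obfuscation to save bandwidth might look like this (model name is a placeholder; note that stream_options is only valid alongside stream: true):

```python
# Streaming request with obfuscation disabled to optimize bandwidth.
request_body = {
    "model": "gpt-5",  # hypothetical model name, for illustration
    "input": "Hello",
    "stream": True,
    "stream_options": {"include_obfuscation": False},
}

def validate(body: dict) -> None:
    # stream_options should only be set when stream is true.
    if "stream_options" in body and not body.get("stream"):
        raise ValueError("stream_options requires stream: true")

validate(request_body)
```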
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
tool_choice: optional ToolChoiceOptions or ToolChoiceAllowed { mode, tools, type } or ToolChoiceTypes { type } or 5 moreHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
ToolChoiceOptions = "none" or "auto" or "required"Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
ToolChoiceAllowed = object { mode, tools, type } Constrains the tools available to the model to a pre-defined set.
mode: "auto" or "required"Constrains the tools available to the model to a pre-defined set.
auto allows the model to pick from among the allowed tools and generate a
message.
required requires the model to call one or more of the allowed tools.
ToolChoiceTypes = object { type } Indicates that the model should use a built-in tool to generate a response.
Learn more about built-in tools.
type: "file_search" or "web_search_preview" or "computer" or 5 moreThe type of hosted tool the model should use. Learn more about
built-in tools.
Allowed values are:
file_search
web_search_preview
computer
computer_use_preview
computer_use
code_interpreter
image_generation
ToolChoiceFunction = object { name, type } Use this option to force the model to call a specific function.
ToolChoiceMcp = object { server_label, type, name } Use this option to force the model to call a specific tool on a remote MCP server.
ToolChoiceCustom = object { name, type } Use this option to force the model to call a specific custom tool.
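The variants above can be illustrated as plain request values (discriminator strings and the hypothetical get_weather function are assumptions for illustration):

```python
# ToolChoiceOptions: a bare string.
choice_option = "required"  # "none" | "auto" | "required"

# ToolChoiceAllowed: constrain the model to a pre-defined set of tools.
choice_allowed = {
    "type": "allowed_tools",  # assumed discriminator value
    "mode": "auto",           # "auto" | "required"
    "tools": [{"type": "function", "name": "get_weather"}],
}

# ToolChoiceFunction: force one specific function call.
choice_function = {"type": "function", "name": "get_weather"}
```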
tools: optional array of object { name, parameters, strict, 3 more } or object { type, vector_store_ids, filters, 2 more } or object { type } or 12 moreAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
We support the following categories of tools:
- Built-in tools: Tools that are provided by OpenAI that extend the
model's capabilities, like web search
or file search. Learn more about
built-in tools.
- MCP Tools: Integrations with third-party systems via custom MCP servers
or predefined connectors such as Google Drive and SharePoint. Learn more about
MCP Tools.
- Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code with strongly typed arguments
and outputs. Learn more about
function calling. You can also use
custom tools to call your own code.
Function = object { name, parameters, strict, 3 more } Defines a function in your own code the model can choose to call. Learn more about function calling.
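A function tool entry in the tools array might look like this sketch (the get_weather function and its schema are hypothetical; the parameters field takes a JSON Schema):

```python
# A function tool definition: name, JSON Schema parameters, strict mode.
get_weather_tool = {
    "type": "function",
    "name": "get_weather",  # hypothetical function, for illustration
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
    "strict": True,  # enforce that generated arguments match the schema
}

request_body = {
    "model": "gpt-5",  # hypothetical model name
    "input": "What's the weather in Paris?",
    "tools": [get_weather_tool],
}
```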
FileSearch = object { type, vector_store_ids, filters, 2 more } A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
CompoundFilter = object { filters, type } Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: optional object { hybrid_search, ranker, score_threshold } Ranking options for search.
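Putting the pieces above together, a file_search tool with nested filters might be sketched like this (the attribute keys, operator strings, and vector store ID are assumptions for illustration):

```python
# ComparisonFilter: compare an attribute key to a value with an operator.
recent = {"key": "year", "type": "gte", "value": 2023}      # assumed operator name
by_region = {"key": "region", "type": "eq", "value": "EMEA"}

# CompoundFilter: combine filters with "and" / "or"; items may nest.
combined = {"type": "and", "filters": [recent, by_region]}

file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_abc123"],  # hypothetical vector store ID
    "filters": combined,
    "max_num_results": 10,              # must be between 1 and 50 inclusive
}
```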
Computer = object { type } A tool that controls a virtual computer. Learn more about the computer tool.
ComputerUsePreview = object { display_height, display_width, environment, type } A tool that controls a virtual computer. Learn more about the computer tool.
WebSearch = object { type, filters, search_context_size, user_location } Search the Internet for sources related to the prompt. Learn more about the
web search tool.
type: "web_search" or "web_search_2025_08_26"The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: optional "low" or "medium" or "high"High-level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { city, country, region, 2 more } The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
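A web_search tool entry with an approximate user location might look like this sketch (the "approximate" discriminator and sample location values are assumptions for illustration):

```python
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",  # "low" | "medium" | "high"; medium is default
    "user_location": {
        "type": "approximate",        # assumed discriminator value
        "city": "San Francisco",
        "region": "California",
        "country": "US",                      # two-letter ISO country code
        "timezone": "America/Los_Angeles",    # IANA timezone
    },
}
```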
Mcp = object { server_label, type, allowed_tools, 7 more } Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox:
connector_dropbox
- Gmail:
connector_gmail
- Google Calendar:
connector_googlecalendar
- Google Drive:
connector_googledrive
- Microsoft Teams:
connector_microsoftteams
- Outlook Calendar:
connector_outlookcalendar
- Outlook Email:
connector_outlookemail
- SharePoint:
connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
never: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
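Combining the MCP fields above, a tool entry that restricts available tools and only auto-approves read-only calls might be sketched as follows (server label, URL, and tool names are hypothetical):

```python
mcp_tool = {
    "type": "mcp",
    "server_label": "docs_search",            # hypothetical label
    "server_url": "https://example.com/mcp",  # or provide connector_id instead
    "allowed_tools": {"tool_names": ["search"], "read_only": True},
    # Approval policy: never require approval for read-only tools,
    # which implies everything else falls under the default (approval required).
    "require_approval": {"never": {"read_only": True}},
}
```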
CodeInterpreter = object { container, type } A tool that runs Python code to help generate a response to a prompt.
container: string or object { type, file_ids, memory_limit, network_policy } The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
CodeInterpreterToolAuto = object { type, file_ids, memory_limit, network_policy } Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
An optional list of uploaded files to make available to your code.
memory_limit: optional "1g" or "4g" or "16g" or "64g"The memory limit for the code interpreter container.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
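The container field accepts either shape described above; as a sketch (container and file IDs are hypothetical):

```python
# Container as a plain container ID string:
ci_by_id = {"type": "code_interpreter", "container": "cntr_abc123"}  # hypothetical ID

# Or as an auto-configured container with files and a memory limit:
ci_auto = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],  # hypothetical uploaded file ID
        "memory_limit": "4g",         # one of "1g", "4g", "16g", "64g"
    },
}
```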
ImageGeneration = object { type, action, background, 9 more } A tool that generates images using the GPT image models.
action: optional "generate" or "edit" or "auto"Whether to generate a new image or edit an existing image. Default: auto.
background: optional "transparent" or "opaque" or "auto"Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
input_fidelity: optional "high" or "low"Controls how much effort the model exerts to match the style and features (especially facial features) of input images. Supported only on gpt-image-1, gpt-image-1.5, and later models; not supported on gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: optional object { file_id, image_url } Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
model: optional string or "gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"The image generation model to use. Default: gpt-image-1.
Compression level for the output image. Default: 100.
output_format: optional "png" or "webp" or "jpeg"The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
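An image_generation tool entry combining the fields above might be sketched like this (the output_compression parameter name is an assumption inferred from the compression-level description):

```python
image_tool = {
    "type": "image_generation",
    "action": "auto",             # "generate" | "edit" | "auto" (default)
    "background": "transparent",  # "transparent" | "opaque" | "auto" (default)
    "model": "gpt-image-1",       # default model
    "output_format": "png",       # "png" | "webp" | "jpeg" (default png)
    "output_compression": 100,    # assumed parameter name; default 100
    "partial_images": 2,          # 0 (default) to 3 partial frames when streaming
}
```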
LocalShell = object { type } A tool that allows the model to execute shell commands in a local environment.
Shell = object { type, environment } A tool that allows the model to execute shell commands.
environment: optional ContainerAuto { type, file_ids, memory_limit, 2 more } or LocalEnvironment { type, skills } or ContainerReference { container_id, type }
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
An optional list of uploaded files to make available to your code.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type } An optional list of skills referenced by id or inline data.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools
Namespace = object { description, name, tools, type } Groups function/custom tools under a shared namespace.
tools: array of object { name, type, defer_loading, 3 more } or object { name, type, defer_loading, 2 more } The function/custom tools available inside this namespace.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools
ToolSearch = object { type, description, execution, parameters } Hosted or BYOT tool search configuration for deferred tools.
WebSearchPreview = object { type, search_content_types, search_context_size, user_location } This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: "web_search_preview" or "web_search_preview_2025_03_11"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: optional "low" or "medium" or "high"High-level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { type, city, country, 2 more } The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
truncation: optional "auto" or "disabled"The truncation strategy to use for the model response.
auto: If the input to this Response exceeds
the model's context window size, the model will truncate the
response to fit the context window by dropping items from the beginning of the conversation.
disabled (default): If the input size will exceed the context window
size for a model, the request will fail with a 400 error.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
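Following the recommendation above to hash rather than send raw identifiers, a safety_identifier can be derived like this (a SHA-256 hex digest is exactly 64 characters, matching the maximum length; the prompt_cache_key value is a hypothetical example):

```python
import hashlib

def safety_identifier_for(email: str) -> str:
    """Hash an email so no raw identifying information is sent.

    A SHA-256 hex digest is 64 characters, the maximum allowed length.
    """
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

request_body = {
    "model": "gpt-5",  # hypothetical model name
    "input": "Hi",
    "safety_identifier": safety_identifier_for("jane@example.com"),
    "prompt_cache_key": "support-bot-v2",  # bucket similar requests for cache hits
}
```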
ResponsesServerEvent = ResponseAudioDeltaEvent { delta, sequence_number, type } or ResponseAudioDoneEvent { sequence_number, type } or ResponseAudioTranscriptDeltaEvent { delta, sequence_number, type } or 50 moreServer events emitted by the Responses WebSocket server.
ResponseAudioDeltaEvent = object { delta, sequence_number, type } Emitted when there is a partial audio response.
ResponseAudioDoneEvent = object { sequence_number, type } Emitted when the audio response is complete.
ResponseAudioTranscriptDeltaEvent = object { delta, sequence_number, type } Emitted when there is a partial transcript of audio.
ResponseAudioTranscriptDoneEvent = object { sequence_number, type } Emitted when the full audio transcript is completed.
ResponseCodeInterpreterCallCodeDeltaEvent = object { delta, item_id, output_index, 2 more } Emitted when a partial code snippet is streamed by the code interpreter.
ResponseCodeInterpreterCallCodeDoneEvent = object { code, item_id, output_index, 2 more } Emitted when the code snippet is finalized by the code interpreter.
ResponseCodeInterpreterCallCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when the code interpreter call is completed.
ResponseCodeInterpreterCallInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when a code interpreter call is in progress.
ResponseCodeInterpreterCallInterpretingEvent = object { item_id, output_index, sequence_number, type } Emitted when the code interpreter is actively interpreting the code snippet.
ResponseCompletedEvent = object { response, sequence_number, type } Emitted when the model response is complete.
ResponseContentPartAddedEvent = object { content_index, item_id, output_index, 3 more } Emitted when a new content part is added.
part: ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } or object { text, type } The content part that was added.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
ResponseContentPartDoneEvent = object { content_index, item_id, output_index, 3 more } Emitted when a content part is done.
part: ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } or object { text, type } The content part that is done.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
ResponseCreatedEvent = object { response, sequence_number, type } An event that is emitted when a response is created.
ResponseFileSearchCallCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when a file search call is completed (results found).
ResponseFileSearchCallInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when a file search call is initiated.
ResponseFileSearchCallSearchingEvent = object { item_id, output_index, sequence_number, type } Emitted when a file search is currently searching.
ResponseFunctionCallArgumentsDeltaEvent = object { delta, item_id, output_index, 2 more } Emitted when there is a partial function-call arguments delta.
ResponseFunctionCallArgumentsDoneEvent = object { arguments, item_id, name, 3 more } Emitted when function-call arguments are finalized.
ResponseInProgressEvent = object { response, sequence_number, type } Emitted when the response is in progress.
ResponseFailedEvent = object { response, sequence_number, type } An event that is emitted when a response fails.
ResponseIncompleteEvent = object { response, sequence_number, type } An event that is emitted when a response finishes as incomplete.
ResponseOutputItemAddedEvent = object { item, output_index, sequence_number, type } Emitted when a new output item is added.
ResponseOutputItemDoneEvent = object { item, output_index, sequence_number, type } Emitted when an output item is marked done.
ResponseReasoningSummaryPartAddedEvent = object { item_id, output_index, part, 3 more } Emitted when a new reasoning summary part is added.
ResponseReasoningSummaryPartDoneEvent = object { item_id, output_index, part, 3 more } Emitted when a reasoning summary part is completed.
ResponseReasoningSummaryTextDeltaEvent = object { delta, item_id, output_index, 3 more } Emitted when a delta is added to a reasoning summary text.
ResponseReasoningSummaryTextDoneEvent = object { item_id, output_index, sequence_number, 3 more } Emitted when a reasoning summary text is completed.
ResponseReasoningTextDeltaEvent = object { content_index, delta, item_id, 3 more } Emitted when a delta is added to a reasoning text.
ResponseReasoningTextDoneEvent = object { content_index, item_id, output_index, 3 more } Emitted when a reasoning text is completed.
ResponseRefusalDeltaEvent = object { content_index, delta, item_id, 3 more } Emitted when there is a partial refusal text.
ResponseRefusalDoneEvent = object { content_index, item_id, output_index, 3 more } Emitted when refusal text is finalized.
ResponseTextDeltaEvent = object { content_index, delta, item_id, 4 more } Emitted when there is an additional text delta.
ResponseTextDoneEvent = object { content_index, item_id, logprobs, 4 more } Emitted when text content is finalized.
ResponseWebSearchCallCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when a web search call is completed.
ResponseWebSearchCallInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when a web search call is initiated.
ResponseWebSearchCallSearchingEvent = object { item_id, output_index, sequence_number, type } Emitted when a web search call is executing.
ResponseImageGenCallCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when an image generation tool call has completed and the final image is available.
ResponseImageGenCallGeneratingEvent = object { item_id, output_index, sequence_number, type } Emitted when an image generation tool call is actively generating an image (intermediate state).
ResponseImageGenCallInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when an image generation tool call is in progress.
ResponseImageGenCallPartialImageEvent = object { item_id, output_index, partial_image_b64, 3 more } Emitted when a partial image is available during image generation streaming.
ResponseMcpCallArgumentsDeltaEvent = object { delta, item_id, output_index, 2 more } Emitted when there is a delta (partial update) to the arguments of an MCP tool call.
ResponseMcpCallArgumentsDoneEvent = object { arguments, item_id, output_index, 2 more } Emitted when the arguments for an MCP tool call are finalized.
ResponseMcpCallCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when an MCP tool call has completed successfully.
ResponseMcpCallFailedEvent = object { item_id, output_index, sequence_number, type } Emitted when an MCP tool call has failed.
ResponseMcpCallInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when an MCP tool call is in progress.
ResponseMcpListToolsCompletedEvent = object { item_id, output_index, sequence_number, type } Emitted when the list of available MCP tools has been successfully retrieved.
ResponseMcpListToolsFailedEvent = object { item_id, output_index, sequence_number, type } Emitted when the attempt to list available MCP tools has failed.
ResponseMcpListToolsInProgressEvent = object { item_id, output_index, sequence_number, type } Emitted when the system is in the process of retrieving the list of available MCP tools.
ResponseOutputTextAnnotationAddedEvent = object { annotation, annotation_index, content_index, 4 more } Emitted when an annotation is added to output text content.
ResponseQueuedEvent = object { response, sequence_number, type } Emitted when a response is queued and waiting to be processed.
ResponseCustomToolCallInputDeltaEvent = object { delta, item_id, output_index, 2 more } Event representing a delta (partial update) to the input of a custom tool call.
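A client typically dispatches on each event's type field. A minimal sketch, assuming the transport layer (SSE or WebSocket) already yields parsed event dicts and that the type strings follow the dotted naming used for these events:

```python
# Minimal event loop: accumulate text deltas, stop on completion.
# The type strings below are assumptions based on the event names above.
def handle_events(events):
    text = []
    for event in events:
        etype = event.get("type")
        if etype == "response.output_text.delta":
            text.append(event["delta"])          # incremental text chunk
        elif etype == "response.completed":
            return "".join(text), event["response"]  # final Response object
    return "".join(text), None  # stream ended without a completed event
```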
ToolChoiceAllowed = object { mode, tools, type } Constrains the tools available to the model to a pre-defined set.
mode: "auto" or "required"Constrains the tools available to the model to a pre-defined set.
auto allows the model to pick from among the allowed tools and generate a
message.
required requires the model to call one or more of the allowed tools.
ToolChoiceOptions = "none" or "auto" or "required"Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
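The two tool_choice shapes above can be sketched as plain request payloads; this is a minimal illustration, and "get_weather" is a hypothetical function tool assumed to be defined elsewhere in the request:

```python
# Plain option string: "none", "auto", or "required".
tool_choice_option = "required"

# Allowed-tools constraint: restrict the model to a pre-defined subset.
tool_choice_allowed = {
    "type": "allowed_tools",
    "mode": "auto",  # "auto" may still answer with a message; "required" must call a tool
    "tools": [
        {"type": "function", "name": "get_weather"},  # hypothetical function tool
        {"type": "image_generation"},
    ],
}
```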
ToolChoiceTypes = object { type } Indicates that the model should use a built-in tool to generate a response.
Learn more about built-in tools.
type: "file_search" or "web_search_preview" or "computer" or 5 moreThe type of hosted tool the model should use. Learn more about
built-in tools.
Allowed values are:
file_search
web_search_preview
computer
computer_use_preview
computer_use
code_interpreter
image_generation
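A hosted tool choice is simply an object naming one of the built-in tool types listed above; a minimal sketch:

```python
# Force the model to use a built-in tool for this response.
tool_choice_hosted = {"type": "file_search"}

# The allowed type values, per the enum above.
hosted_tool_types = {
    "file_search", "web_search_preview", "computer", "computer_use_preview",
    "computer_use", "code_interpreter", "image_generation",
}
```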
Responses › Input Items
List input items
Models
ResponseItemList = object { data, first_id, has_more, 2 more } A list of Response items.
data: array of ResponseInputMessageItem { id, content, role, 2 more } or ResponseOutputMessage { id, content, role, 3 more } or object { id, queries, status, 2 more } or 23 moreA list of items used to generate this response.
ResponseInputMessageItem = object { id, content, role, 2 more }
ResponseOutputMessage = object { id, content, role, 3 more } An output message from the model.
content: array of ResponseOutputText { annotations, logprobs, text, type } or ResponseOutputRefusal { refusal, type } The content of the output message.
ResponseOutputText = object { annotations, logprobs, text, type } A text output from the model.
annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type } The annotations of the text output.
URLCitation = object { end_index, start_index, title, 2 more } A citation for a web resource used to generate a model response.
status: "in_progress" or "completed" or "incomplete"The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
phase: optional "commentary" or "final_answer"Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
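The phase-preservation rule above can be sketched as follows; the item shapes are assumed from the schema in this section, and the texts and IDs are examples:

```python
# Prior turn's assistant output, as returned by the API. "phase" labels
# intermediate commentary vs. the final answer.
previous_output = [
    {"type": "message", "role": "assistant", "phase": "commentary",
     "content": [{"type": "output_text", "text": "Checking the calendar first...", "annotations": []}]},
    {"type": "message", "role": "assistant", "phase": "final_answer",
     "content": [{"type": "output_text", "text": "Your meeting is at 3pm.", "annotations": []}]},
]

# When building the follow-up request, resend assistant messages unchanged,
# keeping their phase field intact rather than stripping it.
next_input = previous_output + [
    {"type": "message", "role": "user",
     "content": [{"type": "input_text", "text": "Move it to 4pm."}]},
]
```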
FileSearchCall = object { id, queries, status, 2 more } The results of a file search tool call. See the
file search guide for more information.
status: "in_progress" or "searching" or "completed" or 2 moreThe status of the file search tool call. One of in_progress,
searching, completed, incomplete, or failed.
results: optional array of object { attributes, file_id, filename, 2 more } The results of the file search tool call.
attributes: optional map[string or number or boolean]Set of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard. Keys are strings
with a maximum length of 64 characters. Values are strings with a maximum
length of 512 characters, booleans, or numbers.
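The attributes constraints above can be checked client-side with a small validator; this is a hypothetical helper, not part of any SDK:

```python
def validate_attributes(attributes: dict) -> None:
    """Check the documented limits: at most 16 pairs, keys up to 64 chars,
    string values up to 512 chars; values may also be booleans or numbers."""
    if len(attributes) > 16:
        raise ValueError("at most 16 key-value pairs allowed")
    for key, value in attributes.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid key: {key!r}")
        if isinstance(value, str):
            if len(value) > 512:
                raise ValueError(f"string value too long for key {key!r}")
        elif not isinstance(value, (bool, int, float)):
            raise ValueError(f"unsupported value type for key {key!r}")

# Example attribute set (names are illustrative).
validate_attributes({"department": "legal", "year": 2024, "confidential": True})
```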
ComputerCall = object { id, call_id, pending_safety_checks, 4 more } A tool call to a computer use tool. See the
computer use guide for more information.
pending_safety_checks: array of object { id, code, message } The pending safety checks for the computer call.
status: "in_progress" or "completed" or "incomplete"The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
ComputerCallOutput = object { id, call_id, output, 4 more }
status: "completed" or "incomplete" or "failed" or "in_progress"The status of the message input. One of in_progress, completed,
incomplete, or failed. Populated when input items are returned via API.
WebSearchCall = object { id, action, status, type } The results of a web search tool call. See the
web search guide for more information.
action: object { query, type, queries, sources } or object { type, url } or object { pattern, type, url } An object describing the specific action taken in this web search call.
Includes details on how the model used the web (search, open_page, find_in_page).
FunctionCall = object { id, arguments, call_id, 5 more }
FunctionCallOutput = object { id, call_id, output, 3 more }
output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } The output from the function call generated by your code.
Can be a string or a list of output content.
OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } Text, image, or file output of the function call.
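A function_call_output item in the two accepted shapes; a sketch with placeholder IDs and values, where call_id must match the model's earlier function call:

```python
# Structured output: a list of text / image / file content parts.
function_call_output = {
    "type": "function_call_output",
    "call_id": "call_abc123",  # placeholder; copy from the model's function call
    "output": [
        {"type": "input_text", "text": "72°F and sunny"},
        {"type": "input_image", "detail": "auto",
         "image_url": "https://example.com/radar.png"},
    ],
}

# A plain string is also accepted.
simple_output = {
    "type": "function_call_output",
    "call_id": "call_abc123",
    "output": "72°F and sunny",
}
```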
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
ToolSearchCall = object { id, arguments, call_id, 4 more }
ToolSearchOutput = object { id, call_id, execution, 4 more }
status: "in_progress" or "completed" or "incomplete"The status of the tool search output item that was recorded.
tools: array of object { name, parameters, strict, 3 more } or object { type, vector_store_ids, filters, 2 more } or object { type } or 12 moreThe loaded tool definitions returned by tool search.
Function = object { name, parameters, strict, 3 more } Defines a function in your own code the model can choose to call. Learn more about function calling.
FileSearch = object { type, vector_store_ids, filters, 2 more } A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
CompoundFilter = object { filters, type } Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
ComparisonFilter = object { key, type, value } A filter used to compare a specified attribute key to a given value using a defined comparison operation.
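A CompoundFilter combining two ComparisonFilters, following the shapes above; the attribute names are hypothetical:

```python
# "and" requires both comparisons to hold; "or" requires at least one.
file_search_filter = {
    "type": "and",
    "filters": [
        {"type": "eq", "key": "department", "value": "legal"},
        {"type": "gte", "key": "year", "value": 2023},
    ],
}
```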
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: optional object { hybrid_search, ranker, score_threshold } Ranking options for search.
Computer = object { type } A tool that controls a virtual computer. Learn more about the computer tool.
ComputerUsePreview = object { display_height, display_width, environment, type } A tool that controls a virtual computer. Learn more about the computer tool.
WebSearch = object { type, filters, search_context_size, user_location } Search the Internet for sources related to the prompt. Learn more about the
web search tool.
type: "web_search" or "web_search_2025_08_26"The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { city, country, region, 2 more } The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
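A web_search tool definition using the fields above; the location values are examples, and the "approximate" location type is an assumption based on the field shape:

```python
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",  # "low", "medium" (default), or "high"
    "user_location": {
        "type": "approximate",
        "city": "San Francisco",
        "country": "US",                       # two-letter ISO country code
        "timezone": "America/Los_Angeles",     # IANA timezone
    },
}
```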
Mcp = object { server_label, type, allowed_tools, 7 more } Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox:
connector_dropbox
- Gmail:
connector_gmail
- Google Calendar:
connector_googlecalendar
- Google Drive:
connector_googledrive
- Microsoft Teams:
connector_microsoftteams
- Outlook Calendar:
connector_outlookcalendar
- Outlook Email:
connector_outlookemail
- SharePoint:
connector_sharepoint
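An MCP tool entry that uses a service connector instead of a server URL, sketched from the fields above; the OAuth token is a placeholder that your application must obtain through its own authorization flow:

```python
mcp_connector_tool = {
    "type": "mcp",
    "server_label": "gdrive",                    # label of your choosing
    "connector_id": "connector_googledrive",     # instead of server_url
    "authorization": "<oauth-access-token>",     # placeholder; supply a real token
    "require_approval": "never",
}
```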
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
never: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
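The require_approval filter form can be sketched as follows; the tool names and server URL are hypothetical:

```python
# Always require approval for tools that modify data; never for the two
# read-only tools named here.
require_approval = {
    "always": {"read_only": False},
    "never": {"read_only": True, "tool_names": ["search_docs", "fetch_page"]},
}

mcp_tool = {
    "type": "mcp",
    "server_label": "docs",
    "server_url": "https://example.com/mcp",  # placeholder URL
    "require_approval": require_approval,
}
```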
CodeInterpreter = object { container, type } A tool that runs Python code to help generate a response to a prompt.
container: string or object { type, file_ids, memory_limit, network_policy } The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
CodeInterpreterToolAuto = object { type, file_ids, memory_limit, network_policy } Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
An optional list of uploaded files to make available to your code.
memory_limit: optional "1g" or "4g" or "16g" or "64g"The memory limit for the code interpreter container.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
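The two container forms described above, sketched with placeholder IDs:

```python
# Reference an existing container by ID.
ci_by_id = {"type": "code_interpreter", "container": "cntr_abc123"}

# Or let the API create a container, with files and a memory limit.
ci_auto = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],  # placeholder uploaded-file ID
        "memory_limit": "4g",         # one of "1g", "4g", "16g", "64g"
    },
}
```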
ImageGeneration = object { type, action, background, 9 more } A tool that generates images using the GPT image models.
action: optional "generate" or "edit" or "auto"Whether to generate a new image or edit an existing image. Default: auto.
background: optional "transparent" or "opaque" or "auto"Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
input_fidelity: optional "high" or "low"Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: optional object { file_id, image_url } Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
model: optional string or "gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"The image generation model to use. Default: gpt-image-1.
Compression level for the output image. Default: 100.
output_format: optional "png" or "webp" or "jpeg"The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
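An image_generation tool combining the optional fields above; the `partial_images` field name is assumed for the streaming-partials setting:

```python
image_tool = {
    "type": "image_generation",
    "background": "transparent",   # "transparent", "opaque", or "auto"
    "output_format": "png",        # "png", "webp", or "jpeg"
    "input_fidelity": "high",      # gpt-image-1 / gpt-image-1.5; not gpt-image-1-mini
    "partial_images": 2,           # streaming partials, 0 (default) to 3
}
```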
LocalShell = object { type } A tool that allows the model to execute shell commands in a local environment.
Shell = object { type, environment } A tool that allows the model to execute shell commands.
environment: optional ContainerAuto { type, file_ids, memory_limit, 2 more } or LocalEnvironment { type, skills } or ContainerReference { container_id, type }
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
An optional list of uploaded files to make available to your code.
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets } Network access policy for the container.
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type } An optional list of skills referenced by id or inline data.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools
Namespace = object { description, name, tools, type } Groups function/custom tools under a shared namespace.
tools: array of object { name, type, defer_loading, 3 more } or object { name, type, defer_loading, 2 more } The function/custom tools available inside this namespace.
Custom = object { name, type, defer_loading, 2 more } A custom tool that processes input using a specified format. Learn more about custom tools
ToolSearch = object { type, description, execution, parameters } Hosted or BYOT tool search configuration for deferred tools.
WebSearchPreview = object { type, search_content_types, search_context_size, user_location } This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: "web_search_preview" or "web_search_preview_2025_03_11"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: optional "low" or "medium" or "high"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: optional object { type, city, country, 2 more } The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Reasoning = object { id, summary, type, 3 more } A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Compaction = object { id, encrypted_content, type, created_by } A compaction item generated by the v1/responses/compact API.
ImageGenerationCall = object { id, result, status, type } An image generation request made by the model.
CodeInterpreterCall = object { id, code, container_id, 3 more } A tool call to run code.
outputs: array of object { logs, type } or object { type, url } The outputs generated by the code interpreter, such as logs or images.
Can be null if no outputs are available.
LocalShellCall = object { id, action, call_id, 2 more } A tool call to run a command on the local shell.
LocalShellCallOutput = object { id, output, type, status } The output of a local shell tool call.
ShellCall = object { id, action, call_id, 4 more } A tool call that executes one or more shell commands in a managed environment.
action: object { commands, max_output_length, timeout_ms } The shell commands and limits that describe how to run the tool call.
Represents the use of a local environment to perform shell actions.
ShellCallOutput = object { id, call_id, max_output_length, 4 more } The output of a shell tool call that was emitted.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
output: array of object { outcome, stderr, stdout, created_by } An array of shell call output contents
ApplyPatchCall = object { id, call_id, operation, 3 more } A tool call that applies file diffs by creating, deleting, or updating files.
operation: object { diff, path, type } or object { path, type } or object { diff, path, type } One of the create_file, delete_file, or update_file operations applied via apply_patch.
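The three apply_patch operation shapes named above, sketched with example paths and diffs:

```python
# create_file and update_file carry a diff; delete_file carries only a path.
create_op = {"type": "create_file", "path": "src/new.py", "diff": "+print('hello')\n"}
delete_op = {"type": "delete_file", "path": "src/old.py"}
update_op = {"type": "update_file", "path": "src/app.py", "diff": "-x = 1\n+x = 2\n"}
```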
ApplyPatchCallOutput = object { id, call_id, status, 3 more } The output emitted by an apply patch tool call.
McpListTools = object { id, server_label, tools, 2 more } A list of tools available on an MCP server.
McpApprovalRequest = object { id, arguments, name, 2 more } A request for human approval of a tool invocation.
McpApprovalResponse = object { id, approval_request_id, approve, 2 more } A response to an MCP approval request.
McpCall = object { id, arguments, name, 6 more } An invocation of a tool on an MCP server.
CustomToolCall = object { id, call_id, input, 5 more }
CustomToolCallOutput = object { id, call_id, output, 3 more }
output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } The output from the custom tool call generated by your code.
Can be a string or a list of output content.
OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more } Text, image, or file output of the custom tool call.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
status: "in_progress" or "completed" or "incomplete"The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.