Conversations

Manage conversations and conversation items.

Create a conversation
POST/conversations
Retrieve a conversation
GET/conversations/{conversation_id}
Update a conversation
POST/conversations/{conversation_id}
Delete a conversation
DELETE/conversations/{conversation_id}
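
The four endpoints above can be sketched as a small helper that maps each logical operation to its HTTP method and path. This is a minimal sketch of the routing only; how you attach authentication and send the request depends on your HTTP client and is not shown.

```python
def conversation_request(op, conversation_id=None):
    """Map a conversation operation to its (HTTP method, path) pair."""
    if op == "create":
        return ("POST", "/conversations")
    if conversation_id is None:
        raise ValueError(f"conversation_id required for {op}")
    path = f"/conversations/{conversation_id}"
    return {
        "retrieve": ("GET", path),
        "update": ("POST", path),
        "delete": ("DELETE", path),
    }[op]
```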
ModelsExpand Collapse
ComputerScreenshotContent = object { detail, file_id, image_url, type }

A screenshot of a computer.

detail: "low" or "high" or "auto" or "original"

The detail level of the screenshot image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
file_id: string

The identifier of an uploaded file that contains the screenshot.

image_url: string

The URL of the screenshot image.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

Conversation = object { id, created_at, metadata, object }
id: string

The unique ID of the conversation.

created_at: number

The time at which the conversation was created, measured in seconds since the Unix epoch.

metadata: unknown

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

object: "conversation"

The object type, which is always conversation.
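
A hypothetical Conversation payload matching the fields above, plus a check of the documented metadata limits (at most 16 pairs, 64-character keys, 512-character values). The ID and timestamp values are invented for illustration.

```python
conversation = {
    "id": "conv_abc123",       # hypothetical ID
    "created_at": 1700000000,  # seconds since the Unix epoch
    "metadata": {"project": "demo"},
    "object": "conversation",
}

def validate_metadata(metadata):
    """Enforce the documented limits: <= 16 pairs, 64-char keys, 512-char values."""
    if len(metadata) > 16:
        return False
    return all(
        isinstance(k, str) and len(k) <= 64
        and isinstance(v, str) and len(v) <= 512
        for k, v in metadata.items()
    )
```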

ConversationDeleted = object { id, deleted, object }
id: string
deleted: boolean
object: "conversation.deleted"
ConversationDeletedResource = object { id, deleted, object }
id: string
deleted: boolean
object: "conversation.deleted"
Message = object { id, content, role, 2 more }

A message to or from the model.

id: string

The unique ID of the message.

content: array of ResponseInputText { text, type } or ResponseOutputText { annotations, logprobs, text, type } or TextContent { text, type } or 6 more

The content of the message.

One of the following:
ResponseInputText = object { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseOutputText = object { annotations, logprobs, text, type }

A text output from the model.

annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type }

The annotations of the text output.

One of the following:
FileCitation = object { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation = object { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation = object { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath = object { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.
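
The citation index fields above can be used to recover the cited span of a message. In this sketch the text, indices, and URL are invented for illustration, and end_index is treated as the index of the last cited character, per the field descriptions.

```python
message_text = "See the climate report for details."
citation = {
    "type": "url_citation",
    "start_index": 8,
    "end_index": 21,                      # index of the last cited character
    "title": "Climate report",            # hypothetical
    "url": "https://example.com/report",  # hypothetical
}

# Slice end is +1 because end_index points at the last character itself.
cited = message_text[citation["start_index"]:citation["end_index"] + 1]
```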

logprobs: array of object { token, bytes, logprob, top_logprobs }
token: string
bytes: array of number
logprob: number
top_logprobs: array of object { token, bytes, logprob }
token: string
bytes: array of number
logprob: number
text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

TextContent = object { text, type }

A text content.

text: string
type: "text"
SummaryTextContent = object { text, type }

A summary text from the model.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

ReasoningText = object { text, type }

Reasoning text from the model.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

ResponseOutputRefusal = object { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

ResponseInputImage = object { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" or "high" or "auto" or "original"

The detail level of the image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
type: "input_image"

The type of the input item. Always input_image.

file_id: optional string

The ID of the file to be sent to the model.

image_url: optional string

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ComputerScreenshotContent = object { detail, file_id, image_url, type }

A screenshot of a computer.

detail: "low" or "high" or "auto" or "original"

The detail level of the screenshot image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
file_id: string

The identifier of an uploaded file that contains the screenshot.

image_url: string

The URL of the screenshot image.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

ResponseInputFile = object { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data: optional string

The content of the file to be sent to the model.

file_id: optional string

The ID of the file to be sent to the model.

file_url: optional string

The URL of the file to be sent to the model.

filename: optional string

The name of the file to be sent to the model.

role: "unknown" or "user" or "assistant" or 5 more

The role of the message. One of unknown, user, assistant, system, critic, discriminator, developer, or tool.

One of the following:
"unknown"
"user"
"assistant"
"system"
"critic"
"discriminator"
"developer"
"tool"
status: "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the message. Always set to message.

SummaryTextContent = object { text, type }

A summary text from the model.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

TextContent = object { text, type }

A text content.

text: string
type: "text"

ConversationsItems

Manage conversations and conversation items.

Create items
POST/conversations/{conversation_id}/items
List items
GET/conversations/{conversation_id}/items
Retrieve an item
GET/conversations/{conversation_id}/items/{item_id}
Delete an item
DELETE/conversations/{conversation_id}/items/{item_id}
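
A minimal user message item of the kind these endpoints accept, built from the Message fields documented below; the role and text values are invented for illustration, and only the single input_text content shape is shown.

```python
def make_text_message(role, text):
    """Build a message item whose content is a single input_text part."""
    return {
        "type": "message",
        "role": role,
        "content": [{"type": "input_text", "text": text}],
    }
```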
Models
ConversationItem = Message { id, content, role, 2 more } or object { id, arguments, call_id, 5 more } or object { id, call_id, output, 3 more } or 22 more

A single item within a conversation. The set of possible types is the same as the output type of a Response object.

One of the following:
Message = object { id, content, role, 2 more }

A message to or from the model.

id: string

The unique ID of the message.

content: array of ResponseInputText { text, type } or ResponseOutputText { annotations, logprobs, text, type } or TextContent { text, type } or 6 more

The content of the message.

One of the following:
ResponseInputText = object { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseOutputText = object { annotations, logprobs, text, type }

A text output from the model.

annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type }

The annotations of the text output.

One of the following:
FileCitation = object { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation = object { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation = object { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath = object { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

logprobs: array of object { token, bytes, logprob, top_logprobs }
token: string
bytes: array of number
logprob: number
top_logprobs: array of object { token, bytes, logprob }
token: string
bytes: array of number
logprob: number
text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

TextContent = object { text, type }

A text content.

text: string
type: "text"
SummaryTextContent = object { text, type }

A summary text from the model.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

ReasoningText = object { text, type }

Reasoning text from the model.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

ResponseOutputRefusal = object { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

ResponseInputImage = object { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" or "high" or "auto" or "original"

The detail level of the image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
type: "input_image"

The type of the input item. Always input_image.

file_id: optional string

The ID of the file to be sent to the model.

image_url: optional string

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ComputerScreenshotContent = object { detail, file_id, image_url, type }

A screenshot of a computer.

detail: "low" or "high" or "auto" or "original"

The detail level of the screenshot image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
file_id: string

The identifier of an uploaded file that contains the screenshot.

image_url: string

The URL of the screenshot image.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

ResponseInputFile = object { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data: optional string

The content of the file to be sent to the model.

file_id: optional string

The ID of the file to be sent to the model.

file_url: optional string

The URL of the file to be sent to the model.

filename: optional string

The name of the file to be sent to the model.

role: "unknown" or "user" or "assistant" or 5 more

The role of the message. One of unknown, user, assistant, system, critic, discriminator, developer, or tool.

One of the following:
"unknown"
"user"
"assistant"
"system"
"critic"
"discriminator"
"developer"
"tool"
status: "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the message. Always set to message.

FunctionCall = object { id, arguments, call_id, 5 more }
id: string

The unique ID of the function tool call.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

status: "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "function_call"

The type of the function tool call. Always function_call.

created_by: optional string

The identifier of the actor that created the item.

namespace: optional string

The namespace of the function to run.

FunctionCallOutput = object { id, call_id, output, 3 more }
id: string

The unique ID of the function call tool output.

call_id: string

The unique ID of the function tool call generated by the model.

output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }

The output from the function call generated by your code. Can be a string or a list of output content.

One of the following:
StringOutput = string

A string of the output of the function call.

OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }

Text, image, or file output of the function call.

One of the following:
ResponseInputText = object { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage = object { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" or "high" or "auto" or "original"

The detail level of the image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
type: "input_image"

The type of the input item. Always input_image.

file_id: optional string

The ID of the file to be sent to the model.

image_url: optional string

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile = object { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data: optional string

The content of the file to be sent to the model.

file_id: optional string

The ID of the file to be sent to the model.

file_url: optional string

The URL of the file to be sent to the model.

filename: optional string

The name of the file to be sent to the model.

status: "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "function_call_output"

The type of the function tool call output. Always function_call_output.

created_by: optional string

The identifier of the actor that created the item.
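
A sketch of how a function_call item pairs with its function_call_output: the output echoes the call's call_id so the two can be matched. The IDs and function name are invented for illustration, and the string form of output is used.

```python
import json

call = {
    "type": "function_call",
    "call_id": "call_001",     # hypothetical
    "name": "get_weather",     # hypothetical function
    "arguments": json.dumps({"city": "Paris"}),
}

def make_output(call, result):
    """Echo the call_id back so the output is matched to its call."""
    return {
        "type": "function_call_output",
        "call_id": call["call_id"],
        "output": json.dumps(result),
    }
```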

FileSearchCall = object { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: array of string

The queries used to search for files.

status: "in_progress" or "searching" or "completed" or 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

One of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results: optional array of object { attributes, file_id, filename, 2 more }

The results of the file search tool call.

attributes: optional map[string or number or boolean]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

One of the following:
string
number
boolean
file_id: optional string

The unique ID of the file.

filename: optional string

The name of the file.

score: optional number

The relevance score of the file - a value between 0 and 1.

format: float
text: optional string

The text that was retrieved from the file.

WebSearchCall = object { id, action, status, type }

The results of a web search tool call. See the web search guide for more information.

id: string

The unique ID of the web search tool call.

action: object { query, type, queries, sources } or object { type, url } or object { pattern, type, url }

An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find_in_page).

One of the following:
Search = object { query, type, queries, sources }

Action type "search" - Performs a web search query.

query: string

[DEPRECATED] The search query.

type: "search"

The action type.

queries: optional array of string

The search queries.

sources: optional array of object { type, url }

The sources used in the search.

type: "url"

The type of source. Always url.

url: string

The URL of the source.

OpenPage = object { type, url }

Action type "open_page" - Opens a specific URL from search results.

type: "open_page"

The action type.

url: optional string

The URL opened by the model.

format: uri
FindInPage = object { pattern, type, url }

Action type "find_in_page": Searches for a pattern within a loaded page.

pattern: string

The pattern or text to search for within the page.

type: "find_in_page"

The action type.

url: string

The URL of the page searched for the pattern.

format: uri
status: "in_progress" or "searching" or "completed" or "failed"

The status of the web search tool call.

One of the following:
"in_progress"
"searching"
"completed"
"failed"
type: "web_search_call"

The type of the web search tool call. Always web_search_call.

ImageGenerationCall = object { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string

The generated image encoded in base64.

status: "in_progress" or "completed" or "generating" or "failed"

The status of the image generation call.

One of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ComputerCall = object { id, call_id, pending_safety_checks, 4 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: array of object { id, code, message }

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code: optional string

The type of the pending safety check.

message: optional string

Details about the pending safety check.

status: "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.

action: optional ComputerAction

A click action.

actions: optional ComputerActionList { Click, DoubleClick, Drag, 6 more }

Flattened batched actions for computer_use. Each action includes a type discriminator and action-specific fields.

ComputerCallOutput = object { id, call_id, output, 4 more }
id: string

The unique ID of the computer call tool output.

call_id: string

The ID of the computer tool call that produced the output.

output: ResponseComputerToolCallOutputScreenshot { type, file_id, image_url }

A computer screenshot image used with the computer use tool.

status: "completed" or "incomplete" or "failed" or "in_progress"

The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.

One of the following:
"completed"
"incomplete"
"failed"
"in_progress"
type: "computer_call_output"

The type of the computer tool call output. Always computer_call_output.

acknowledged_safety_checks: optional array of object { id, code, message }

The safety checks reported by the API that have been acknowledged by the developer.

id: string

The ID of the pending safety check.

code: optional string

The type of the pending safety check.

message: optional string

Details about the pending safety check.

created_by: optional string

The identifier of the actor that created the item.

ToolSearchCall = object { id, arguments, call_id, 4 more }
id: string

The unique ID of the tool search call item.

arguments: unknown

Arguments used for the tool search call.

call_id: string

The unique ID of the tool search call generated by the model.

execution: "server" or "client"

Whether tool search was executed by the server or by the client.

One of the following:
"server"
"client"
status: "in_progress" or "completed" or "incomplete"

The status of the tool search call item that was recorded.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "tool_search_call"

The type of the item. Always tool_search_call.

created_by: optional string

The identifier of the actor that created the item.

ToolSearchOutput = object { id, call_id, execution, 4 more }
id: string

The unique ID of the tool search output item.

call_id: string

The unique ID of the tool search call generated by the model.

execution: "server" or "client"

Whether tool search was executed by the server or by the client.

One of the following:
"server"
"client"
status: "in_progress" or "completed" or "incomplete"

The status of the tool search output item that was recorded.

One of the following:
"in_progress"
"completed"
"incomplete"
tools: array of object { name, parameters, strict, 3 more } or object { type, vector_store_ids, filters, 2 more } or object { type } or 12 more

The loaded tool definitions returned by tool search.

One of the following:
Function = object { name, parameters, strict, 3 more }

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: string

The name of the function to call.

parameters: map[unknown]

A JSON schema object describing the parameters of the function.

strict: boolean

Whether to enforce strict parameter validation. Default true.

type: "function"

The type of the function tool. Always function.

defer_loading: optional boolean

Whether this function is deferred and loaded via tool search.

description: optional string

A description of the function. Used by the model to determine whether or not to call the function.

FileSearch = object { type, vector_store_ids, filters, 2 more }

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: "file_search"

The type of the file search tool. Always file_search.

vector_store_ids: array of string

The IDs of the vector stores to search.

filters: optional ComparisonFilter { key, type, value } or CompoundFilter { filters, type }

A filter to apply.

One of the following:
ComparisonFilter = object { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" or "ne" or "gt" or 5 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
One of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
"in"
"nin"
value: string or number or boolean or array of string or number

The value to compare against the attribute key; supports string, number, or boolean types.

One of the following:
string
number
boolean
array of string or number
One of the following:
string
number
CompoundFilter = object { filters, type }

Combine multiple filters using and or or.

filters: array of ComparisonFilter { key, type, value } or unknown

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

One of the following:
ComparisonFilter = object { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" or "ne" or "gt" or 5 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
One of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
"in"
"nin"
value: string or number or boolean or array of string or number

The value to compare against the attribute key; supports string, number, or boolean types.

One of the following:
string
number
boolean
array of string or number
One of the following:
string
number
unknown
type: "and" or "or"

Type of operation: and or or.

One of the following:
"and"
"or"
max_num_results: optional number

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options: optional object { hybrid_search, ranker, score_threshold }

Ranking options for search.

ranker: optional "auto" or "default-2024-11-15"

The ranker to use for the file search.

One of the following:
"auto"
"default-2024-11-15"
score_threshold: optional number

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.

Computer = object { type }

A tool that controls a virtual computer. Learn more about the computer tool.

type: "computer"

The type of the computer tool. Always computer.

ComputerUsePreview = object { display_height, display_width, environment, type }

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: number

The height of the computer display.

display_width: number

The width of the computer display.

environment: "windows" or "mac" or "linux" or 2 more

The type of computer environment to control.

One of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: "computer_use_preview"

The type of the computer use tool. Always computer_use_preview.

WebSearch = object { type, filters, search_context_size, user_location }

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: "web_search" or "web_search_2025_08_26"

The type of the web search tool. One of web_search or web_search_2025_08_26.

One of the following:
"web_search"
"web_search_2025_08_26"
filters: optional object { allowed_domains }

Filters for the search.

allowed_domains: optional array of string

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size: optional "low" or "medium" or "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

One of the following:
"low"
"medium"
"high"
user_location: optional object { city, country, region, 2 more }

The approximate location of the user.

city: optional string

Free text input for the city of the user, e.g. San Francisco.

country: optional string

The two-letter ISO country code of the user, e.g. US.

region: optional string

Free text input for the region of the user, e.g. California.

timezone: optional string

The IANA timezone of the user, e.g. America/Los_Angeles.

type: optional "approximate"

The type of location approximation. Always approximate.
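
A hypothetical web_search tool configuration combining the fields above; the domain and location values are invented for illustration.

```python
web_search_tool = {
    "type": "web_search",
    "filters": {"allowed_domains": ["pubmed.ncbi.nlm.nih.gov"]},
    "search_context_size": "medium",  # the default
    "user_location": {
        "type": "approximate",
        "city": "San Francisco",
        "country": "US",  # two-letter ISO country code
        "region": "California",
        "timezone": "America/Los_Angeles",  # IANA timezone
    },
}
```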

Mcp = object { server_label, type, allowed_tools, 7 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools: optional array of string or object { read_only, tool_names }

List of allowed tool names or a filter object.

One of the following:
McpAllowedTools = array of string

A string array of allowed tool names.

McpToolFilter = object { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only: optional boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: optional array of string

List of allowed tool names.

authorization: optional string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
One of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
defer_loading: optional boolean

Whether this MCP tool is deferred and discovered via tool search.

headers: optional map[string]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval: optional object { always, never } or "always" or "never"

Specify which of the MCP server's tools require approval.

One of the following:
McpToolApprovalFilter = object { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always: optional object { read_only, tool_names }

A filter object specifying which tools always require approval.

read_only: optional boolean

Indicates whether a tool is read-only (does not modify data). Tools annotated with readOnlyHint on the MCP server match this filter.

tool_names: optional array of string

List of allowed tool names.

never: optional object { read_only, tool_names }

A filter object specifying which tools never require approval.

read_only: optional boolean

Indicates whether a tool is read-only (does not modify data). Tools annotated with readOnlyHint on the MCP server match this filter.

tool_names: optional array of string

List of allowed tool names.

McpToolApprovalSetting = "always" or "never"

Specify a single approval policy for all tools. One of always or never. When set to always, all tools will require approval. When set to never, all tools will not require approval.

One of the following:
"always"
"never"
server_description: optional string

Optional description of the MCP server, used to provide more context.

server_url: optional string

The URL for the MCP server. One of server_url or connector_id must be provided.
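
Putting the fields above together, a minimal sketch of an MCP tool definition with an allowed-tools filter and a per-tool approval policy (the server label, URL, and tool names are hypothetical):

```python
# Hypothetical MCP tool definition following the schema above.
mcp_tool = {
    "type": "mcp",
    "server_label": "docs_search",                 # identifies the server in tool calls
    "server_url": "https://example.com/mcp",       # alternatively, set connector_id
    "allowed_tools": {                             # McpToolFilter form
        "tool_names": ["search", "fetch_page"],
        "read_only": True,                         # match only readOnlyHint-annotated tools
    },
    "require_approval": {                          # McpToolApprovalFilter form
        "always": {"tool_names": ["fetch_page"]},  # this tool always needs approval
    },
}

# Exactly one of server_url / connector_id should be present.
assert ("server_url" in mcp_tool) != ("connector_id" in mcp_tool)
```

Using a service connector instead would replace `server_url` with a `connector_id` such as `"connector_dropbox"` plus an `authorization` token obtained through your own OAuth flow.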

CodeInterpreter = object { container, type }

A tool that runs Python code to help generate a response to a prompt.

container: string or object { type, file_ids, memory_limit, network_policy }

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

One of the following:
string

The container ID.

CodeInterpreterToolAuto = object { type, file_ids, memory_limit, network_policy }

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: "auto"

Always auto.

file_ids: optional array of string

An optional list of uploaded files to make available to your code.

memory_limit: optional "1g" or "4g" or "16g" or "64g"

The memory limit for the code interpreter container.

One of the following:
"1g"
"4g"
"16g"
"64g"
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets }

Network access policy for the container.

One of the following:
ContainerNetworkPolicyDisabled = object { type }
type: "disabled"

Disable outbound network access. Always disabled.

ContainerNetworkPolicyAllowlist = object { allowed_domains, type, domain_secrets }
allowed_domains: array of string

A list of allowed domains when type is allowlist.

type: "allowlist"

Allow outbound network access only to specified domains. Always allowlist.

domain_secrets: optional array of ContainerNetworkPolicyDomainSecret { domain, name, value }

Optional domain-scoped secrets for allowlisted domains.

domain: string

The domain associated with the secret.

minLength: 1
name: string

The name of the secret to inject for the domain.

minLength: 1
value: string

The secret value to inject for the domain.

maxLength: 10485760
minLength: 1
type: "code_interpreter"

The type of the code interpreter tool. Always code_interpreter.
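
As a sketch, a code_interpreter tool using the auto container form with a memory limit and an allowlist network policy (the file ID and domain are placeholders):

```python
# Hypothetical code_interpreter tool definition per the schema above.
code_tool = {
    "type": "code_interpreter",
    "container": {                       # CodeInterpreterToolAuto form
        "type": "auto",
        "file_ids": ["file-abc123"],     # hypothetical uploaded file
        "memory_limit": "4g",            # one of "1g", "4g", "16g", "64g"
        "network_policy": {              # ContainerNetworkPolicyAllowlist
            "type": "allowlist",
            "allowed_domains": ["api.example.com"],
        },
    },
}

assert code_tool["container"]["memory_limit"] in ("1g", "4g", "16g", "64g")
```

Passing a plain container ID string for `container` instead reuses an existing container rather than creating one automatically.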

ImageGeneration = object { type, action, background, 9 more }

A tool that generates images using the GPT image models.

type: "image_generation"

The type of the image generation tool. Always image_generation.

action: optional "generate" or "edit" or "auto"

Whether to generate a new image or edit an existing image. Default: auto.

One of the following:
"generate"
"edit"
"auto"
background: optional "transparent" or "opaque" or "auto"

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

One of the following:
"transparent"
"opaque"
"auto"
input_fidelity: optional "high" or "low"

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.

One of the following:
"high"
"low"
input_image_mask: optional object { file_id, image_url }

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id: optional string

File ID for the mask image.

image_url: optional string

Base64-encoded mask image.

model: optional string or "gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

One of the following:
string
"gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

One of the following:
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation: optional "auto" or "low"

Moderation level for the generated image. Default: auto.

One of the following:
"auto"
"low"
output_compression: optional number

Compression level for the output image. Default: 100.

minimum: 0
maximum: 100
output_format: optional "png" or "webp" or "jpeg"

The output format of the generated image. One of png, webp, or jpeg. Default: png.

One of the following:
"png"
"webp"
"jpeg"
partial_images: optional number

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum: 0
maximum: 3
quality: optional "low" or "medium" or "high" or "auto"

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

One of the following:
"low"
"medium"
"high"
"auto"
size: optional "1024x1024" or "1024x1536" or "1536x1024" or "auto"

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

One of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
LocalShell = object { type }

A tool that allows the model to execute shell commands in a local environment.

type: "local_shell"

The type of the local shell tool. Always local_shell.

Shell = object { type, environment }

A tool that allows the model to execute shell commands.

type: "shell"

The type of the shell tool. Always shell.

environment: optional ContainerAuto { type, file_ids, memory_limit, 2 more } or LocalEnvironment { type, skills } or ContainerReference { container_id, type }
One of the following:
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
type: "container_auto"

Automatically creates a container for this request.

file_ids: optional array of string

An optional list of uploaded files to make available to your code.

memory_limit: optional "1g" or "4g" or "16g" or "64g"

The memory limit for the container.

One of the following:
"1g"
"4g"
"16g"
"64g"
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets }

Network access policy for the container.

One of the following:
ContainerNetworkPolicyDisabled = object { type }
type: "disabled"

Disable outbound network access. Always disabled.

ContainerNetworkPolicyAllowlist = object { allowed_domains, type, domain_secrets }
allowed_domains: array of string

A list of allowed domains when type is allowlist.

type: "allowlist"

Allow outbound network access only to specified domains. Always allowlist.

domain_secrets: optional array of ContainerNetworkPolicyDomainSecret { domain, name, value }

Optional domain-scoped secrets for allowlisted domains.

domain: string

The domain associated with the secret.

minLength: 1
name: string

The name of the secret to inject for the domain.

minLength: 1
value: string

The secret value to inject for the domain.

maxLength: 10485760
minLength: 1
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type }

An optional list of skills referenced by id or inline data.

One of the following:
SkillReference = object { skill_id, type, version }
skill_id: string

The ID of the referenced skill.

maxLength: 64
minLength: 1
type: "skill_reference"

References a skill created with the /v1/skills endpoint.

version: optional string

Optional skill version. Use a positive integer or 'latest'. Omit for default.

InlineSkill = object { description, name, source, type }
description: string

The description of the skill.

name: string

The name of the skill.

source: InlineSkillSource { data, media_type, type }

Inline skill payload.

type: "inline"

Defines an inline skill for this request.

LocalEnvironment = object { type, skills }
type: "local"

Use a local computer environment.

skills: optional array of LocalSkill { description, name, path }

An optional list of skills.

description: string

The description of the skill.

name: string

The name of the skill.

path: string

The path to the directory containing the skill.

ContainerReference = object { container_id, type }
container_id: string

The ID of the referenced container.

type: "container_reference"

References a container created with the /v1/containers endpoint.
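
The environment variants above can be combined into a shell tool definition. A sketch using the container_auto form with a referenced skill (the skill ID is hypothetical):

```python
# Hypothetical shell tool definition per the schema above.
shell_tool = {
    "type": "shell",
    "environment": {                             # ContainerAuto variant
        "type": "container_auto",
        "memory_limit": "1g",
        "network_policy": {"type": "disabled"},  # no outbound network access
        "skills": [
            {   # SkillReference: points at a skill made via /v1/skills
                "type": "skill_reference",
                "skill_id": "skill_123",         # hypothetical skill ID
                "version": "latest",
            },
        ],
    },
}
```

Swapping `environment` for `{"type": "local"}` runs commands in a local environment instead, and `{"type": "container_reference", "container_id": "..."}` reuses an existing container.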

Custom = object { name, type, defer_loading, 2 more }

A custom tool that processes input using a specified format. Learn more about custom tools.

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

defer_loading: optional boolean

Whether this tool should be deferred and discovered via tool search.

description: optional string

Optional description of the custom tool, used to provide more context.

format: optional CustomToolInputFormat

The input format for the custom tool. Default is unconstrained text.
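
A minimal sketch of a custom tool definition (the tool name and description are hypothetical):

```python
# Hypothetical custom tool definition per the schema above.
custom_tool = {
    "type": "custom",
    "name": "run_sql",                                  # identifies the tool in tool calls
    "description": "Execute a read-only SQL query.",
    "defer_loading": True,                              # discovered via tool search
    # "format" omitted: input defaults to unconstrained text
}
```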

Namespace = object { description, name, tools, type }

Groups function/custom tools under a shared namespace.

description: string

A description of the namespace shown to the model.

minLength: 1
name: string

The namespace name used in tool calls (for example, crm).

minLength: 1
tools: array of object { name, type, defer_loading, 3 more } or object { name, type, defer_loading, 2 more }

The function/custom tools available inside this namespace.

One of the following:
Function = object { name, type, defer_loading, 3 more }
name: string
maxLength: 128
minLength: 1
type: "function"
defer_loading: optional boolean

Whether this function should be deferred and discovered via tool search.

description: optional string
parameters: optional unknown
strict: optional boolean
Custom = object { name, type, defer_loading, 2 more }

A custom tool that processes input using a specified format. Learn more about custom tools.

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

defer_loading: optional boolean

Whether this tool should be deferred and discovered via tool search.

description: optional string

Optional description of the custom tool, used to provide more context.

format: optional CustomToolInputFormat

The input format for the custom tool. Default is unconstrained text.

type: "namespace"

The type of the tool. Always namespace.
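
A sketch of a namespace grouping one function tool and one custom tool (the names, schema, and description are hypothetical):

```python
# Hypothetical namespace tool definition per the schema above.
crm_namespace = {
    "type": "namespace",
    "name": "crm",                                  # namespace name used in tool calls
    "description": "Tools for querying the CRM.",
    "tools": [
        {   # Function variant
            "type": "function",
            "name": "find_contact",
            "description": "Look up a contact by email.",
            "parameters": {
                "type": "object",
                "properties": {"email": {"type": "string"}},
                "required": ["email"],
            },
            "strict": True,
        },
        {   # Custom variant
            "type": "custom",
            "name": "attach_note",
            "description": "Attach a free-text note to a record.",
        },
    ],
}

assert all(t["type"] in ("function", "custom") for t in crm_namespace["tools"])
```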

ToolSearch = object { type, description, execution, parameters }

Hosted or BYOT tool search configuration for deferred tools.

type: "tool_search"

The type of the tool. Always tool_search.

description: optional string

Description shown to the model for a client-executed tool search tool.

execution: optional "server" or "client"

Whether tool search is executed by the server or by the client.

One of the following:
"server"
"client"
parameters: optional unknown

Parameter schema for a client-executed tool search tool.

WebSearchPreview = object { type, search_content_types, search_context_size, user_location }

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: "web_search_preview" or "web_search_preview_2025_03_11"

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

One of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_content_types: optional array of "text" or "image"
One of the following:
"text"
"image"
search_context_size: optional "low" or "medium" or "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

One of the following:
"low"
"medium"
"high"
user_location: optional object { type, city, country, 2 more }

The user's location.

type: "approximate"

The type of location approximation. Always approximate.

city: optional string

Free text input for the city of the user, e.g. San Francisco.

country: optional string

The two-letter ISO country code of the user, e.g. US.

region: optional string

Free text input for the region of the user, e.g. California.

timezone: optional string

The IANA timezone of the user, e.g. America/Los_Angeles.
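
A sketch of a web search tool definition with an approximate user location (the location values are illustrative):

```python
# Hypothetical web_search_preview tool definition per the schema above.
web_search_tool = {
    "type": "web_search_preview",
    "search_context_size": "medium",            # default context budget
    "search_content_types": ["text", "image"],
    "user_location": {                          # approximate location only
        "type": "approximate",
        "city": "San Francisco",
        "region": "California",
        "country": "US",                        # two-letter ISO country code
        "timezone": "America/Los_Angeles",      # IANA timezone
    },
}

assert len(web_search_tool["user_location"]["country"]) == 2
```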

ApplyPatch = object { type }

Allows the assistant to create, delete, or update files using unified diffs.

type: "apply_patch"

The type of the tool. Always apply_patch.

type: "tool_search_output"

The type of the item. Always tool_search_output.

created_by: optional string

The identifier of the actor that created the item.

Reasoning = object { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: array of SummaryTextContent { text, type }

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content: optional array of object { text, type }

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content: optional string

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status: optional "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
Compaction = object { id, encrypted_content, type, created_by }

A compaction item generated by the v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by: optional string

The identifier of the actor that created the item.

CodeInterpreterCall = object { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: array of object { logs, type } or object { type, url }

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

One of the following:
Logs = object { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image = object { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" or "completed" or "incomplete" or 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

One of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.

LocalShellCall = object { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: object { command, env, type, 3 more }

Execute a shell command on the server.

command: array of string

The command to run.

env: map[string]

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms: optional number

Optional timeout in milliseconds for the command.

user: optional string

Optional user to run the command as.

working_directory: optional string

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" or "completed" or "incomplete"

The status of the local shell call.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

LocalShellCallOutput = object { id, output, type, status }

The output of a local shell tool call.

id: string

The unique ID of the local shell tool call generated by the model.

output: string

A JSON string of the output of the local shell tool call.

type: "local_shell_call_output"

The type of the local shell tool call output. Always local_shell_call_output.

status: optional "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete.

One of the following:
"in_progress"
"completed"
"incomplete"
ShellCall = object { id, action, call_id, 4 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: object { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: array of string
max_output_length: number

Optional maximum number of characters to return from each command.

timeout_ms: number

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

environment: ResponseLocalEnvironment { type } or ResponseContainerReference { container_id, type }

The environment used to execute the shell commands.

One of the following:
ResponseLocalEnvironment = object { type }

Represents the use of a local environment to perform shell actions.

type: "local"

The environment type. Always local.

ResponseContainerReference = object { container_id, type }

Represents a container created with /v1/containers.

container_id: string
type: "container_reference"

The environment type. Always container_reference.

status: "in_progress" or "completed" or "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by: optional string

The ID of the entity that created this tool call.

ShellCallOutput = object { id, call_id, max_output_length, 4 more }

The output emitted by a shell tool call.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: array of object { outcome, stderr, stdout, created_by }

An array of shell call output contents.

outcome: object { type } or object { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

One of the following:
Timeout = object { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit = object { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by: optional string

The identifier of the actor that created the item.

status: "in_progress" or "completed" or "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by: optional string

The identifier of the actor that created the item.
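
Tying the fields above together, a sketch of building a shell_call_output item, echoing back the model-generated `max_output_length` and truncating captured output to it (the call ID is hypothetical):

```python
def make_shell_output(call_id: str, stdout: str, exit_code: int, max_len: int) -> dict:
    """Build a shell_call_output item for one finished command (sketch)."""
    return {
        "type": "shell_call_output",
        "call_id": call_id,
        "max_output_length": max_len,   # pass the model-generated limit back unchanged
        "status": "completed",
        "output": [{
            "stdout": stdout[:max_len], # truncate to the requested limit
            "stderr": "",
            "outcome": {"type": "exit", "exit_code": exit_code},
        }],
    }

item = make_shell_output("sc_1", "hello\n", 0, 1024)
```

A command that hit its time limit would instead carry `"outcome": {"type": "timeout"}`.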

ApplyPatchCall = object { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: object { diff, path, type } or object { path, type } or object { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

One of the following:
CreateFile = object { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile = object { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile = object { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" or "completed"

The status of the apply patch tool call. One of in_progress or completed.

One of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by: optional string

The ID of the entity that created this tool call.

ApplyPatchCallOutput = object { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" or "failed"

The status of the apply patch tool call output. One of completed or failed.

One of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by: optional string

The ID of the entity that created this tool call output.

output: optional string

Optional textual output returned by the apply patch tool.
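
A client that applies the patch itself then reports back with an apply_patch_call_output. A sketch of acknowledging a handled call (the actual file edits would be applied by your own diff machinery; IDs, paths, and the output text are hypothetical):

```python
def ack_apply_patch(call: dict) -> dict:
    """Build an apply_patch_call_output for a handled apply_patch_call (sketch)."""
    op = call["operation"]
    assert op["type"] in ("create_file", "delete_file", "update_file")
    return {
        "type": "apply_patch_call_output",
        "call_id": call["call_id"],                      # links output to the call
        "status": "completed",                           # or "failed"
        "output": f"{op['type']} applied to {op['path']}",
    }

# Hypothetical incoming call with a create_file operation.
call = {"type": "apply_patch_call", "call_id": "apc_1", "status": "completed",
        "operation": {"type": "create_file", "path": "src/new.py",
                      "diff": "+print('hi')\n"}}
result = ack_apply_patch(call)
```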

McpListTools = object { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: array of object { input_schema, name, annotations, description }

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations: optional unknown

Additional annotations about the tool.

description: optional string

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error: optional string

Error message if the server could not list tools.

McpApprovalRequest = object { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

McpApprovalResponse = object { id, approval_request_id, approve, 2 more }

A response to an MCP approval request.

id: string

The unique ID of the approval response.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

reason: optional string

Optional reason for the decision.

McpCall = object { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id: optional string

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error: optional string

The error from the tool call, if any.

output: optional string

The output from the tool call.

status: optional "in_progress" or "completed" or "incomplete" or 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

One of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
CustomToolCall = object { call_id, input, name, 3 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id: optional string

The unique ID of the custom tool call in the OpenAI platform.

namespace: optional string

The namespace of the custom tool being called.

CustomToolCallOutput = object { call_id, output, type, id }

The output of a custom tool call from your code, being sent back to the model.

call_id: string

The call ID, used to map this custom tool call output to a custom tool call.

output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }

The output from the custom tool call generated by your code. Can be a string or a list of output content.

One of the following:
StringOutput = string

A string of the output of the custom tool call.

OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }

Text, image, or file output of the custom tool call.

One of the following:
ResponseInputText = object { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage = object { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" or "high" or "auto" or "original"

The detail level of the image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
type: "input_image"

The type of the input item. Always input_image.

file_id: optional string

The ID of the file to be sent to the model.

image_url: optional string

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile = object { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data: optional string

The content of the file to be sent to the model.

file_id: optional string

The ID of the file to be sent to the model.

file_url: optional string

The URL of the file to be sent to the model.

filename: optional string

The name of the file to be sent to the model.

type: "custom_tool_call_output"

The type of the custom tool call output. Always custom_tool_call_output.

id: optional string

The unique ID of the custom tool call output in the OpenAI platform.
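
A sketch of mapping a custom_tool_call to its output using the simple string form (the tool name, input, and result are hypothetical):

```python
def custom_tool_output(call: dict, result: str) -> dict:
    """Build a custom_tool_call_output (string form) for a handled call."""
    return {
        "type": "custom_tool_call_output",
        "call_id": call["call_id"],   # maps this output back to the call
        "output": result,             # or a list of input_text/input_image/input_file parts
    }

# Hypothetical call produced by the model.
call = {"type": "custom_tool_call", "call_id": "ctc_1",
        "name": "run_sql", "input": "SELECT 1"}
out = custom_tool_output(call, "1")
```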

ConversationItemList = object { data, first_id, has_more, 2 more }

A list of Conversation items.

data: array of ConversationItem

A list of conversation items.

One of the following:
Message = object { id, content, role, 2 more }

A message to or from the model.

id: string

The unique ID of the message.

content: array of ResponseInputText { text, type } or ResponseOutputText { annotations, logprobs, text, type } or TextContent { text, type } or 6 more

The content of the message.

One of the following:
ResponseInputText = object { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseOutputText = object { annotations, logprobs, text, type }

A text output from the model.

annotations: array of object { file_id, filename, index, type } or object { end_index, start_index, title, 2 more } or object { container_id, end_index, file_id, 3 more } or object { file_id, index, type }

The annotations of the text output.

One of the following:
FileCitation = object { file_id, filename, index, type }

A citation to a file.

file_id: string

The ID of the file.

filename: string

The filename of the file cited.

index: number

The index of the file in the list of files.

type: "file_citation"

The type of the file citation. Always file_citation.

URLCitation = object { end_index, start_index, title, 2 more }

A citation for a web resource used to generate a model response.

end_index: number

The index of the last character of the URL citation in the message.

start_index: number

The index of the first character of the URL citation in the message.

title: string

The title of the web resource.

type: "url_citation"

The type of the URL citation. Always url_citation.

url: string

The URL of the web resource.

ContainerFileCitation = object { container_id, end_index, file_id, 3 more }

A citation for a container file used to generate a model response.

container_id: string

The ID of the container file.

end_index: number

The index of the last character of the container file citation in the message.

file_id: string

The ID of the file.

filename: string

The filename of the container file cited.

start_index: number

The index of the first character of the container file citation in the message.

type: "container_file_citation"

The type of the container file citation. Always container_file_citation.

FilePath = object { file_id, index, type }

A path to a file.

file_id: string

The ID of the file.

index: number

The index of the file in the list of files.

type: "file_path"

The type of the file path. Always file_path.

logprobs: array of object { token, bytes, logprob, top_logprobs }
token: string
bytes: array of number
logprob: number
top_logprobs: array of object { token, bytes, logprob }
token: string
bytes: array of number
logprob: number
text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.
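
As a concrete illustration, here is the JSON shape of an output_text part carrying a single url_citation annotation. Field names come from the schema above; the text, URL, title, and indices are invented.

```python
import json

# Illustrative output_text content part; values are made up, not real API output.
output_text = {
    "type": "output_text",
    "text": "Sea otters hold hands while sleeping.",
    "annotations": [
        {
            "type": "url_citation",
            "url": "https://example.com/otters",     # cited web resource
            "title": "Otter facts",
            "start_index": 0,                        # first character of the citation
            "end_index": 37,                         # last character of the citation
        }
    ],
    "logprobs": [],
}
print(json.dumps(output_text, indent=2))
```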

TextContent = object { text, type }

A text content.

text: string
type: "text"
SummaryTextContent = object { text, type }

A summary text from the model.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

ReasoningText = object { text, type }

Reasoning text from the model.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

ResponseOutputRefusal = object { refusal, type }

A refusal from the model.

refusal: string

The refusal explanation from the model.

type: "refusal"

The type of the refusal. Always refusal.

ResponseInputImage = object { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" or "high" or "auto" or "original"

The detail level of the image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
type: "input_image"

The type of the input item. Always input_image.

file_id: optional string

The ID of the file to be sent to the model.

image_url: optional string

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.
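
For example, an input_image part can carry the image inline as a base64 data URL. The sketch below uses placeholder bytes, not a real image:

```python
import base64

# Placeholder payload standing in for real PNG bytes.
png_stub = base64.b64encode(b"\x89PNG...").decode()

# Illustrative input_image content part; supply either file_id or image_url.
input_image = {
    "type": "input_image",
    "detail": "auto",                                # one of low, high, auto, original
    "image_url": f"data:image/png;base64,{png_stub}",
}
```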

ComputerScreenshotContent = object { detail, file_id, image_url, type }

A screenshot of a computer.

detail: "low" or "high" or "auto" or "original"

The detail level of the screenshot image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
file_id: string

The identifier of an uploaded file that contains the screenshot.

image_url: string

The URL of the screenshot image.

type: "computer_screenshot"

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

ResponseInputFile = object { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data: optional string

The content of the file to be sent to the model.

file_id: optional string

The ID of the file to be sent to the model.

file_url: optional string

The URL of the file to be sent to the model.

filename: optional string

The name of the file to be sent to the model.

role: "unknown" or "user" or "assistant" or 5 more

The role of the message. One of unknown, user, assistant, system, critic, discriminator, developer, or tool.

One of the following:
"unknown"
"user"
"assistant"
"system"
"critic"
"discriminator"
"developer"
"tool"
status: "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "message"

The type of the message. Always set to message.
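
Putting the content parts together, a complete message item might look like the following sketch. The id and file_id are invented; in practice the id is assigned by the API.

```python
# Illustrative user message item combining text and image content parts.
user_message = {
    "type": "message",
    "id": "msg_abc123",            # invented ID for illustration
    "role": "user",
    "status": "completed",
    "content": [
        {"type": "input_text", "text": "What is on this screen?"},
        {"type": "input_image", "detail": "auto", "file_id": "file-xyz"},
    ],
}
```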

FunctionCall = object { id, arguments, call_id, 5 more }
id: string

The unique ID of the function tool call.

arguments: string

A JSON string of the arguments to pass to the function.

call_id: string

The unique ID of the function tool call generated by the model.

name: string

The name of the function to run.

status: "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "function_call"

The type of the function tool call. Always function_call.

created_by: optional string

The identifier of the actor that created the item.

namespace: optional string

The namespace of the function to run.

FunctionCallOutput = object { id, call_id, output, 3 more }
id: string

The unique ID of the function call tool output.

call_id: string

The unique ID of the function tool call generated by the model.

output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }

The output from the function call generated by your code. Can be a string or a list of output content.

One of the following:
StringOutput = string

A string of the output of the function call.

OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }

Text, image, or file output of the function call.

One of the following:
ResponseInputText = object { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage = object { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" or "high" or "auto" or "original"

The detail level of the image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
type: "input_image"

The type of the input item. Always input_image.

file_id: optional string

The ID of the file to be sent to the model.

image_url: optional string

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile = object { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data: optional string

The content of the file to be sent to the model.

file_id: optional string

The ID of the file to be sent to the model.

file_url: optional string

The URL of the file to be sent to the model.

filename: optional string

The name of the file to be sent to the model.

status: "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "function_call_output"

The type of the function tool call output. Always function_call_output.

created_by: optional string

The identifier of the actor that created the item.
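
A function_call item and its matching function_call_output are linked by call_id. The sketch below pairs the two; the function name, IDs, and argument values are hypothetical.

```python
import json

# Illustrative function_call item emitted by the model.
call = {
    "type": "function_call",
    "id": "fc_001",                          # invented item ID
    "call_id": "call_abc",
    "name": "get_weather",                   # hypothetical function name
    "arguments": json.dumps({"city": "Paris"}),
    "status": "completed",
}

# Your code parses the JSON argument string before running the function.
args = json.loads(call["arguments"])

# The output item echoes the call's call_id so the API can pair them.
output = {
    "type": "function_call_output",
    "id": "fco_001",                         # invented item ID
    "call_id": call["call_id"],
    "output": json.dumps({"temp_c": 18}),    # string form; a content list is also allowed
    "status": "completed",
}
```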

FileSearchCall = object { id, queries, status, 2 more }

The results of a file search tool call. See the file search guide for more information.

id: string

The unique ID of the file search tool call.

queries: array of string

The queries used to search for files.

status: "in_progress" or "searching" or "completed" or 2 more

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

One of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: "file_search_call"

The type of the file search tool call. Always file_search_call.

results: optional array of object { attributes, file_id, filename, 2 more }

The results of the file search tool call.

attributes: optional map[string or number or boolean]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

One of the following:
string
number
boolean
file_id: optional string

The unique ID of the file.

filename: optional string

The name of the file.

score: optional number

The relevance score of the file - a value between 0 and 1.

format: float
text: optional string

The text that was retrieved from the file.
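
An illustrative completed file_search_call with a single result, showing the optional attributes, score, and text fields. The IDs, filename, and values are invented.

```python
# Illustrative file_search_call item; all values are made up.
file_search_call = {
    "type": "file_search_call",
    "id": "fsc_001",                         # invented item ID
    "queries": ["quarterly revenue"],
    "status": "completed",
    "results": [
        {
            "file_id": "file-abc",
            "filename": "q3_report.pdf",
            "score": 0.87,                   # relevance score in [0, 1]
            "text": "Revenue grew 12% quarter over quarter...",
            "attributes": {"department": "finance", "year": 2024},
        }
    ],
}
```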

WebSearchCall = object { id, action, status, type }

The results of a web search tool call. See the web search guide for more information.

id: string

The unique ID of the web search tool call.

action: object { query, type, queries, sources } or object { type, url } or object { pattern, type, url }

An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find_in_page).

One of the following:
Search = object { query, type, queries, sources }

Action type "search" - Performs a web search query.

query: string

[DEPRECATED] The search query.

type: "search"

The action type.

queries: optional array of string

The search queries.

sources: optional array of object { type, url }

The sources used in the search.

type: "url"

The type of source. Always url.

url: string

The URL of the source.

OpenPage = object { type, url }

Action type "open_page" - Opens a specific URL from search results.

type: "open_page"

The action type.

url: optional string

The URL opened by the model.

format: uri
FindInPage = object { pattern, type, url }

Action type "find_in_page": Searches for a pattern within a loaded page.

pattern: string

The pattern or text to search for within the page.

type: "find_in_page"

The action type.

url: string

The URL of the page searched for the pattern.

format: uri
status: "in_progress" or "searching" or "completed" or "failed"

The status of the web search tool call.

One of the following:
"in_progress"
"searching"
"completed"
"failed"
type: "web_search_call"

The type of the web search tool call. Always web_search_call.
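
A sketch of a completed web_search_call whose action is a search, including the optional queries and sources fields. The query and URL are invented:

```python
# Illustrative web_search_call item with a "search" action.
web_search_call = {
    "type": "web_search_call",
    "id": "ws_001",                          # invented item ID
    "status": "completed",
    "action": {
        "type": "search",
        "queries": ["latest CPython release"],
        "sources": [
            {"type": "url", "url": "https://www.python.org/downloads/"},
        ],
    },
}
```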

ImageGenerationCall = object { id, result, status, type }

An image generation request made by the model.

id: string

The unique ID of the image generation call.

result: string

The generated image encoded in base64.

status: "in_progress" or "completed" or "generating" or "failed"

The status of the image generation call.

One of the following:
"in_progress"
"completed"
"generating"
"failed"
type: "image_generation_call"

The type of the image generation call. Always image_generation_call.

ComputerCall = object { id, call_id, pending_safety_checks, 4 more }

A tool call to a computer use tool. See the computer use guide for more information.

id: string

The unique ID of the computer call.

call_id: string

An identifier used when responding to the tool call with output.

pending_safety_checks: array of object { id, code, message }

The pending safety checks for the computer call.

id: string

The ID of the pending safety check.

code: optional string

The type of the pending safety check.

message: optional string

Details about the pending safety check.

status: "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "computer_call"

The type of the computer call. Always computer_call.

action: optional ComputerAction

The action to perform in this computer call.

actions: optional ComputerActionList { Click, DoubleClick, Drag, 6 more }

Flattened batched actions for computer_use. Each action includes a type discriminator and action-specific fields.

ComputerCallOutput = object { id, call_id, output, 4 more }
id: string

The unique ID of the computer call tool output.

call_id: string

The ID of the computer tool call that produced the output.

output: ResponseComputerToolCallOutputScreenshot { type, file_id, image_url }

A computer screenshot image used with the computer use tool.

status: "completed" or "incomplete" or "failed" or "in_progress"

The status of the computer call output. One of completed, incomplete, failed, or in_progress. Populated when items are returned via API.

One of the following:
"completed"
"incomplete"
"failed"
"in_progress"
type: "computer_call_output"

The type of the computer tool call output. Always computer_call_output.

acknowledged_safety_checks: optional array of object { id, code, message }

The safety checks reported by the API that have been acknowledged by the developer.

id: string

The ID of the pending safety check.

code: optional string

The type of the pending safety check.

message: optional string

Details about the pending safety check.

created_by: optional string

The identifier of the actor that created the item.

ToolSearchCall = object { id, arguments, call_id, 4 more }
id: string

The unique ID of the tool search call item.

arguments: unknown

Arguments used for the tool search call.

call_id: string

The unique ID of the tool search call generated by the model.

execution: "server" or "client"

Whether tool search was executed by the server or by the client.

One of the following:
"server"
"client"
status: "in_progress" or "completed" or "incomplete"

The status of the tool search call item that was recorded.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "tool_search_call"

The type of the item. Always tool_search_call.

created_by: optional string

The identifier of the actor that created the item.

ToolSearchOutput = object { id, call_id, execution, 4 more }
id: string

The unique ID of the tool search output item.

call_id: string

The unique ID of the tool search call generated by the model.

execution: "server" or "client"

Whether tool search was executed by the server or by the client.

One of the following:
"server"
"client"
status: "in_progress" or "completed" or "incomplete"

The status of the tool search output item that was recorded.

One of the following:
"in_progress"
"completed"
"incomplete"
tools: array of object { name, parameters, strict, 3 more } or object { type, vector_store_ids, filters, 2 more } or object { type } or 12 more

The loaded tool definitions returned by tool search.

One of the following:
Function = object { name, parameters, strict, 3 more }

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: string

The name of the function to call.

parameters: map[unknown]

A JSON schema object describing the parameters of the function.

strict: boolean

Whether to enforce strict parameter validation. Default true.

type: "function"

The type of the function tool. Always function.

defer_loading: optional boolean

Whether this function is deferred and loaded via tool search.

description: optional string

A description of the function. Used by the model to determine whether or not to call the function.

FileSearch = object { type, vector_store_ids, filters, 2 more }

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: "file_search"

The type of the file search tool. Always file_search.

vector_store_ids: array of string

The IDs of the vector stores to search.

filters: optional ComparisonFilter { key, type, value } or CompoundFilter { filters, type }

A filter to apply.

One of the following:
ComparisonFilter = object { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" or "ne" or "gt" or 5 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
One of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
"in"
"nin"
value: string or number or boolean or array of string or number

The value to compare against the attribute key; supports string, number, or boolean types.

One of the following:
string
number
boolean
array of string or number
One of the following:
string
number
CompoundFilter = object { filters, type }

Combine multiple filters using and or or.

filters: array of ComparisonFilter { key, type, value } or unknown

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

One of the following:
ComparisonFilter = object { key, type, value }

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: string

The key to compare against the value.

type: "eq" or "ne" or "gt" or 5 more

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
One of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
"in"
"nin"
value: string or number or boolean or array of string or number

The value to compare against the attribute key; supports string, number, or boolean types.

One of the following:
string
number
boolean
array of string or number
One of the following:
string
number
unknown
type: "and" or "or"

Type of operation: and or or.

One of the following:
"and"
"or"
max_num_results: optional number

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options: optional object { hybrid_search, ranker, score_threshold }

Ranking options for search.

ranker: optional "auto" or "default-2024-11-15"

The ranker to use for the file search.

One of the following:
"auto"
"default-2024-11-15"
score_threshold: optional number

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.
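
A file_search tool definition combining a CompoundFilter over two ComparisonFilters with ranking options might look like this. The vector store ID and attribute keys are invented:

```python
# Illustrative file_search tool configuration; IDs and keys are made up.
file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_abc123"],       # invented vector store ID
    "max_num_results": 10,                   # must be between 1 and 50
    "filters": {
        "type": "and",                       # CompoundFilter: all sub-filters must match
        "filters": [
            {"key": "department", "type": "eq", "value": "finance"},
            {"key": "year", "type": "gte", "value": 2023},
        ],
    },
    "ranking_options": {"ranker": "auto", "score_threshold": 0.5},
}
```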

Computer = object { type }

A tool that controls a virtual computer. Learn more about the computer tool.

type: "computer"

The type of the computer tool. Always computer.

ComputerUsePreview = object { display_height, display_width, environment, type }

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: number

The height of the computer display.

display_width: number

The width of the computer display.

environment: "windows" or "mac" or "linux" or 2 more

The type of computer environment to control.

One of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: "computer_use_preview"

The type of the computer use tool. Always computer_use_preview.

WebSearch = object { type, filters, search_context_size, user_location }

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: "web_search" or "web_search_2025_08_26"

The type of the web search tool. One of web_search or web_search_2025_08_26.

One of the following:
"web_search"
"web_search_2025_08_26"
filters: optional object { allowed_domains }

Filters for the search.

allowed_domains: optional array of string

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size: optional "low" or "medium" or "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

One of the following:
"low"
"medium"
"high"
user_location: optional object { city, country, region, 2 more }

The approximate location of the user.

city: optional string

Free text input for the city of the user, e.g. San Francisco.

country: optional string

The two-letter ISO country code of the user, e.g. US.

region: optional string

Free text input for the region of the user, e.g. California.

timezone: optional string

The IANA timezone of the user, e.g. America/Los_Angeles.

type: optional "approximate"

The type of location approximation. Always approximate.
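
An example web_search tool configuration using the fields above; the domain and location values are illustrative:

```python
# Illustrative web_search tool configuration.
web_search_tool = {
    "type": "web_search",
    "filters": {"allowed_domains": ["pubmed.ncbi.nlm.nih.gov"]},
    "search_context_size": "medium",         # low, medium (default), or high
    "user_location": {
        "type": "approximate",
        "city": "San Francisco",
        "region": "California",
        "country": "US",                     # two-letter ISO country code
        "timezone": "America/Los_Angeles",   # IANA timezone
    },
}
```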

Mcp = object { server_label, type, allowed_tools, 7 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools: optional array of string or object { read_only, tool_names }

List of allowed tool names or a filter object.

One of the following:
McpAllowedTools = array of string

A string array of allowed tool names

McpToolFilter = object { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only: optional boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: optional array of string

List of allowed tool names.

authorization: optional string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
One of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
defer_loading: optional boolean

Whether this MCP tool is deferred and discovered via tool search.

headers: optional map[string]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval: optional object { always, never } or "always" or "never"

Specify which of the MCP server's tools require approval.

One of the following:
McpToolApprovalFilter = object { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always: optional object { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only: optional boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: optional array of string

List of allowed tool names.

never: optional object { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only: optional boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: optional array of string

List of allowed tool names.

McpToolApprovalSetting = "always" or "never"

Specify a single approval policy for all tools. One of always or never. When set to always, all tools will require approval. When set to never, all tools will not require approval.

One of the following:
"always"
"never"
server_description: optional string

Optional description of the MCP server, used to provide more context.

server_url: optional string

The URL for the MCP server. One of server_url or connector_id must be provided.
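
A sketch of an mcp tool definition pointing at a custom server URL, restricting it to read-only tools and never requiring approval for them. The label, URL, and token are placeholders:

```python
# Illustrative mcp tool configuration; server details are placeholders.
mcp_tool = {
    "type": "mcp",
    "server_label": "crm",                            # invented label
    "server_url": "https://mcp.example.com",          # or supply connector_id instead
    "headers": {"Authorization": "Bearer <token>"},   # placeholder auth header
    "allowed_tools": {"read_only": True},             # filter: read-only tools only
    "require_approval": {"never": {"read_only": True}},
}
```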

CodeInterpreter = object { container, type }

A tool that runs Python code to help generate a response to a prompt.

container: string or object { type, file_ids, memory_limit, network_policy }

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

One of the following:
string

The container ID.

CodeInterpreterToolAuto = object { type, file_ids, memory_limit, network_policy }

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: "auto"

Always auto.

file_ids: optional array of string

An optional list of uploaded files to make available to your code.

memory_limit: optional "1g" or "4g" or "16g" or "64g"

The memory limit for the code interpreter container.

One of the following:
"1g"
"4g"
"16g"
"64g"
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets }

Network access policy for the container.

One of the following:
ContainerNetworkPolicyDisabled = object { type }
type: "disabled"

Disable outbound network access. Always disabled.

ContainerNetworkPolicyAllowlist = object { allowed_domains, type, domain_secrets }
allowed_domains: array of string

A list of allowed domains when type is allowlist.

type: "allowlist"

Allow outbound network access only to specified domains. Always allowlist.

domain_secrets: optional array of ContainerNetworkPolicyDomainSecret { domain, name, value }

Optional domain-scoped secrets for allowlisted domains.

domain: string

The domain associated with the secret.

minLength: 1
name: string

The name of the secret to inject for the domain.

minLength: 1
value: string

The secret value to inject for the domain.

maxLength: 10485760
minLength: 1
type: "code_interpreter"

The type of the code interpreter tool. Always code_interpreter.
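
A code_interpreter tool whose container is configured inline rather than by ID, with an allowlist network policy. The file ID and domain are illustrative:

```python
# Illustrative code_interpreter tool with an inline container configuration.
code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc"],            # invented uploaded-file ID
        "memory_limit": "4g",                # one of 1g, 4g, 16g, 64g
        "network_policy": {
            "type": "allowlist",
            "allowed_domains": ["pypi.org"],
        },
    },
}
```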

ImageGeneration = object { type, action, background, 9 more }

A tool that generates images using the GPT image models.

type: "image_generation"

The type of the image generation tool. Always image_generation.

action: optional "generate" or "edit" or "auto"

Whether to generate a new image or edit an existing image. Default: auto.

One of the following:
"generate"
"edit"
"auto"
background: optional "transparent" or "opaque" or "auto"

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

One of the following:
"transparent"
"opaque"
"auto"
input_fidelity: optional "high" or "low"

Control how much effort the model will exert to match the style and features, especially facial features, of input images. Supported for gpt-image-1, gpt-image-1.5, and later models; unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.

One of the following:
"high"
"low"
input_image_mask: optional object { file_id, image_url }

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id: optional string

File ID for the mask image.

image_url: optional string

Base64-encoded mask image.

model: optional string or "gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

One of the following:
string
"gpt-image-1" or "gpt-image-1-mini" or "gpt-image-1.5"

The image generation model to use. Default: gpt-image-1.

One of the following:
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation: optional "auto" or "low"

Moderation level for the generated image. Default: auto.

One of the following:
"auto"
"low"
output_compression: optional number

Compression level for the output image. Default: 100.

minimum: 0
maximum: 100
output_format: optional "png" or "webp" or "jpeg"

The output format of the generated image. One of png, webp, or jpeg. Default: png.

One of the following:
"png"
"webp"
"jpeg"
partial_images: optional number

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum: 0
maximum: 3
quality: optional "low" or "medium" or "high" or "auto"

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

One of the following:
"low"
"medium"
"high"
"auto"
size: optional "1024x1024" or "1024x1536" or "1536x1024" or "auto"

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

One of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
LocalShell = object { type }

A tool that allows the model to execute shell commands in a local environment.

type: "local_shell"

The type of the local shell tool. Always local_shell.

Shell = object { type, environment }

A tool that allows the model to execute shell commands.

type: "shell"

The type of the shell tool. Always shell.

environment: optional ContainerAuto { type, file_ids, memory_limit, 2 more } or LocalEnvironment { type, skills } or ContainerReference { container_id, type }
One of the following:
ContainerAuto = object { type, file_ids, memory_limit, 2 more }
type: "container_auto"

Automatically creates a container for this request.

file_ids: optional array of string

An optional list of uploaded files to make available to your code.

memory_limit: optional "1g" or "4g" or "16g" or "64g"

The memory limit for the container.

One of the following:
"1g"
"4g"
"16g"
"64g"
network_policy: optional ContainerNetworkPolicyDisabled { type } or ContainerNetworkPolicyAllowlist { allowed_domains, type, domain_secrets }

Network access policy for the container.

One of the following:
ContainerNetworkPolicyDisabled = object { type }
type: "disabled"

Disable outbound network access. Always disabled.

ContainerNetworkPolicyAllowlist = object { allowed_domains, type, domain_secrets }
allowed_domains: array of string

A list of allowed domains when type is allowlist.

type: "allowlist"

Allow outbound network access only to specified domains. Always allowlist.

domain_secrets: optional array of ContainerNetworkPolicyDomainSecret { domain, name, value }

Optional domain-scoped secrets for allowlisted domains.

domain: string

The domain associated with the secret.

minLength: 1
name: string

The name of the secret to inject for the domain.

minLength: 1
value: string

The secret value to inject for the domain.

maxLength: 10485760
minLength: 1
skills: optional array of SkillReference { skill_id, type, version } or InlineSkill { description, name, source, type }

An optional list of skills referenced by id or inline data.

One of the following:
SkillReference = object { skill_id, type, version }
skill_id: string

The ID of the referenced skill.

maxLength: 64
minLength: 1
type: "skill_reference"

References a skill created with the /v1/skills endpoint.

version: optional string

Optional skill version. Use a positive integer or 'latest'. Omit for default.

InlineSkill = object { description, name, source, type }
description: string

The description of the skill.

name: string

The name of the skill.

source: InlineSkillSource { data, media_type, type }

Inline skill payload.

type: "inline"

Defines an inline skill for this request.

LocalEnvironment = object { type, skills }
type: "local"

Use a local computer environment.

skills: optional array of LocalSkill { description, name, path }

An optional list of skills.

description: string

The description of the skill.

name: string

The name of the skill.

path: string

The path to the directory containing the skill.

ContainerReference = object { container_id, type }
container_id: string

The ID of the referenced container.

type: "container_reference"

References a container created with the /v1/containers endpoint.
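
A shell tool using a container_auto environment with network access disabled and one referenced skill. The skill ID is invented:

```python
# Illustrative shell tool with an automatically created container environment.
shell_tool = {
    "type": "shell",
    "environment": {
        "type": "container_auto",
        "memory_limit": "4g",                        # one of 1g, 4g, 16g, 64g
        "network_policy": {"type": "disabled"},      # no outbound network access
        "skills": [
            {
                "type": "skill_reference",
                "skill_id": "skill_abc",             # invented skill ID
                "version": "latest",
            },
        ],
    },
}
```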

Custom = object { name, type, defer_loading, 2 more }

A custom tool that processes input using a specified format. Learn more about custom tools

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

defer_loading: optional boolean

Whether this tool should be deferred and discovered via tool search.

description: optional string

Optional description of the custom tool, used to provide more context.

format: optional CustomToolInputFormat

The input format for the custom tool. Default is unconstrained text.

Namespace = object { description, name, tools, type }

Groups function/custom tools under a shared namespace.

description: string

A description of the namespace shown to the model.

minLength: 1
name: string

The namespace name used in tool calls (for example, crm).

minLength: 1
tools: array of object { name, type, defer_loading, 3 more } or object { name, type, defer_loading, 2 more }

The function/custom tools available inside this namespace.

One of the following:
Function = object { name, type, defer_loading, 3 more }
name: string
maxLength: 128
minLength: 1
type: "function"
defer_loading: optional boolean

Whether this function should be deferred and discovered via tool search.

description: optional string
parameters: optional unknown
strict: optional boolean
Custom = object { name, type, defer_loading, 2 more }

A custom tool that processes input using a specified format. Learn more about custom tools.

name: string

The name of the custom tool, used to identify it in tool calls.

type: "custom"

The type of the custom tool. Always custom.

defer_loading: optional boolean

Whether this tool should be deferred and discovered via tool search.

description: optional string

Optional description of the custom tool, used to provide more context.

format: optional CustomToolInputFormat

The input format for the custom tool. Default is unconstrained text.

type: "namespace"

The type of the tool. Always namespace.
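A namespace groups function and custom tools under one name; the sketch below shows a `crm` namespace holding one of each. All tool names and schemas are placeholders.

```python
# Hypothetical namespace tool; names and parameter schemas are placeholders.
namespace_tool = {
    "type": "namespace",
    "name": "crm",  # used as a prefix in tool calls
    "description": "Tools for querying and updating CRM records.",
    "tools": [
        {
            "type": "function",
            "name": "lookup_contact",  # 1-128 characters
            "description": "Find a contact by email address.",
            "parameters": {
                "type": "object",
                "properties": {"email": {"type": "string"}},
                "required": ["email"],
            },
        },
        {
            "type": "custom",
            "name": "free_form_note",
            "description": "Attach a free-text note to a record.",
        },
    ],
}
```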

ToolSearch = object { type, description, execution, parameters }

Hosted or BYOT tool search configuration for deferred tools.

type: "tool_search"

The type of the tool. Always tool_search.

description: optional string

Description shown to the model for a client-executed tool search tool.

execution: optional "server" or "client"

Whether tool search is executed by the server or by the client.

One of the following:
"server"
"client"
parameters: optional unknown

Parameter schema for a client-executed tool search tool.
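The two execution modes differ in which fields they need; a sketch of each, with a placeholder description and parameter schema for the client-executed case:

```python
# Hypothetical tool_search configurations.
server_search = {"type": "tool_search", "execution": "server"}

client_search = {
    "type": "tool_search",
    "execution": "client",
    "description": "Search the registry of deferred tools by keyword.",  # placeholder
    "parameters": {  # placeholder schema
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}
```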

WebSearchPreview = object { type, search_content_types, search_context_size, user_location }

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: "web_search_preview" or "web_search_preview_2025_03_11"

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

One of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_content_types: optional array of "text" or "image"
One of the following:
"text"
"image"
search_context_size: optional "low" or "medium" or "high"

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

One of the following:
"low"
"medium"
"high"
user_location: optional object { type, city, country, 2 more }

The user's location.

type: "approximate"

The type of location approximation. Always approximate.

city: optional string

Free text input for the city of the user, e.g. San Francisco.

country: optional string

The two-letter ISO country code of the user, e.g. US.

region: optional string

Free text input for the region of the user, e.g. California.

timezone: optional string

The IANA timezone of the user, e.g. America/Los_Angeles.
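Combining the fields above, a web search tool with an approximate user location might be configured as in this sketch; the location values are examples only.

```python
# Hypothetical web search tool configuration; location values are examples.
web_search = {
    "type": "web_search_preview",
    "search_context_size": "medium",   # "low" | "medium" (default) | "high"
    "search_content_types": ["text"],  # restrict results to text
    "user_location": {
        "type": "approximate",         # always "approximate"
        "city": "San Francisco",
        "region": "California",
        "country": "US",               # two-letter ISO country code
        "timezone": "America/Los_Angeles",
    },
}
```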

ApplyPatch = object { type }

Allows the assistant to create, delete, or update files using unified diffs.

type: "apply_patch"

The type of the tool. Always apply_patch.

type: "tool_search_output"

The type of the item. Always tool_search_output.

created_by: optional string

The identifier of the actor that created the item.

Reasoning = object { id, summary, type, 3 more }

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: string

The unique identifier of the reasoning content.

summary: array of SummaryTextContent { text, type }

Reasoning summary content.

text: string

A summary of the reasoning output from the model so far.

type: "summary_text"

The type of the object. Always summary_text.

type: "reasoning"

The type of the object. Always reasoning.

content: optional array of object { text, type }

Reasoning text content.

text: string

The reasoning text from the model.

type: "reasoning_text"

The type of the reasoning text. Always reasoning_text.

encrypted_content: optional string

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status: optional "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

One of the following:
"in_progress"
"completed"
"incomplete"
Compaction = object { id, encrypted_content, type, created_by }

A compaction item generated by the /v1/responses/compact API.

id: string

The unique ID of the compaction item.

encrypted_content: string

The encrypted content that was produced by compaction.

type: "compaction"

The type of the item. Always compaction.

created_by: optional string

The identifier of the actor that created the item.

CodeInterpreterCall = object { id, code, container_id, 3 more }

A tool call to run code.

id: string

The unique ID of the code interpreter tool call.

code: string

The code to run, or null if not available.

container_id: string

The ID of the container used to run the code.

outputs: array of object { logs, type } or object { type, url }

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

One of the following:
Logs = object { logs, type }

The logs output from the code interpreter.

logs: string

The logs output from the code interpreter.

type: "logs"

The type of the output. Always logs.

Image = object { type, url }

The image output from the code interpreter.

type: "image"

The type of the output. Always image.

url: string

The URL of the image output from the code interpreter.

status: "in_progress" or "completed" or "incomplete" or 2 more

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

One of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: "code_interpreter_call"

The type of the code interpreter tool call. Always code_interpreter_call.
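Since `outputs` is a union of logs and images (and may be null), a handler typically dispatches on each output's `type`. A sketch with a placeholder item:

```python
def summarize_outputs(item):
    """Collect log text and image URLs from a code_interpreter_call item."""
    logs, images = [], []
    for out in item.get("outputs") or []:  # outputs can be null
        if out["type"] == "logs":
            logs.append(out["logs"])
        elif out["type"] == "image":
            images.append(out["url"])
    return logs, images

# Hypothetical item; IDs and URL are placeholders.
call = {
    "type": "code_interpreter_call",
    "id": "ci_123",
    "code": "print('hi')",
    "container_id": "cntr_123",
    "status": "completed",
    "outputs": [
        {"type": "logs", "logs": "hi\n"},
        {"type": "image", "url": "https://example.com/plot.png"},
    ],
}
```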

LocalShellCall = object { id, action, call_id, 2 more }

A tool call to run a command on the local shell.

id: string

The unique ID of the local shell call.

action: object { command, env, type, 3 more }

Execute a shell command on the server.

command: array of string

The command to run.

env: map[string]

Environment variables to set for the command.

type: "exec"

The type of the local shell action. Always exec.

timeout_ms: optional number

Optional timeout in milliseconds for the command.

user: optional string

Optional user to run the command as.

working_directory: optional string

Optional working directory to run the command in.

call_id: string

The unique ID of the local shell tool call generated by the model.

status: "in_progress" or "completed" or "incomplete"

The status of the local shell call.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "local_shell_call"

The type of the local shell call. Always local_shell_call.

LocalShellCallOutput = object { id, output, type, status }

The output of a local shell tool call.

id: string

The unique ID of the local shell tool call generated by the model.

output: string

A JSON string of the output of the local shell tool call.

type: "local_shell_call_output"

The type of the local shell tool call output. Always local_shell_call_output.

status: optional "in_progress" or "completed" or "incomplete"

The status of the item. One of in_progress, completed, or incomplete.

One of the following:
"in_progress"
"completed"
"incomplete"
ShellCall = object { id, action, call_id, 4 more }

A tool call that executes one or more shell commands in a managed environment.

id: string

The unique ID of the shell tool call. Populated when this item is returned via API.

action: object { commands, max_output_length, timeout_ms }

The shell commands and limits that describe how to run the tool call.

commands: array of string
max_output_length: number

Optional maximum number of characters to return from each command.

timeout_ms: number

Optional timeout in milliseconds for the commands.

call_id: string

The unique ID of the shell tool call generated by the model.

environment: ResponseLocalEnvironment { type } or ResponseContainerReference { container_id, type }

The environment used to perform the shell actions: either a local environment or a container reference.

One of the following:
ResponseLocalEnvironment = object { type }

Represents the use of a local environment to perform shell actions.

type: "local"

The environment type. Always local.

ResponseContainerReference = object { container_id, type }

Represents a container created with /v1/containers.

container_id: string
type: "container_reference"

The environment type. Always container_reference.

status: "in_progress" or "completed" or "incomplete"

The status of the shell call. One of in_progress, completed, or incomplete.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call"

The type of the item. Always shell_call.

created_by: optional string

The ID of the entity that created this tool call.

ShellCallOutput = object { id, call_id, max_output_length, 4 more }

The output of a shell tool call that was emitted.

id: string

The unique ID of the shell call output. Populated when this item is returned via API.

call_id: string

The unique ID of the shell tool call generated by the model.

max_output_length: number

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

output: array of object { outcome, stderr, stdout, created_by }

An array of shell call output contents.

outcome: object { type } or object { exit_code, type }

Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.

One of the following:
Timeout = object { type }

Indicates that the shell call exceeded its configured time limit.

type: "timeout"

The outcome type. Always timeout.

Exit = object { exit_code, type }

Indicates that the shell commands finished and returned an exit code.

exit_code: number

Exit code from the shell process.

type: "exit"

The outcome type. Always exit.

stderr: string

The standard error output that was captured.

stdout: string

The standard output that was captured.

created_by: optional string

The identifier of the actor that created the item.

status: "in_progress" or "completed" or "incomplete"

The status of the shell call output. One of in_progress, completed, or incomplete.

One of the following:
"in_progress"
"completed"
"incomplete"
type: "shell_call_output"

The type of the shell call output. Always shell_call_output.

created_by: optional string

The identifier of the actor that created the item.
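Each output chunk carries its streams plus an `exit` or `timeout` outcome, and the model's `max_output_length` is passed back unchanged. A sketch of building the item; IDs are placeholders and the helper is hypothetical.

```python
def make_shell_call_output(call_id, max_output_length, results):
    """Build a shell_call_output item.

    results: list of (stdout, stderr, exit_code) tuples;
    exit_code of None means the command timed out.
    """
    chunks = []
    for stdout, stderr, exit_code in results:
        outcome = (
            {"type": "timeout"}
            if exit_code is None
            else {"type": "exit", "exit_code": exit_code}
        )
        chunks.append({"stdout": stdout, "stderr": stderr, "outcome": outcome})
    return {
        "type": "shell_call_output",
        "call_id": call_id,
        "max_output_length": max_output_length,  # echo back the model's value
        "output": chunks,
    }
```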

ApplyPatchCall = object { id, call_id, operation, 3 more }

A tool call that applies file diffs by creating, deleting, or updating files.

id: string

The unique ID of the apply patch tool call. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

operation: object { diff, path, type } or object { path, type } or object { diff, path, type }

One of the create_file, delete_file, or update_file operations applied via apply_patch.

One of the following:
CreateFile = object { diff, path, type }

Instruction describing how to create a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to create.

type: "create_file"

Create a new file with the provided diff.

DeleteFile = object { path, type }

Instruction describing how to delete a file via the apply_patch tool.

path: string

Path of the file to delete.

type: "delete_file"

Delete the specified file.

UpdateFile = object { diff, path, type }

Instruction describing how to update a file via the apply_patch tool.

diff: string

Diff to apply.

path: string

Path of the file to update.

type: "update_file"

Update an existing file with the provided diff.

status: "in_progress" or "completed"

The status of the apply patch tool call. One of in_progress or completed.

One of the following:
"in_progress"
"completed"
type: "apply_patch_call"

The type of the item. Always apply_patch_call.

created_by: optional string

The ID of the entity that created this tool call.
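Because `operation` is a three-way union, handlers usually dispatch on its `type`; note that `delete_file` carries no diff. A minimal sketch:

```python
def describe_operation(op):
    """Return a one-line summary of an apply_patch operation."""
    if op["type"] == "create_file":
        return f"create {op['path']} ({len(op['diff'])} diff chars)"
    if op["type"] == "delete_file":
        return f"delete {op['path']}"  # no diff for deletions
    if op["type"] == "update_file":
        return f"update {op['path']} ({len(op['diff'])} diff chars)"
    raise ValueError(f"unknown operation type: {op['type']}")
```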

ApplyPatchCallOutput = object { id, call_id, status, 3 more }

The output emitted by an apply patch tool call.

id: string

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

call_id: string

The unique ID of the apply patch tool call generated by the model.

status: "completed" or "failed"

The status of the apply patch tool call output. One of completed or failed.

One of the following:
"completed"
"failed"
type: "apply_patch_call_output"

The type of the item. Always apply_patch_call_output.

created_by: optional string

The ID of the entity that created this tool call output.

output: optional string

Optional textual output returned by the apply patch tool.

McpListTools = object { id, server_label, tools, 2 more }

A list of tools available on an MCP server.

id: string

The unique ID of the list.

server_label: string

The label of the MCP server.

tools: array of object { input_schema, name, annotations, description }

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations: optional unknown

Additional annotations about the tool.

description: optional string

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

error: optional string

Error message if the server could not list tools.

McpApprovalRequest = object { id, arguments, name, 2 more }

A request for human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

McpApprovalResponse = object { id, approval_request_id, approve, 2 more }

A response to an MCP approval request.

id: string

The unique ID of the approval response.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

reason: optional string

Optional reason for the decision.
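An approval response answers a prior `mcp_approval_request` by its ID. A sketch of building one; the request item and its IDs are placeholders.

```python
def approval_response(request, approve, reason=None):
    """Build an mcp_approval_response answering an mcp_approval_request item."""
    resp = {
        "type": "mcp_approval_response",
        "approval_request_id": request["id"],
        "approve": approve,
    }
    if reason is not None:
        resp["reason"] = reason
    return resp

# Hypothetical approval request; IDs, label, and arguments are placeholders.
request = {
    "type": "mcp_approval_request",
    "id": "mcpr_123",
    "server_label": "deepwiki",
    "name": "ask_question",
    "arguments": '{"q": "what is MCP?"}',
}
```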

McpCall = object { id, arguments, name, 6 more }

An invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id: optional string

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error: optional string

The error from the tool call, if any.

output: optional string

The output from the tool call.

status: optional "in_progress" or "completed" or "incomplete" or 2 more

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

One of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
CustomToolCall = object { call_id, input, name, 3 more }

A call to a custom tool created by the model.

call_id: string

An identifier used to map this custom tool call to a tool call output.

input: string

The input for the custom tool call generated by the model.

name: string

The name of the custom tool being called.

type: "custom_tool_call"

The type of the custom tool call. Always custom_tool_call.

id: optional string

The unique ID of the custom tool call in the OpenAI platform.

namespace: optional string

The namespace of the custom tool being called.

CustomToolCallOutput = object { call_id, output, type, id }

The output of a custom tool call from your code, being sent back to the model.

call_id: string

The call ID, used to map this custom tool call output to a custom tool call.

output: string or array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }

The output from the custom tool call generated by your code. Can be a string or a list of output content.

One of the following:
StringOutput = string

A string of the output of the custom tool call.

OutputContentList = array of ResponseInputText { text, type } or ResponseInputImage { detail, type, file_id, image_url } or ResponseInputFile { type, file_data, file_id, 2 more }

Text, image, or file output of the custom tool call.

One of the following:
ResponseInputText = object { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

ResponseInputImage = object { detail, type, file_id, image_url }

An image input to the model. Learn about image inputs.

detail: "low" or "high" or "auto" or "original"

The detail level of the image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

One of the following:
"low"
"high"
"auto"
"original"
type: "input_image"

The type of the input item. Always input_image.

file_id: optional string

The ID of the file to be sent to the model.

image_url: optional string

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

ResponseInputFile = object { type, file_data, file_id, 2 more }

A file input to the model.

type: "input_file"

The type of the input item. Always input_file.

file_data: optional string

The content of the file to be sent to the model.

file_id: optional string

The ID of the file to be sent to the model.

file_url: optional string

The URL of the file to be sent to the model.

filename: optional string

The name of the file to be sent to the model.

type: "custom_tool_call_output"

The type of the custom tool call output. Always custom_tool_call_output.

id: optional string

The unique ID of the custom tool call output in the OpenAI platform.
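The call and its output are paired by `call_id`, and the output can be either a plain string or a content list. A sketch showing both forms, with placeholder IDs and values:

```python
def custom_tool_output(call, result):
    """Pair a custom_tool_call with its output via call_id.

    result may be a plain string or a list of input_text / input_image /
    input_file content parts.
    """
    return {
        "type": "custom_tool_call_output",
        "call_id": call["call_id"],
        "output": result,
    }

# Hypothetical call item; IDs, name, and input are placeholders.
call = {
    "type": "custom_tool_call",
    "call_id": "ctc_1",
    "name": "run_sql",
    "input": "SELECT 1",
}

as_text = custom_tool_output(call, "1 row: (1,)")
as_parts = custom_tool_output(call, [{"type": "input_text", "text": "1 row: (1,)"}])
```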

first_id: string

The ID of the first item in the list.

has_more: boolean

Whether there are more items available.

last_id: string

The ID of the last item in the list.

object: "list"

The type of object returned, must be list.