
Get input token counts

responses.input_tokens.count(**kwargs) -> InputTokenCountResponse
POST /responses/input_tokens

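As a hedged sketch of using this endpoint from the official Python SDK, the request body can be built as a plain dictionary and passed as keyword arguments (the model ID and input text below are placeholders):

```python
# Build the request body for POST /responses/input_tokens.
# The model ID and input text are placeholders.
params = {
    "model": "gpt-4o",
    "input": "Write a haiku about autumn.",
}

# With the official Python SDK, the body maps directly onto keyword arguments:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   result = client.responses.input_tokens.count(**params)
#   print(result.input_tokens)
```

This counts the input tokens a request would consume without generating a response, which is useful for cost estimation before submitting the full request.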

Parameters
conversation: Optional[Conversation]

The conversation that this response belongs to. Items from this conversation are prepended to input_items for this response request. Input items and output items from this response are automatically added to this conversation after this response completes.

Accepts one of the following:
str

The unique ID of the conversation.

class ResponseConversationParam:

The conversation that this response belongs to.

id: str

The unique ID of the conversation.
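The two accepted shapes can be illustrated with plain values (the conversation ID is a placeholder):

```python
# Form 1: a bare conversation ID string.
conversation_as_string = "conv_123"  # placeholder ID

# Form 2: an object with an `id` field (the ResponseConversationParam shape).
conversation_as_object = {"id": "conv_123"}

# Both refer to the same conversation; its items are prepended to this
# request's input, and this response's items are appended back to it.
```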

input: Optional[Union[str, Iterable[ResponseInputItemParam]]]

Text, image, or file inputs to the model, used to generate a response.

Accepts one of the following:
str

A text input to the model, equivalent to a text input with the user role.

Iterable[ResponseInputItemParam]

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
class EasyInputMessage:

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: Union[str, ResponseInputMessageContentList]

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
str

A text input to the model.

Accepts one of the following:
class ResponseInputText:

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class ResponseInputImage:

An image input to the model. Learn about image inputs.

detail: Literal["low", "high", "auto"]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: Literal["input_image"]

The type of the input item. Always input_image.

file_id: Optional[str]

The ID of the file to be sent to the model.

image_url: Optional[str]

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

class ResponseInputFile:

A file input to the model.

type: Literal["input_file"]

The type of the input item. Always input_file.

file_data: Optional[str]

The content of the file to be sent to the model.

file_id: Optional[str]

The ID of the file to be sent to the model.

file_url: Optional[str]

The URL of the file to be sent to the model.

filename: Optional[str]

The name of the file to be sent to the model.

role: Literal["user", "assistant", "system", "developer"]

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type: Optional[Literal["message"]]

The type of the message input. Always message.
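As an illustrative sketch, a message item that combines the input_text, input_image, and input_file content parts described above can be written as a plain dictionary (the URL and file ID are placeholders):

```python
# A user message whose content mixes the input_text, input_image, and
# input_file part types. The URL and file ID below are placeholders.
message = {
    "type": "message",
    "role": "user",
    "content": [
        {"type": "input_text", "text": "Describe this image and file."},
        {
            "type": "input_image",
            "image_url": "https://example.com/cat.png",
            "detail": "auto",
        },
        {"type": "input_file", "file_id": "file_123"},
    ],
}

# A list of such items is a valid value for the `input` parameter.
input_items = [message]
```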

class Message:

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role.

content: ResponseInputMessageContentList

A list of one or many input items to the model, containing different content types.

Accepts one of the following:
class ResponseInputText:

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class ResponseInputImage:

An image input to the model. Learn about image inputs.

detail: Literal["low", "high", "auto"]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: Literal["input_image"]

The type of the input item. Always input_image.

file_id: Optional[str]

The ID of the file to be sent to the model.

image_url: Optional[str]

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

class ResponseInputFile:

A file input to the model.

type: Literal["input_file"]

The type of the input item. Always input_file.

file_data: Optional[str]

The content of the file to be sent to the model.

file_id: Optional[str]

The ID of the file to be sent to the model.

file_url: Optional[str]

The URL of the file to be sent to the model.

filename: Optional[str]

The name of the file to be sent to the model.

role: Literal["user", "system", "developer"]

The role of the message input. One of user, system, or developer.

Accepts one of the following:
"user"
"system"
"developer"
status: Optional[Literal["in_progress", "completed", "incomplete"]]

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: Optional[Literal["message"]]

The type of the message input. Always set to message.

class ResponseOutputMessage:

An output message from the model.

id: str

The unique ID of the output message.

content: List[Content]

The content of the output message.

Accepts one of the following:
class ResponseOutputText:

A text output from the model.

annotations: List[Annotation]

The annotations of the text output.

Accepts one of the following:
class AnnotationFileCitation:

A citation to a file.

file_id: str

The ID of the file.

filename: str

The filename of the file cited.

index: int

The index of the file in the list of files.

type: Literal["file_citation"]

The type of the file citation. Always file_citation.

class AnnotationURLCitation:

A citation for a web resource used to generate a model response.

end_index: int

The index of the last character of the URL citation in the message.

start_index: int

The index of the first character of the URL citation in the message.

title: str

The title of the web resource.

type: Literal["url_citation"]

The type of the URL citation. Always url_citation.

url: str

The URL of the web resource.

class AnnotationContainerFileCitation:

A citation for a container file used to generate a model response.

container_id: str

The ID of the container file.

end_index: int

The index of the last character of the container file citation in the message.

file_id: str

The ID of the file.

filename: str

The filename of the container file cited.

start_index: int

The index of the first character of the container file citation in the message.

type: Literal["container_file_citation"]

The type of the container file citation. Always container_file_citation.

class AnnotationFilePath:

A path to a file.

file_id: str

The ID of the file.

index: int

The index of the file in the list of files.

type: Literal["file_path"]

The type of the file path. Always file_path.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

logprobs: Optional[List[Logprob]]
token: str
bytes: List[int]
logprob: float
top_logprobs: List[LogprobTopLogprob]
token: str
bytes: List[int]
logprob: float
class ResponseOutputRefusal:

A refusal from the model.

refusal: str

The refusal explanation from the model.

type: Literal["refusal"]

The type of the refusal. Always refusal.

role: Literal["assistant"]

The role of the output message. Always assistant.

status: Literal["in_progress", "completed", "incomplete"]

The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: Literal["message"]

The type of the output message. Always message.

class ResponseFileSearchToolCall:

The results of a file search tool call. See the file search guide for more information.

id: str

The unique ID of the file search tool call.

queries: List[str]

The queries used to search for files.

status: Literal["in_progress", "searching", "completed", "incomplete", "failed"]

The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.

Accepts one of the following:
"in_progress"
"searching"
"completed"
"incomplete"
"failed"
type: Literal["file_search_call"]

The type of the file search tool call. Always file_search_call.

results: Optional[List[Result]]

The results of the file search tool call.

attributes: Optional[Dict[str, Union[str, float, bool]]]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

Accepts one of the following:
str
float
bool
file_id: Optional[str]

The unique ID of the file.

filename: Optional[str]

The name of the file.

score: Optional[float]

The relevance score of the file - a value between 0 and 1.

format: float
text: Optional[str]

The text that was retrieved from the file.

class ResponseComputerToolCall:

A tool call to a computer use tool. See the computer use guide for more information.

id: str

The unique ID of the computer call.

action: Action

A click action.

Accepts one of the following:
class ActionClick:

A click action.

button: Literal["left", "right", "wheel", "back", "forward"]

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

Accepts one of the following:
"left"
"right"
"wheel"
"back"
"forward"
type: Literal["click"]

Specifies the event type. For a click action, this property is always click.

x: int

The x-coordinate where the click occurred.

y: int

The y-coordinate where the click occurred.

class ActionDoubleClick:

A double click action.

type: Literal["double_click"]

Specifies the event type. For a double click action, this property is always set to double_click.

x: int

The x-coordinate where the double click occurred.

y: int

The y-coordinate where the double click occurred.

class ActionDrag:

A drag action.

path: List[ActionDragPath]

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: int

The x-coordinate.

y: int

The y-coordinate.

type: Literal["drag"]

Specifies the event type. For a drag action, this property is always set to drag.

class ActionKeypress:

A collection of keypresses the model would like to perform.

keys: List[str]

The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key.

type: Literal["keypress"]

Specifies the event type. For a keypress action, this property is always set to keypress.

class ActionMove:

A mouse move action.

type: Literal["move"]

Specifies the event type. For a move action, this property is always set to move.

x: int

The x-coordinate to move to.

y: int

The y-coordinate to move to.

class ActionScreenshot:

A screenshot action.

type: Literal["screenshot"]

Specifies the event type. For a screenshot action, this property is always set to screenshot.

class ActionScroll:

A scroll action.

scroll_x: int

The horizontal scroll distance.

scroll_y: int

The vertical scroll distance.

type: Literal["scroll"]

Specifies the event type. For a scroll action, this property is always set to scroll.

x: int

The x-coordinate where the scroll occurred.

y: int

The y-coordinate where the scroll occurred.

class ActionType:

An action to type in text.

text: str

The text to type.

type: Literal["type"]

Specifies the event type. For a type action, this property is always set to type.

class ActionWait:

A wait action.

type: Literal["wait"]

Specifies the event type. For a wait action, this property is always set to wait.

call_id: str

An identifier used when responding to the tool call with output.

pending_safety_checks: List[PendingSafetyCheck]

The pending safety checks for the computer call.

id: str

The ID of the pending safety check.

code: Optional[str]

The type of the pending safety check.

message: Optional[str]

Details about the pending safety check.

status: Literal["in_progress", "completed", "incomplete"]

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: Literal["computer_call"]

The type of the computer call. Always computer_call.

class ComputerCallOutput:

The output of a computer tool call.

call_id: str

The ID of the computer tool call that produced the output.

maxLength: 64
minLength: 1

A computer screenshot image used with the computer use tool.

type: Literal["computer_screenshot"]

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

file_id: Optional[str]

The identifier of an uploaded file that contains the screenshot.

image_url: Optional[str]

The URL of the screenshot image.

type: Literal["computer_call_output"]

The type of the computer tool call output. Always computer_call_output.

id: Optional[str]

The ID of the computer tool call output.

acknowledged_safety_checks: Optional[List[ComputerCallOutputAcknowledgedSafetyCheck]]

The safety checks reported by the API that have been acknowledged by the developer.

id: str

The ID of the pending safety check.

code: Optional[str]

The type of the pending safety check.

message: Optional[str]

Details about the pending safety check.

status: Optional[Literal["in_progress", "completed", "incomplete"]]

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
class ResponseFunctionToolCall:

A tool call to run a function. See the function calling guide for more information.

arguments: str

A JSON string of the arguments to pass to the function.

call_id: str

The unique ID of the function tool call generated by the model.

name: str

The name of the function to run.

type: Literal["function_call"]

The type of the function tool call. Always function_call.

id: Optional[str]

The unique ID of the function tool call.

status: Optional[Literal["in_progress", "completed", "incomplete"]]

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
class FunctionCallOutput:

The output of a function tool call.

call_id: str

The unique ID of the function tool call generated by the model.

maxLength: 64
minLength: 1

Text, image, or file output of the function tool call.

Accepts one of the following:
str

A JSON string of the output of the function tool call.

Accepts one of the following:
class ResponseInputTextContent:

A text input to the model.

text: str

The text input to the model.

maxLength: 10485760
type: Literal["input_text"]

The type of the input item. Always input_text.

class ResponseInputImageContent:

An image input to the model. Learn about image inputs.

type: Literal["input_image"]

The type of the input item. Always input_image.

detail: Optional[Literal["low", "high", "auto"]]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
file_id: Optional[str]

The ID of the file to be sent to the model.

image_url: Optional[str]

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

maxLength: 20971520
class ResponseInputFileContent:

A file input to the model.

type: Literal["input_file"]

The type of the input item. Always input_file.

file_data: Optional[str]

The base64-encoded data of the file to be sent to the model.

maxLength: 33554432
file_id: Optional[str]

The ID of the file to be sent to the model.

file_url: Optional[str]

The URL of the file to be sent to the model.

filename: Optional[str]

The name of the file to be sent to the model.

type: Literal["function_call_output"]

The type of the function tool call output. Always function_call_output.

id: Optional[str]

The unique ID of the function tool call output. Populated when this item is returned via API.

status: Optional[Literal["in_progress", "completed", "incomplete"]]

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
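A function_call item (generated by the model) and the matching function_call_output your code sends back can be sketched as plain dictionaries; the IDs and the function name below are placeholders, and the two items are linked by call_id:

```python
import json

# A model-generated function call. The call_id and function name
# are placeholders for illustration.
function_call = {
    "type": "function_call",
    "call_id": "call_123",
    "name": "get_weather",
    "arguments": json.dumps({"city": "Paris"}),
}

# The output item your code sends back for that call.
function_call_output = {
    "type": "function_call_output",
    "call_id": function_call["call_id"],  # must match the call it answers
    "output": json.dumps({"temp_c": 18}),
}
```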
class ResponseReasoningItem:

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.

id: str

The unique identifier of the reasoning content.

summary: List[Summary]

Reasoning summary content.

text: str

A summary of the reasoning output from the model so far.

type: Literal["summary_text"]

The type of the object. Always summary_text.

type: Literal["reasoning"]

The type of the object. Always reasoning.

content: Optional[List[Content]]

Reasoning text content.

text: str

The reasoning text from the model.

type: Literal["reasoning_text"]

The type of the reasoning text. Always reasoning_text.

encrypted_content: Optional[str]

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

status: Optional[Literal["in_progress", "completed", "incomplete"]]

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
class ResponseCompactionItemParam:

A compaction item generated by the v1/responses/compact API.

encrypted_content: str

The encrypted content of the compaction summary.

maxLength: 10485760
type: Literal["compaction"]

The type of the item. Always compaction.

id: Optional[str]

The ID of the compaction item.

class ImageGenerationCall:

An image generation request made by the model.

id: str

The unique ID of the image generation call.

result: Optional[str]

The generated image encoded in base64.

status: Literal["in_progress", "completed", "generating", "failed"]

The status of the image generation call.

Accepts one of the following:
"in_progress"
"completed"
"generating"
"failed"
type: Literal["image_generation_call"]

The type of the image generation call. Always image_generation_call.

class ResponseCodeInterpreterToolCall:

A tool call to run code.

id: str

The unique ID of the code interpreter tool call.

code: Optional[str]

The code to run, or null if not available.

container_id: str

The ID of the container used to run the code.

outputs: Optional[List[Output]]

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

Accepts one of the following:
class OutputLogs:

The logs output from the code interpreter.

logs: str

The logs output from the code interpreter.

type: Literal["logs"]

The type of the output. Always logs.

class OutputImage:

The image output from the code interpreter.

type: Literal["image"]

The type of the output. Always image.

url: str

The URL of the image output from the code interpreter.

status: Literal["in_progress", "completed", "incomplete", "interpreting", "failed"]

The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"interpreting"
"failed"
type: Literal["code_interpreter_call"]

The type of the code interpreter tool call. Always code_interpreter_call.

class LocalShellCall:

A tool call to run a command on the local shell.

id: str

The unique ID of the local shell call.

action: LocalShellCallAction

Execute a shell command on the server.

command: List[str]

The command to run.

env: Dict[str, str]

Environment variables to set for the command.

type: Literal["exec"]

The type of the local shell action. Always exec.

timeout_ms: Optional[int]

Optional timeout in milliseconds for the command.

user: Optional[str]

Optional user to run the command as.

working_directory: Optional[str]

Optional working directory to run the command in.

call_id: str

The unique ID of the local shell tool call generated by the model.

status: Literal["in_progress", "completed", "incomplete"]

The status of the local shell call.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
type: Literal["local_shell_call"]

The type of the local shell call. Always local_shell_call.

class LocalShellCallOutput:

The output of a local shell tool call.

id: str

The unique ID of the local shell tool call generated by the model.

output: str

A JSON string of the output of the local shell tool call.

type: Literal["local_shell_call_output"]

The type of the local shell tool call output. Always local_shell_call_output.

status: Optional[Literal["in_progress", "completed", "incomplete"]]

The status of the item. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
class ShellCall:

A tool representing a request to execute one or more shell commands.

action: ShellCallAction

The shell commands and limits that describe how to run the tool call.

commands: List[str]

Ordered shell commands for the execution environment to run.

max_output_length: Optional[int]

Maximum number of UTF-8 characters to capture from combined stdout and stderr output.

timeout_ms: Optional[int]

Maximum wall-clock time in milliseconds to allow the shell commands to run.

call_id: str

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1
type: Literal["shell_call"]

The type of the item. Always shell_call.

id: Optional[str]

The unique ID of the shell tool call. Populated when this item is returned via API.

status: Optional[Literal["in_progress", "completed", "incomplete"]]

The status of the shell call. One of in_progress, completed, or incomplete.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
class ShellCallOutput:

The streamed output items emitted by a shell tool call.

call_id: str

The unique ID of the shell tool call generated by the model.

maxLength: 64
minLength: 1

Captured chunks of stdout and stderr output, along with their associated outcomes.

outcome: Outcome

The exit or timeout outcome associated with this shell call.

Accepts one of the following:
class OutcomeTimeout:

Indicates that the shell call exceeded its configured time limit.

type: Literal["timeout"]

The outcome type. Always timeout.

class OutcomeExit:

Indicates that the shell commands finished and returned an exit code.

exit_code: int

The exit code returned by the shell process.

type: Literal["exit"]

The outcome type. Always exit.

stderr: str

Captured stderr output for the shell call.

maxLength: 10485760
stdout: str

Captured stdout output for the shell call.

maxLength: 10485760
type: Literal["shell_call_output"]

The type of the item. Always shell_call_output.

id: Optional[str]

The unique ID of the shell tool call output. Populated when this item is returned via API.

max_output_length: Optional[int]

The maximum number of UTF-8 characters captured for this shell call's combined output.

status: Optional[Literal["in_progress", "completed", "incomplete"]]

The status of the shell call output.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
class ApplyPatchCall:

A tool call representing a request to create, delete, or update files using diff patches.

call_id: str

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
operation: ApplyPatchCallOperation

The specific create, delete, or update instruction for the apply_patch tool call.

Accepts one of the following:
class ApplyPatchCallOperationCreateFile:

Instruction for creating a new file via the apply_patch tool.

diff: str

Unified diff content to apply when creating the file.

maxLength: 10485760
path: str

Path of the file to create relative to the workspace root.

minLength: 1
type: Literal["create_file"]

The operation type. Always create_file.

class ApplyPatchCallOperationDeleteFile:

Instruction for deleting an existing file via the apply_patch tool.

path: str

Path of the file to delete relative to the workspace root.

minLength: 1
type: Literal["delete_file"]

The operation type. Always delete_file.

class ApplyPatchCallOperationUpdateFile:

Instruction for updating an existing file via the apply_patch tool.

diff: str

Unified diff content to apply to the existing file.

maxLength: 10485760
path: str

Path of the file to update relative to the workspace root.

minLength: 1
type: Literal["update_file"]

The operation type. Always update_file.

status: Literal["in_progress", "completed"]

The status of the apply patch tool call. One of in_progress or completed.

Accepts one of the following:
"in_progress"
"completed"
type: Literal["apply_patch_call"]

The type of the item. Always apply_patch_call.

id: Optional[str]

The unique ID of the apply patch tool call. Populated when this item is returned via API.

class ApplyPatchCallOutput:

The streamed output emitted by an apply patch tool call.

call_id: str

The unique ID of the apply patch tool call generated by the model.

maxLength: 64
minLength: 1
status: Literal["completed", "failed"]

The status of the apply patch tool call output. One of completed or failed.

Accepts one of the following:
"completed"
"failed"
type: Literal["apply_patch_call_output"]

The type of the item. Always apply_patch_call_output.

id: Optional[str]

The unique ID of the apply patch tool call output. Populated when this item is returned via API.

output: Optional[str]

Optional human-readable log text from the apply patch tool (e.g., patch results or errors).

maxLength: 10485760
class McpListTools:

A list of tools available on an MCP server.

id: str

The unique ID of the list.

server_label: str

The label of the MCP server.

tools: List[McpListToolsTool]

The tools available on the server.

input_schema: object

The JSON schema describing the tool's input.

name: str

The name of the tool.

annotations: Optional[object]

Additional annotations about the tool.

description: Optional[str]

The description of the tool.

type: Literal["mcp_list_tools"]

The type of the item. Always mcp_list_tools.

error: Optional[str]

Error message if the server could not list tools.

class McpApprovalRequest:

A request for human approval of a tool invocation.

id: str

The unique ID of the approval request.

arguments: str

A JSON string of arguments for the tool.

name: str

The name of the tool to run.

server_label: str

The label of the MCP server making the request.

type: Literal["mcp_approval_request"]

The type of the item. Always mcp_approval_request.

class McpApprovalResponse:

A response to an MCP approval request.

approval_request_id: str

The ID of the approval request being answered.

approve: bool

Whether the request was approved.

type: Literal["mcp_approval_response"]

The type of the item. Always mcp_approval_response.

id: Optional[str]

The unique ID of the approval response.

reason: Optional[str]

Optional reason for the decision.
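A sketch of answering an mcp_approval_request with this item; the request ID is a placeholder taken from the approval request item's id field:

```python
# Reject a pending MCP tool invocation. The approval_request_id is a
# placeholder; in practice it comes from the mcp_approval_request item.
approval_response = {
    "type": "mcp_approval_response",
    "approval_request_id": "mcpr_123",
    "approve": False,
    "reason": "Tool would send data to an external server.",  # optional
}
```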

class McpCall:

An invocation of a tool on an MCP server.

id: str

The unique ID of the tool call.

arguments: str

A JSON string of the arguments passed to the tool.

name: str

The name of the tool that was run.

server_label: str

The label of the MCP server running the tool.

type: Literal["mcp_call"]

The type of the item. Always mcp_call.

approval_request_id: Optional[str]

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error: Optional[str]

The error from the tool call, if any.

output: Optional[str]

The output from the tool call.

status: Optional[Literal["in_progress", "completed", "incomplete", "calling", "failed"]]

The status of the tool call. One of in_progress, completed, incomplete, calling, or failed.

Accepts one of the following:
"in_progress"
"completed"
"incomplete"
"calling"
"failed"
class ResponseCustomToolCallOutput:

The output of a custom tool call from your code, being sent back to the model.

call_id: str

The call ID, used to map this custom tool call output to a custom tool call.

output: Union[str, List[OutputOutputContentList]]

The output from the custom tool call generated by your code. Can be a string or a list of output content.

Accepts one of the following:
str

A string of the output of the custom tool call.

List[OutputOutputContentList]

Text, image, or file output of the custom tool call.

Accepts one of the following:
class ResponseInputText:

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class ResponseInputImage:

An image input to the model. Learn about image inputs.

detail: Literal["low", "high", "auto"]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: Literal["input_image"]

The type of the input item. Always input_image.

file_id: Optional[str]

The ID of the file to be sent to the model.

image_url: Optional[str]

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

class ResponseInputFile:

A file input to the model.

type: Literal["input_file"]

The type of the input item. Always input_file.

file_data: Optional[str]

The content of the file to be sent to the model.

file_id: Optional[str]

The ID of the file to be sent to the model.

file_url: Optional[str]

The URL of the file to be sent to the model.

filename: Optional[str]

The name of the file to be sent to the model.

type: Literal["custom_tool_call_output"]

The type of the custom tool call output. Always custom_tool_call_output.

id: Optional[str]

The unique ID of the custom tool call output in the OpenAI platform.

class ResponseCustomToolCall:

A call to a custom tool created by the model.

call_id: str

An identifier used to map this custom tool call to a tool call output.

input: str

The input for the custom tool call generated by the model.

name: str

The name of the custom tool being called.

type: Literal["custom_tool_call"]

The type of the custom tool call. Always custom_tool_call.

id: Optional[str]

The unique ID of the custom tool call in the OpenAI platform.

class ItemReference:

An internal identifier for an item to reference.

id: str

The ID of the item to reference.

type: Optional[Literal["item_reference"]]

The type of item to reference. Always item_reference.

instructions: Optional[str]

A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.

model: Optional[str]

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

parallel_tool_calls: Optional[bool]

Whether to allow the model to run tool calls in parallel.

previous_response_id: Optional[str]

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.

reasoning: Optional[Reasoning]

gpt-5 and o-series models only. Configuration options for reasoning models.

effort: Optional[ReasoningEffort]

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
generate_summary: Optional[Literal["auto", "concise", "detailed"]]

Deprecated: use summary instead.

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

Accepts one of the following:
"auto"
"concise"
"detailed"
summary: Optional[Literal["auto", "concise", "detailed"]]

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

Accepts one of the following:
"auto"
"concise"
"detailed"
text: Optional[Text]

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
class ResponseFormatText:

Default response format. Used to generate text responses.

type: Literal["text"]

The type of response format being defined. Always text.

class ResponseFormatTextJSONSchemaConfig:

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

name: str

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Dict[str, object]

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

type: Literal["json_schema"]

The type of response format being defined. Always json_schema.

description: Optional[str]

A description of what the response format is for, used by the model to determine how to respond in the format.

strict: Optional[bool]

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.
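The JSON Schema response format above can be sketched as a payload fragment. The schema, its field names, and the format name are hypothetical examples, not part of any shipped API surface:

```python
# Hedged sketch: a text format payload enabling Structured Outputs.
# The "location" schema and its properties are made-up illustrations.
text = {
    "format": {
        "type": "json_schema",
        "name": "location",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "country": {"type": "string"},
            },
            "required": ["city", "country"],
            "additionalProperties": False,
        },
    }
}

# client.responses.input_tokens.count(model="gpt-5.1", input="...", text=text)
```

With `strict` set to true, only the subset of JSON Schema supported by Structured Outputs is accepted, which is why `additionalProperties` is pinned to false here.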

class ResponseFormatJSONObject:

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: Literal["json_object"]

The type of response format being defined. Always json_object.

verbosity: Optional[Literal["low", "medium", "high"]]

Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.

Accepts one of the following:
"low"
"medium"
"high"
tool_choice: Optional[ToolChoice]

Controls which tool the model should use, if any.

Accepts one of the following:
Literal["none", "auto", "required"]
Accepts one of the following:
"none"
"auto"
"required"
class ToolChoiceAllowed:

Constrains the tools available to the model to a pre-defined set.

mode: Literal["auto", "required"]

Constrains the tools available to the model to a pre-defined set.

auto allows the model to pick from among the allowed tools and generate a message.

required requires the model to call one or more of the allowed tools.

Accepts one of the following:
"auto"
"required"
tools: List[Dict[str, object]]

A list of tool definitions that the model should be allowed to call.

For the Responses API, the list of tool definitions might look like:

[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]
type: Literal["allowed_tools"]

Allowed tool configuration type. Always allowed_tools.
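Putting the pieces together, an allowed-tools constraint can be sketched as a `tool_choice` payload, reusing the tool definitions shown in the example list above:

```python
# Hedged sketch: constrain the model to a pre-defined set of tools.
# "get_weather" and "deepwiki" come from the documentation's own example.
tool_choice = {
    "type": "allowed_tools",
    "mode": "auto",  # or "required" to force a call to one of these tools
    "tools": [
        {"type": "function", "name": "get_weather"},
        {"type": "mcp", "server_label": "deepwiki"},
        {"type": "image_generation"},
    ],
}
```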

class ToolChoiceTypes:

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.

type: Literal["file_search", "web_search_preview", "computer_use_preview", 3 more]

The type of hosted tool the model should to use. Learn more about built-in tools.

Allowed values are:

  • file_search
  • web_search_preview
  • computer_use_preview
  • code_interpreter
  • image_generation
Accepts one of the following:
"file_search"
"web_search_preview"
"computer_use_preview"
"web_search_preview_2025_03_11"
"image_generation"
"code_interpreter"
class ToolChoiceFunction:

Use this option to force the model to call a specific function.

name: str

The name of the function to call.

type: Literal["function"]

For function calling, the type is always function.

class ToolChoiceMcp:

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: str

The label of the MCP server to use.

type: Literal["mcp"]

For MCP tools, the type is always mcp.

name: Optional[str]

The name of the tool to call on the server.

class ToolChoiceCustom:

Use this option to force the model to call a specific custom tool.

name: str

The name of the custom tool to call.

type: Literal["custom"]

For custom tool calling, the type is always custom.

class ToolChoiceApplyPatch:

Forces the model to call the apply_patch tool when executing a tool call.

type: Literal["apply_patch"]

The tool to call. Always apply_patch.

class ToolChoiceShell:

Forces the model to call the shell tool when a tool call is required.

type: Literal["shell"]

The tool to call. Always shell.

tools: Optional[Iterable[ToolParam]]

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

Accepts one of the following:
class FunctionTool:

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: str

The name of the function to call.

parameters: Optional[Dict[str, object]]

A JSON schema object describing the parameters of the function.

strict: Optional[bool]

Whether to enforce strict parameter validation. Default true.

type: Literal["function"]

The type of the function tool. Always function.

description: Optional[str]

A description of the function. Used by the model to determine whether or not to call the function.
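A function tool definition can be sketched as follows. The `get_weather` name, its description, and its parameter schema are hypothetical examples:

```python
# Hedged sketch: a FunctionTool entry for the tools array.
# The function name and schema are illustrative only.
weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "strict": True,  # enforce strict parameter validation (the default)
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
}

# client.responses.input_tokens.count(model="gpt-5.1", input="...", tools=[weather_tool])
```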

class FileSearchTool:

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: Literal["file_search"]

The type of the file search tool. Always file_search.

vector_store_ids: List[str]

The IDs of the vector stores to search.

filters: Optional[Filters]

A filter to apply.

Accepts one of the following:
class ComparisonFilter:

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: str

The key to compare against the value.

type: Literal["eq", "ne", "gt", 3 more]

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: Union[str, float, bool, List[Union[str, float]]]

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
str
float
bool
List[Union[str, float]]
Accepts one of the following:
str
float
class CompoundFilter:

Combine multiple filters using and or or.

filters: List[Filter]

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

Accepts one of the following:
class ComparisonFilter:

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: str

The key to compare against the value.

type: Literal["eq", "ne", "gt", 3 more]

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: Union[str, float, bool, List[Union[str, float]]]

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
str
float
bool
List[Union[str, float]]
Accepts one of the following:
str
float
object
type: Literal["and", "or"]

Type of operation: and or or.

Accepts one of the following:
"and"
"or"
max_num_results: Optional[int]

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options: Optional[RankingOptions]

Ranking options for search.

ranker: Optional[Literal["auto", "default-2024-11-15"]]

The ranker to use for the file search.

Accepts one of the following:
"auto"
"default-2024-11-15"
score_threshold: Optional[float]

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.
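The file search options above, including a compound attribute filter, can be sketched as one tool entry. The vector store ID, attribute keys, and values are hypothetical placeholders:

```python
# Hedged sketch: a FileSearchTool entry combining filters and ranking options.
# The vector store ID and the "category"/"year" attributes are made up.
file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_example123"],
    "max_num_results": 10,  # must be between 1 and 50 inclusive
    "ranking_options": {"ranker": "auto", "score_threshold": 0.5},
    "filters": {
        "type": "and",  # CompoundFilter combining two ComparisonFilters
        "filters": [
            {"type": "eq", "key": "category", "value": "blog"},
            {"type": "gte", "key": "year", "value": 2023},
        ],
    },
}
```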

class ComputerTool:

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: int

The height of the computer display.

display_width: int

The width of the computer display.

environment: Literal["windows", "mac", "linux", 2 more]

The type of computer environment to control.

Accepts one of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: Literal["computer_use_preview"]

The type of the computer use tool. Always computer_use_preview.

class WebSearchTool:

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: Literal["web_search", "web_search_2025_08_26"]

The type of the web search tool. One of web_search or web_search_2025_08_26.

Accepts one of the following:
"web_search"
"web_search_2025_08_26"
filters: Optional[Filters]

Filters for the search.

allowed_domains: Optional[List[str]]

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size: Optional[Literal["low", "medium", "high"]]

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location: Optional[UserLocation]

The approximate location of the user.

city: Optional[str]

Free text input for the city of the user, e.g. San Francisco.

country: Optional[str]

The two-letter ISO country code of the user, e.g. US.

region: Optional[str]

Free text input for the region of the user, e.g. California.

timezone: Optional[str]

The IANA timezone of the user, e.g. America/Los_Angeles.

type: Optional[Literal["approximate"]]

The type of location approximation. Always approximate.
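The web search options above can be sketched as a single tool entry; the location values echo the documentation's own examples, and the allowed domain comes from the filters example above:

```python
# Hedged sketch: a WebSearchTool entry with domain filters and a user location.
web_search_tool = {
    "type": "web_search",
    "filters": {"allowed_domains": ["pubmed.ncbi.nlm.nih.gov"]},
    "search_context_size": "medium",  # the default
    "user_location": {
        "type": "approximate",
        "city": "San Francisco",
        "region": "California",
        "country": "US",
        "timezone": "America/Los_Angeles",
    },
}
```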

class Mcp:

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: str

A label for this MCP server, used to identify it in tool calls.

type: Literal["mcp"]

The type of the MCP tool. Always mcp.

allowed_tools: Optional[McpAllowedTools]

List of allowed tool names or a filter object.

Accepts one of the following:
List[str]

A string array of allowed tool names

class McpAllowedToolsMcpToolFilter:

A filter object to specify which tools are allowed.

read_only: Optional[bool]

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: Optional[List[str]]

List of allowed tool names.

authorization: Optional[str]

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id: Optional[Literal["connector_dropbox", "connector_gmail", "connector_googlecalendar", 5 more]]

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
headers: Optional[Dict[str, str]]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval: Optional[McpRequireApproval]

Specify which of the MCP server's tools require approval.

Accepts one of the following:
class McpRequireApprovalMcpToolApprovalFilter:

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always: Optional[McpRequireApprovalMcpToolApprovalFilterAlways]

A filter object to specify which tools are allowed.

read_only: Optional[bool]

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: Optional[List[str]]

List of allowed tool names.

never: Optional[McpRequireApprovalMcpToolApprovalFilterNever]

A filter object to specify which tools are allowed.

read_only: Optional[bool]

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: Optional[List[str]]

List of allowed tool names.

Literal["always", "never"]

Specify a single approval policy for all tools. One of always or never. When set to always, all tools will require approval. When set to never, all tools will not require approval.

Accepts one of the following:
"always"
"never"
server_description: Optional[str]

Optional description of the MCP server, used to provide more context.

server_url: Optional[str]

The URL for the MCP server. One of server_url or connector_id must be provided.
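An MCP tool entry using a custom server URL can be sketched as below. The server label echoes the documentation's earlier example; the URL is a hypothetical placeholder (remember that exactly one of server_url or connector_id should be supplied):

```python
# Hedged sketch: an Mcp tool entry. The URL is a placeholder, not a real server.
mcp_tool = {
    "type": "mcp",
    "server_label": "deepwiki",
    "server_url": "https://example.com/mcp",
    "allowed_tools": {"read_only": True},  # filter object form
    "require_approval": "never",           # single policy for all tools
}
```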

class CodeInterpreter:

A tool that runs Python code to help generate a response to a prompt.

container: CodeInterpreterContainer

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

Accepts one of the following:
str

The container ID.

class CodeInterpreterContainerCodeInterpreterToolAuto:

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: Literal["auto"]

Always auto.

file_ids: Optional[List[str]]

An optional list of uploaded files to make available to your code.

memory_limit: Optional[Literal["1g", "4g", "16g", "64g"]]

The memory limit for the code interpreter container.

Accepts one of the following:
"1g"
"4g"
"16g"
"64g"
type: Literal["code_interpreter"]

The type of the code interpreter tool. Always code_interpreter.
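The container configuration above can be sketched in its object form; the file ID is a hypothetical placeholder for an uploaded file:

```python
# Hedged sketch: a CodeInterpreter tool entry with an auto container.
# The file ID is illustrative; use IDs of files you have actually uploaded.
code_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],
        "memory_limit": "4g",  # one of "1g", "4g", "16g", "64g"
    },
}
```

Passing a bare string instead of this object selects an existing container by its ID.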

class ImageGeneration:

A tool that generates images using the GPT image models.

type: Literal["image_generation"]

The type of the image generation tool. Always image_generation.

action: Optional[Literal["generate", "edit", "auto"]]

Whether to generate a new image or edit an existing image. Default: auto.

Accepts one of the following:
"generate"
"edit"
"auto"
background: Optional[Literal["transparent", "opaque", "auto"]]

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

Accepts one of the following:
"transparent"
"opaque"
"auto"
input_fidelity: Optional[Literal["high", "low"]]

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is supported for gpt-image-1 and for gpt-image-1.5 and later models, but not for gpt-image-1-mini. Supports high and low. Defaults to low.

Accepts one of the following:
"high"
"low"
input_image_mask: Optional[ImageGenerationInputImageMask]

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id: Optional[str]

File ID for the mask image.

image_url: Optional[str]

Base64-encoded mask image.

model: Optional[Union[str, Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"], null]]

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
str
Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"]

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation: Optional[Literal["auto", "low"]]

Moderation level for the generated image. Default: auto.

Accepts one of the following:
"auto"
"low"
output_compression: Optional[int]

Compression level for the output image. Default: 100.

minimum: 0
maximum: 100
output_format: Optional[Literal["png", "webp", "jpeg"]]

The output format of the generated image. One of png, webp, or jpeg. Default: png.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_images: Optional[int]

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum: 0
maximum: 3
quality: Optional[Literal["low", "medium", "high", "auto"]]

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size: Optional[Literal["1024x1024", "1024x1536", "1536x1024", "auto"]]

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
class LocalShell:

A tool that allows the model to execute shell commands in a local environment.

type: Literal["local_shell"]

The type of the local shell tool. Always local_shell.

class FunctionShellTool:

A tool that allows the model to execute shell commands.

type: Literal["shell"]

The type of the shell tool. Always shell.

class CustomTool:

A custom tool that processes input using a specified format. Learn more about custom tools.

name: str

The name of the custom tool, used to identify it in tool calls.

type: Literal["custom"]

The type of the custom tool. Always custom.

description: Optional[str]

Optional description of the custom tool, used to provide more context.

format: Optional[CustomToolInputFormat]

The input format for the custom tool. Default is unconstrained text.

Accepts one of the following:
class Text:

Unconstrained free-form text.

type: Literal["text"]

Unconstrained text format. Always text.

class Grammar:

A grammar defined by the user.

definition: str

The grammar definition.

syntax: Literal["lark", "regex"]

The syntax of the grammar definition. One of lark or regex.

Accepts one of the following:
"lark"
"regex"
type: Literal["grammar"]

Grammar format. Always grammar.
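A custom tool constrained by a grammar can be sketched as below. The tool name, description, and regex are hypothetical examples:

```python
# Hedged sketch: a CustomTool entry whose input is constrained by a regex grammar.
# The "set_color" tool and its pattern are illustrative only.
custom_tool = {
    "type": "custom",
    "name": "set_color",
    "description": "Set a display color by hex code.",
    "format": {
        "type": "grammar",
        "syntax": "regex",
        "definition": r"#[0-9a-fA-F]{6}",
    },
}
```

Omitting `format` leaves the tool's input as unconstrained free-form text.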

class WebSearchPreviewTool:

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: Literal["web_search_preview", "web_search_preview_2025_03_11"]

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

Accepts one of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_context_size: Optional[Literal["low", "medium", "high"]]

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location: Optional[UserLocation]

The user's location.

type: Literal["approximate"]

The type of location approximation. Always approximate.

city: Optional[str]

Free text input for the city of the user, e.g. San Francisco.

country: Optional[str]

The two-letter ISO country code of the user, e.g. US.

region: Optional[str]

Free text input for the region of the user, e.g. California.

timezone: Optional[str]

The IANA timezone of the user, e.g. America/Los_Angeles.

class ApplyPatchTool:

Allows the assistant to create, delete, or update files using unified diffs.

type: Literal["apply_patch"]

The type of the tool. Always apply_patch.

truncation: Optional[Literal["auto", "disabled"]]

The truncation strategy to use for the model response.

  • auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
  • disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.

Accepts one of the following:
"auto"
"disabled"
Returns
class InputTokenCountResponse:

input_tokens: int

The number of input tokens counted for the given request parameters.

object: Literal["response.input_tokens"]

The object type. Always response.input_tokens.

Get input token counts

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
)
response = client.responses.input_tokens.count()
print(response.input_tokens)
Returns Examples
{
  "input_tokens": 123,
  "object": "response.input_tokens"
}