Get input token counts
Returns input token counts of the request.
Returns an object with object set to response.input_tokens and an input_tokens count.
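A minimal request body for this endpoint might look like the following. The payload mirrors the parameters documented below; the exact SDK method that accepts it is not shown here, and the response values are illustrative only.

```python
# Hypothetical request body for the input token count endpoint; the
# parameter names mirror the request parameters documented below.
count_request = {
    "model": "gpt-4o",
    "input": "Write a haiku about the sea.",
}

# Per the description above, the response is an object with `object` set
# to "response.input_tokens" plus an `input_tokens` count (the count
# shown here is illustrative, not a real tokenization result).
example_response = {"object": "response.input_tokens", "input_tokens": 12}
```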
Parameters
The conversation that this response belongs to. Items from this conversation are prepended to input_items for this response request.
Input items and output items from this response are automatically added to this conversation after this response completes.
Text, image, or file inputs to the model, used to generate a response.
Iterable[ResponseInputItemParam]
A list of one or more input items to the model, containing different content types.
class EasyInputMessage: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
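The instruction hierarchy described above can be sketched as a plain `input` list of message dicts (a simplified shape; additional fields may apply):

```python
# Sketch of an `input` list using EasyInputMessage-style dicts.
# Developer/system instructions take precedence over user instructions;
# assistant messages represent prior model output.
input_items = [
    {"role": "developer", "content": "Answer in one sentence."},
    {"role": "user", "content": "What is a mutex?"},
    {"role": "assistant", "content": "A mutex is a lock that serializes access."},
    {"role": "user", "content": "Give an example use case."},
]

roles = [m["role"] for m in input_items]
```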
Text, image, or audio input to the model, used to generate a response.
Can also contain previous assistant responses.
List[ResponseInputContent]
class ResponseInputImage: …An image input to the model. Learn about image inputs.
role: Literal["user", "assistant", "system", "developer"]
The role of the message input. One of user, assistant, system, or developer.
phase: Optional[Literal["commentary", "final_answer"]]
Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages; dropping it can degrade performance. Not used for user messages.
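The phase-preservation rule can be sketched like this (message shapes simplified; the content strings are illustrative):

```python
# Sketch: when building a follow-up request, keep the `phase` label on
# assistant messages carried over from the previous turn, since
# dropping it can degrade performance on models like gpt-5.3-codex.
previous_turn = [
    {"role": "assistant", "content": "Let me check the file...", "phase": "commentary"},
    {"role": "assistant", "content": "The bug is in the loop bound.", "phase": "final_answer"},
]

follow_up_input = previous_turn + [
    {"role": "user", "content": "Can you fix it?"},  # user messages carry no phase
]
```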
class Message: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role.
A list of one or more input items to the model, containing different content types.
class ResponseInputImage: …An image input to the model. Learn about image inputs.
role: Literal["user", "system", "developer"]
The role of the message input. One of user, system, or developer.
class ResponseOutputMessage: …An output message from the model.
content: List[Content]
The content of the output message.
class ResponseOutputText: …A text output from the model.
annotations: List[Annotation]
The annotations of the text output.
status: Literal["in_progress", "completed", "incomplete"]
The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.
phase: Optional[Literal["commentary", "final_answer"]]
Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages; dropping it can degrade performance. Not used for user messages.
class ResponseFileSearchToolCall: …The results of a file search tool call. See the
file search guide for more information.
status: Literal["in_progress", "searching", "completed", 2 more]
The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.
results: Optional[List[Result]]
The results of the file search tool call.
attributes: Optional[Dict[str, Union[str, float, bool]]]
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
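The documented constraints on `attributes` can be checked locally before sending a request; a minimal validator might look like this:

```python
def validate_attributes(attributes):
    """Check the documented constraints for the `attributes` field:
    at most 16 pairs, keys at most 64 characters, string values at most
    512 characters, and values limited to strings, booleans, or numbers."""
    if len(attributes) > 16:
        return False
    for key, value in attributes.items():
        if len(key) > 64:
            return False
        if isinstance(value, str):
            if len(value) > 512:
                return False
        elif not isinstance(value, (bool, int, float)):
            return False
    return True
```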
class ResponseComputerToolCall: …A tool call to a computer use tool. See the
computer use guide for more information.
status: Literal["in_progress", "completed", "incomplete"]
The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.
action: Optional[Action]
The computer action to perform, such as a click, double click, drag, or scroll.
class ActionClick: …A click action.
class ActionDoubleClick: …A double click action.
class ActionDrag: …A drag action.
class ActionScroll: …A scroll action.
actions: Optional[ComputerActionList]
Flattened batched actions for computer_use. Each action includes a type discriminator and action-specific fields.
class Click: …A click action.
class DoubleClick: …A double click action.
class Drag: …A drag action.
class Scroll: …A scroll action.
class ComputerCallOutput: …The output of a computer tool call.
The type of the computer tool call output. Always computer_call_output.
class ResponseFunctionWebSearch: …The results of a web search tool call. See the
web search guide for more information.
action: Action
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find_in_page).
class ResponseFunctionToolCall: …A tool call to run a function. See the
function calling guide for more information.
class FunctionCallOutput: …The output of a function tool call.
Text, image, or file output of the function tool call.
class ResponseInputImageContent: …An image input to the model. Learn about image inputs.
The type of the function tool call output. Always function_call_output.
class ToolSearchCall: …
class ResponseToolSearchOutputItemParam: …
The loaded tool definitions returned by the tool search output.
class FunctionTool: …Defines a function in your own code the model can choose to call. Learn more about function calling.
class FileSearchTool: …A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
filters: Optional[Filters]
A filter to apply.
class ComparisonFilter: …A filter used to compare a specified attribute key to a given value using a defined comparison operation.
class CompoundFilter: …Combine multiple filters using and or or.
filters: List[Filter]
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
class ComparisonFilter: …A filter used to compare a specified attribute key to a given value using a defined comparison operation.
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: Optional[RankingOptions]
Ranking options for search.
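Putting the file search fields together, a configuration with a compound filter might look like this (the vector store ID and attribute keys "region" and "year" are placeholders, not part of the API):

```python
# Sketch of a file_search tool combining a CompoundFilter of two
# ComparisonFilters with a result limit.
file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_example"],  # placeholder vector store ID
    "max_num_results": 10,               # must be between 1 and 50 inclusive
    "filters": {
        "type": "and",                   # CompoundFilter: combine with "and" or "or"
        "filters": [
            {"type": "eq", "key": "region", "value": "emea"},
            {"type": "gte", "key": "year", "value": 2023},
        ],
    },
}
```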
class ComputerTool: …A tool that controls a virtual computer. Learn more about the computer tool.
class ComputerUsePreviewTool: …A tool that controls a virtual computer. Learn more about the computer tool.
class WebSearchTool: …Search the Internet for sources related to the prompt. Learn more about the
web search tool.
type: Literal["web_search", "web_search_2025_08_26"]
The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: Optional[Literal["low", "medium", "high"]]
High-level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: Optional[UserLocation]
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
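A web_search tool entry using the fields above might be assembled like this (values are illustrative):

```python
# Sketch of a web_search tool configuration.
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",       # low | medium | high (default: medium)
    "user_location": {
        "type": "approximate",
        "country": "US",                   # two-letter ISO country code
        "timezone": "America/Los_Angeles", # IANA timezone
    },
}
```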
class Mcp: …Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
allowed_tools: Optional[McpAllowedTools]
List of allowed tool names or a filter object.
class McpAllowedToolsMcpToolFilter: …A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: Optional[Literal["connector_dropbox", "connector_gmail", "connector_googlecalendar", 5 more]]
Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
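An MCP tool entry that uses a service connector instead of a custom server URL might look like this (the token and tool name are placeholders; your application must run the OAuth flow itself):

```python
# Sketch of an MCP tool using a service connector. Exactly one of
# server_url / connector_id should be provided.
mcp_tool = {
    "type": "mcp",
    "server_label": "gmail",                  # your label for this server
    "connector_id": "connector_gmail",        # one of the supported connector IDs
    "authorization": "<oauth-access-token>",  # placeholder; obtained via your OAuth flow
    "allowed_tools": ["search_emails"],       # hypothetical tool name
}
```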
require_approval: Optional[McpRequireApproval]
Specify which of the MCP server's tools require approval.
class McpRequireApprovalMcpToolApprovalFilter: …Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: Optional[McpRequireApprovalMcpToolApprovalFilterAlways]
A filter object specifying which tools always require approval.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
never: Optional[McpRequireApprovalMcpToolApprovalFilterNever]
A filter object specifying which tools never require approval.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
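The two documented shapes of `require_approval` (a blanket string, or per-policy filter objects) can be sketched as follows; the server URL and tool names are placeholders:

```python
# Blanket policy: every tool call requires approval.
require_all = {
    "type": "mcp",
    "server_label": "deploy",
    "server_url": "https://example.com/mcp",  # placeholder URL
    "require_approval": "always",
}

# Per-tool policy: a filter object per approval setting.
require_some = {
    "type": "mcp",
    "server_label": "deploy",
    "server_url": "https://example.com/mcp",  # placeholder URL
    "require_approval": {
        "always": {"tool_names": ["delete_repo"]},  # hypothetical tool names
        "never": {"tool_names": ["list_repos"]},
    },
}
```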
class CodeInterpreter: …A tool that runs Python code to help generate a response to a prompt.
container: CodeInterpreterContainer
The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.
class CodeInterpreterContainerCodeInterpreterToolAuto: …Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
memory_limit: Optional[Literal["1g", "4g", "16g", "64g"]]
The memory limit for the code interpreter container.
network_policy: Optional[CodeInterpreterContainerCodeInterpreterToolAutoNetworkPolicy]
Network access policy for the container.
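A code_interpreter tool using the auto-container form with the fields above might look like this (the file ID is a placeholder):

```python
# Sketch of a code_interpreter tool with an auto container, uploaded
# file IDs, and an optional memory limit.
code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],  # placeholder uploaded file ID
        "memory_limit": "4g",         # one of "1g", "4g", "16g", "64g"
    },
}
```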
class ImageGeneration: …A tool that generates images using the GPT image models.
action: Optional[Literal["generate", "edit", "auto"]]
Whether to generate a new image or edit an existing image. Default: auto.
background: Optional[Literal["transparent", "opaque", "auto"]]
Background type for the generated image. One of transparent, opaque, or auto. Default: auto.
input_fidelity: Optional[Literal["high", "low"]]
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: Optional[ImageGenerationInputImageMask]
Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).
model: Optional[Union[str, Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"], null]]
The image generation model to use. Default: gpt-image-1.
moderation: Optional[Literal["auto", "low"]]
Moderation level for the generated image. Default: auto.
Compression level for the output image. Default: 100.
output_format: Optional[Literal["png", "webp", "jpeg"]]
The output format of the generated image. One of png, webp, or jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
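Combining the image generation fields above into one tool entry might look like this (values are illustrative):

```python
# Sketch of an image_generation tool configuration.
image_tool = {
    "type": "image_generation",
    "model": "gpt-image-1",
    "action": "auto",              # generate | edit | auto (default: auto)
    "background": "transparent",   # transparent | opaque | auto (default: auto)
    "input_fidelity": "high",      # high | low (unsupported on gpt-image-1-mini)
    "output_format": "png",        # png | webp | jpeg (default: png)
    "output_compression": 100,     # default: 100
    "partial_images": 2,           # 0 (default) to 3, streaming mode only
}
```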
class FunctionShellTool: …A tool that allows the model to execute shell commands.
environment: Optional[Environment]
class ContainerAuto: …
network_policy: Optional[NetworkPolicy]
Network access policy for the container.
class CustomTool: …A custom tool that processes input using a specified format. Learn more about custom tools.
class NamespaceTool: …Groups function/custom tools under a shared namespace.
tools: List[Tool]
The function/custom tools available inside this namespace.
class CustomTool: …A custom tool that processes input using a specified format. Learn more about custom tools.
class ToolSearchTool: …Hosted or BYOT tool search configuration for deferred tools.
class WebSearchPreviewTool: …This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: Literal["web_search_preview", "web_search_preview_2025_03_11"]
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: Optional[Literal["low", "medium", "high"]]
High-level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: Optional[UserLocation]
The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
The unique ID of the tool search call generated by the model.
class ResponseReasoningItem: …A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
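When managing context manually, the output items from the previous response (including reasoning items) are carried into the next request's input; a simplified sketch (item shapes abbreviated, IDs are placeholders):

```python
# Sketch: feed the previous response's output items, including reasoning
# items, back into the next request's input when not using
# `conversation` or `previous_response_id`.
previous_output = [
    {"type": "reasoning", "id": "rs_example", "summary": []},  # placeholder reasoning item
    {"type": "message", "role": "assistant", "content": "Done."},
]

next_input = previous_output + [
    {"type": "message", "role": "user", "content": "Now add tests."},
]
```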
class ResponseCompactionItemParam: …A compaction item generated by the v1/responses/compact API.
class ImageGenerationCall: …An image generation request made by the model.
class ResponseCodeInterpreterToolCall: …A tool call to run code.
outputs: Optional[List[Output]]
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
class LocalShellCall: …A tool call to run a command on the local shell.
class LocalShellCallOutput: …The output of a local shell tool call.
class ShellCall: …A tool call representing a request to execute one or more shell commands.
The unique ID of the shell tool call. Populated when this item is returned via API.
class ShellCallOutput: …The streamed output items emitted by a shell tool call.
Captured chunks of stdout and stderr output, along with their associated outcomes.
The unique ID of the shell tool call output. Populated when this item is returned via API.
class ApplyPatchCall: …A tool call representing a request to create, delete, or update files using diff patches.
The unique ID of the apply patch tool call generated by the model.
operation: ApplyPatchCallOperation
The specific create, delete, or update instruction for the apply_patch tool call.
class ApplyPatchCallOperationCreateFile: …Instruction for creating a new file via the apply_patch tool.
class ApplyPatchCallOperationDeleteFile: …Instruction for deleting an existing file via the apply_patch tool.
class ApplyPatchCallOutput: …The streamed output emitted by an apply patch tool call.
The unique ID of the apply patch tool call generated by the model.
status: Literal["completed", "failed"]
The status of the apply patch tool call output. One of completed or failed.
class McpListTools: …A list of tools available on an MCP server.
class McpCall: …An invocation of a tool on an MCP server.
class ResponseCustomToolCallOutput: …The output of a custom tool call from your code, being sent back to the model.
output: Union[str, List[OutputOutputContentList]]
The output from the custom tool call generated by your code. Can be a string or a list of output content.
List[OutputOutputContentList]
Text, image, or file output of the custom tool call.
class ResponseInputImage: …An image input to the model. Learn about image inputs.
A system (or developer) message inserted into the model's context.
When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.
The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.
reasoning: Optional[Reasoning]
gpt-5 and o-series models only. Configuration options for reasoning models.
effort: Optional[ReasoningEffort]
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
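A `reasoning` configuration using these fields might look like the following; note that which effort values are valid depends on the model, per the rules above:

```python
# Sketch of the `reasoning` request parameter.
reasoning = {
    "effort": "low",    # none | minimal | low | medium | high | xhigh (model-dependent)
    "summary": "auto",  # auto | concise | detailed
}

# Per the documentation above, gpt-5.1 supports this subset and
# defaults to "none"; earlier models default to "medium".
GPT_5_1_EFFORTS = {"none", "low", "medium", "high"}
```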
Deprecated
generate_summary: Optional[Literal["auto", "concise", "detailed"]]
Deprecated: use summary instead.
A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.
summary: Optional[Literal["auto", "concise", "detailed"]]
A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed. concise is supported for computer-use-preview models and all reasoning models after gpt-5.
Configuration options for a text response from the model. Can be plain
text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
class ResponseFormatTextJSONSchemaConfig: …JSON Schema response format. Used to generate structured JSON responses.
Learn more about Structured Outputs.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
A description of what the response format is for, used by the model to determine how to respond in the format.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
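A Structured Outputs configuration using the fields above might look like this; the schema itself is illustrative:

```python
# Sketch of a `text.format` configuration enabling Structured Outputs
# with strict schema adherence. The "ticket" schema is a placeholder.
text_config = {
    "format": {
        "type": "json_schema",
        "name": "ticket",   # a-z, A-Z, 0-9, underscores, dashes; max length 64
        "strict": True,     # always follow the exact schema (JSON Schema subset only)
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "priority": {"type": "string", "enum": ["low", "high"]},
            },
            "required": ["title", "priority"],
            "additionalProperties": False,
        },
    },
}
```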
Controls which tool the model should use, if any.
class ToolChoiceAllowed: …Constrains the tools available to the model to a pre-defined set.
mode: Literal["auto", "required"]
Constrains the tools available to the model to a pre-defined set. auto allows the model to pick from among the allowed tools and generate a message. required requires the model to call one or more of the allowed tools.
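The allowed-tools form of `tool_choice` can be sketched like this; the function names are illustrative and must also appear in the request's `tools` array:

```python
# Sketch of a ToolChoiceAllowed configuration restricting the model to
# a subset of the tools passed in `tools`.
tool_choice = {
    "type": "allowed_tools",
    "mode": "required",  # "auto" may also produce a plain message;
                         # "required" forces a call to an allowed tool
    "tools": [
        {"type": "function", "name": "get_weather"},  # hypothetical function names
        {"type": "function", "name": "get_time"},
    ],
}
```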
class ToolChoiceTypes: …Indicates that the model should use a built-in tool to generate a response.
Learn more about built-in tools.
type: Literal["file_search", "web_search_preview", "computer", 5 more]
The type of hosted tool the model should use. Learn more about built-in tools.
Allowed values are:
- file_search
- web_search_preview
- computer
- computer_use_preview
- computer_use
- code_interpreter
- image_generation
class ToolChoiceMcp: …Use this option to force the model to call a specific tool on a remote MCP server.
An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.
class FunctionTool: …Defines a function in your own code the model can choose to call. Learn more about function calling.
class FileSearchTool: …A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
filters: Optional[Filters]A filter to apply.
class ComparisonFilter: …A filter used to compare a specified attribute key to a given value using a defined comparison operation.
class CompoundFilter: …Combine multiple filters using and or or.
filters: List[Filter]Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
class ComparisonFilter: …A filter used to compare a specified attribute key to a given value using a defined comparison operation.
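The two filter shapes above compose; as a sketch, a compound filter matching PDFs from 2023 onward might look like this (the attribute keys and comparison operators shown are illustrative):

```python
# Illustrative file-search filter: (type == "pdf") AND (year >= 2023).
# The attribute keys ("type", "year") are examples and must match the
# attributes set on your uploaded files; nested CompoundFilters are
# also allowed inside the filters array.
filters = {
    "type": "and",
    "filters": [
        {"type": "eq", "key": "type", "value": "pdf"},
        {"type": "gte", "key": "year", "value": 2023},
    ],
}
```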
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: Optional[RankingOptions]Ranking options for search.
class ComputerTool: …A tool that controls a virtual computer. Learn more about the computer tool.
class ComputerUsePreviewTool: …A tool that controls a virtual computer. Learn more about the computer tool.
class WebSearchTool: …Search the Internet for sources related to the prompt. Learn more about the
web search tool.
type: Literal["web_search", "web_search_2025_08_26"]The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: Optional[Literal["low", "medium", "high"]]High-level guidance for the amount of context window space to use for the search. One of low, medium, or high. Defaults to medium.
user_location: Optional[UserLocation]The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
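Putting the fields above together, a web search tool entry with an approximate user location might be sketched as follows (all location values are illustrative and optional):

```python
# Illustrative web_search tool configuration. The location fields
# are examples; every field besides "type" is optional.
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",  # low | medium | high
    "user_location": {
        "type": "approximate",
        "country": "US",                    # two-letter ISO country code
        "timezone": "America/Los_Angeles",  # IANA timezone
    },
}
```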
class Mcp: …Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
allowed_tools: Optional[McpAllowedTools]List of allowed tool names or a filter object.
class McpAllowedToolsMcpToolFilter: …A filter object to specify which tools are allowed.
Indicates whether a tool modifies data or is read-only. A tool annotated with readOnlyHint on the MCP server will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: Optional[Literal["connector_dropbox", "connector_gmail", "connector_googlecalendar", 5 more]]Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: Optional[McpRequireApproval]Specify which of the MCP server's tools require approval.
class McpRequireApprovalMcpToolApprovalFilter: …Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: Optional[McpRequireApprovalMcpToolApprovalFilterAlways]A filter object to specify which tools are allowed.
Indicates whether a tool modifies data or is read-only. A tool annotated with readOnlyHint on the MCP server will match this filter.
never: Optional[McpRequireApprovalMcpToolApprovalFilterNever]A filter object to specify which tools are allowed.
Indicates whether a tool modifies data or is read-only. A tool annotated with readOnlyHint on the MCP server will match this filter.
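Combining the MCP fields above, a full tool entry might be sketched like this (the server label, URL, header value, and tool name are hypothetical placeholders):

```python
# Illustrative MCP tool configuration. server_label, server_url, the
# Authorization header, and the tool name are all placeholders; either
# server_url or connector_id must be provided.
mcp_tool = {
    "type": "mcp",
    "server_label": "acme_docs",                     # hypothetical label
    "server_url": "https://mcp.example.com/sse",     # hypothetical server URL
    "headers": {"Authorization": "Bearer <token>"},  # optional, e.g. for auth
    "allowed_tools": {"tool_names": ["search_docs"], "read_only": True},
    "require_approval": "never",  # or "always", or a per-tool filter object
}
```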
class CodeInterpreter: …A tool that runs Python code to help generate a response to a prompt.
container: CodeInterpreterContainerThe code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
class CodeInterpreterContainerCodeInterpreterToolAuto: …Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
memory_limit: Optional[Literal["1g", "4g", "16g", "64g"]]The memory limit for the code interpreter container.
network_policy: Optional[CodeInterpreterContainerCodeInterpreterToolAutoNetworkPolicy]Network access policy for the container.
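A sketch of a code interpreter tool using an auto container (the file ID is a placeholder):

```python
# Illustrative code_interpreter tool with an auto container.
# The file ID is a placeholder for an uploaded file; memory_limit
# is optional and one of "1g", "4g", "16g", "64g".
code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],  # hypothetical uploaded file ID
        "memory_limit": "4g",
    },
}
```

Passing a container ID string instead of this object reuses an existing container.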
class ImageGeneration: …A tool that generates images using the GPT image models.
action: Optional[Literal["generate", "edit", "auto"]]Whether to generate a new image or edit an existing image. Default: auto.
background: Optional[Literal["transparent", "opaque", "auto"]]Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
input_fidelity: Optional[Literal["high", "low"]]Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: Optional[ImageGenerationInputImageMask]Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
model: Optional[Union[str, Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"], null]]The image generation model to use. Default: gpt-image-1.
moderation: Optional[Literal["auto", "low"]]Moderation level for the generated image. Default: auto.
Compression level for the output image. Default: 100.
output_format: Optional[Literal["png", "webp", "jpeg"]]The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
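Combining the parameters above, an image generation tool entry might be sketched as follows. The field names for compression and partial images are assumptions inferred from the descriptions above, since this page elides them:

```python
# Illustrative image_generation tool. All fields besides "type" are
# optional and shown with example values; "output_compression" and
# "partial_images" are assumed field names for the compression-level
# and partial-image parameters described above.
image_tool = {
    "type": "image_generation",
    "model": "gpt-image-1",
    "background": "transparent",  # transparent | opaque | auto
    "output_format": "png",       # png | webp | jpeg
    "output_compression": 100,    # assumed name; default 100
    "moderation": "auto",         # auto | low
    "partial_images": 2,          # assumed name; 0 (default) to 3, streaming only
}
```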
class FunctionShellTool: …A tool that allows the model to execute shell commands.
environment: Optional[Environment]
class ContainerAuto: …
network_policy: Optional[NetworkPolicy]Network access policy for the container.
class CustomTool: …A custom tool that processes input using a specified format. Learn more about custom tools.
class NamespaceTool: …Groups function/custom tools under a shared namespace.
tools: List[Tool]The function/custom tools available inside this namespace.
class CustomTool: …A custom tool that processes input using a specified format. Learn more about custom tools.
class ToolSearchTool: …Hosted or BYOT tool search configuration for deferred tools.
class WebSearchPreviewTool: …This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: Literal["web_search_preview", "web_search_preview_2025_03_11"]The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: Optional[Literal["low", "medium", "high"]]High-level guidance for the amount of context window space to use for the search. One of low, medium, or high. Defaults to medium.
user_location: Optional[UserLocation]The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
truncation: Optional[Literal["auto", "disabled"]]The truncation strategy to use for the model response.
- auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Get input token counts
from openai import OpenAI
client = OpenAI()
response = client.responses.input_tokens.count(
model="gpt-5",
input="Tell me a joke."
)
print(response.input_tokens)
{
"object": "response.input_tokens",
"input_tokens": 11
}