Responses

resource openai_response

optional
conversation?: String

The conversation that this response belongs to. Items from this conversation are prepended to input_items for this response request. Input items and output items from this response are automatically added to this conversation after this response completes.

input?: String

Text, image, or file inputs to the model, used to generate a response.

instructions?: String

A system (or developer) message inserted into the model’s context.

When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
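
A minimal sketch of this behavior: because instructions are not carried over, a follow-up request can swap the system message cleanly. Field names follow this reference; the model name and the resp_123 ID are placeholders.

```python
# First turn: establish a system message via `instructions`.
first_turn = {
    "model": "gpt-4o",
    "instructions": "You are a terse assistant.",
    "input": "Summarize photosynthesis.",
}

# Second turn: `instructions` from the first response are NOT carried over,
# so this request replaces the system message rather than appending to it.
second_turn = {
    "model": "gpt-4o",
    "previous_response_id": "resp_123",  # placeholder ID from the first response
    "instructions": "You are a verbose assistant.",
    "input": "Go deeper.",
}
```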

max_output_tokens?: Int64

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls?: Int64

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

model?: String

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

previous_response_id?: String

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.

prompt_cache_key?: String

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.

prompt_cache_retention?: String

The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
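
The two caching fields can be sketched as a request payload; the key and input values are illustrative.

```python
# A stable prompt_cache_key groups similar requests to improve cache hit
# rates; "24h" opts into extended prompt caching per the field above.
request = {
    "model": "gpt-4o",
    "input": "What is the capital of France?",
    "prompt_cache_key": "faq-geography-v1",  # stable key shared by similar requests
    "prompt_cache_retention": "24h",         # keep cached prefixes up to 24 hours
}
```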

safety_identifier?: String

A stable identifier used to help detect users of your application that may be violating OpenAI’s usage policies. The identifier should be a string that uniquely identifies each user, with a maximum length of 64 characters. We recommend hashing the user’s username or email address to avoid sending us any identifying information. Learn more.
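
The hashing recommendation above can be sketched with the standard library; a SHA-256 hex digest is exactly 64 characters, which fits the limit.

```python
import hashlib

def safety_identifier(email: str) -> str:
    """Hash an email so no raw PII is sent; the hex digest is 64 chars."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

sid = safety_identifier("user@example.com")
request = {"model": "gpt-4o", "input": "Hello", "safety_identifier": sid}
```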

stream?: Bool

If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below for more information.

tool_choice?: String

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.

top_logprobs?: Int64

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

Deprecated user?: String

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.

include?: List[String]

Specify additional output data to include in the model response. Currently supported values are:

  • web_search_call.action.sources: Include the sources of the web search tool call.
  • code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.
  • computer_call_output.output.image_url: Include image urls from the computer call output.
  • file_search_call.results: Include the search results of the file search tool call.
  • message.input_image.image_url: Include image urls from the input message.
  • message.output_text.logprobs: Include logprobs with assistant messages.
  • reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).
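
As a sketch, a stateless request opting into encrypted reasoning and web-search sources might look like the following; the input text is illustrative.

```python
# With store=False, reasoning.encrypted_content lets reasoning items be
# replayed across turns even though nothing is persisted server-side.
request = {
    "model": "gpt-4o",
    "input": "Find recent papers on CRISPR.",
    "store": False,
    "include": [
        "web_search_call.action.sources",
        "reasoning.encrypted_content",
    ],
}
```
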
metadata?: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
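
These limits can be checked client-side before sending a request; a minimal validator sketch, assuming string values.

```python
def validate_metadata(metadata: dict) -> None:
    """Enforce the documented limits: at most 16 pairs,
    keys <= 64 chars, values <= 512 chars."""
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"key too long: {key!r}")
        if len(str(value)) > 512:
            raise ValueError(f"value too long for key {key!r}")

validate_metadata({"ticket_id": "T-1024", "source": "webhook"})  # passes
```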

context_management?: List[Attributes]

Context management configuration for this request.

type: String

The context management entry type. Currently only ‘compaction’ is supported.

compact_threshold?: Int64

Token threshold at which compaction should be triggered for this entry.

prompt?: Attributes

Reference to a prompt template and its variables. Learn more.

id: String

The unique identifier of the prompt template to use.

variables?: Map[String]

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

version?: String

Optional version of the prompt template.
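
A sketch of a request referencing a prompt template; the template ID, version, and variable names are hypothetical.

```python
# Variables are substituted into the stored template server-side; values
# can be strings or other Response input types like images or files.
request = {
    "model": "gpt-4o",
    "prompt": {
        "id": "pmpt_abc123",   # hypothetical template ID
        "version": "2",        # optional; omit to use the latest version
        "variables": {"customer_name": "Ada", "product": "widgets"},
    },
}
```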

stream_options?: Attributes

Options for streaming responses. Only set this when you set stream: true.

include_obfuscation?: Bool

When true, stream obfuscation will be enabled. Stream obfuscation adds random characters to an obfuscation field on streaming delta events to normalize payload sizes as a mitigation to certain side-channel attacks. These obfuscation fields are included by default, but add a small amount of overhead to the data stream. You can set include_obfuscation to false to optimize for bandwidth if you trust the network links between your application and the OpenAI API.

background?: Bool

Whether to run the model response in the background. Learn more.

parallel_tool_calls?: Bool

Whether to allow the model to run tool calls in parallel.

service_tier?: String

Specifies the processing type used for serving the request.

  • If set to ‘auto’, then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use ‘default’.
  • If set to ‘default’, then the request will be processed with the standard pricing and performance for the selected model.
  • If set to ‘flex’ or ‘priority’, then the request will be processed with the corresponding service tier.
  • When not set, the default behavior is ‘auto’.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.

store?: Bool

Whether to store the generated model response for later retrieval via API.

temperature?: Float64

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

top_p?: Float64

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

truncation?: String

The truncation strategy to use for the model response.

  • auto: If the input to this Response exceeds the model’s context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
  • disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
reasoning?: Attributes

gpt-5 and o-series models only

Configuration options for reasoning models.

effort?: String

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
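
The default-effort rules in the bullets above can be sketched as a small lookup; model names are as documented, and any model not listed falls back to medium.

```python
# Per-model reasoning-effort defaults, per the bullets above.
DEFAULT_EFFORT = {
    "gpt-5.1": "none",    # gpt-5.1 defaults to none (no reasoning)
    "gpt-5-pro": "high",  # gpt-5-pro defaults to (and only supports) high
}

def default_effort(model: str) -> str:
    # All models before gpt-5.1 default to medium reasoning effort.
    return DEFAULT_EFFORT.get(model, "medium")
```
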
Deprecated generate_summary?: String

Deprecated: use summary instead.

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model’s reasoning process. One of auto, concise, or detailed.

summary?: String

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model’s reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

text?: Attributes

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more.

format?: Attributes

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

type: String

The type of response format being defined. Always text.

name?: String

The name of the response format. Must contain only a-z, A-Z, 0-9, underscores, and dashes, with a maximum length of 64.

schema?: Map[JSON]

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

description?: String

A description of what the response format is for, used by the model to determine how to respond in the format.

strict?: Bool

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.
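
Putting the format fields together, a sketch of a Structured Outputs request; the calendar_event schema itself is illustrative.

```python
# text.format with type "json_schema" enables Structured Outputs;
# strict=True makes the model follow the schema exactly.
request = {
    "model": "gpt-4o",
    "input": "Extract the event details from: Standup, March 3rd.",
    "text": {
        "format": {
            "type": "json_schema",
            "name": "calendar_event",  # a-z, 0-9, underscores, dashes; <= 64 chars
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "date": {"type": "string"},
                },
                "required": ["title", "date"],
                "additionalProperties": False,
            },
        }
    },
}
```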

verbosity?: String

Constrains the verbosity of the model’s response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.

tools?: List[Attributes]

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

We support the following categories of tools:

  • Built-in tools: Tools that are provided by OpenAI that extend the model’s capabilities, like web search or file search. Learn more about built-in tools.
  • MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
name?: String

The name of the function to call.

parameters?: Map[JSON]

A JSON schema object describing the parameters of the function.

strict?: Bool

Whether to enforce strict parameter validation. Default true.

type?: String

The type of the function tool. Always function.

defer_loading?: Bool

Whether this function is deferred and loaded via tool search.

description?: String

A description of the function. Used by the model to determine whether or not to call the function.
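
Assembled from the fields above, a sketch of a function tool definition; get_weather and its parameter schema are hypothetical.

```python
# A function (custom) tool: the model emits strongly typed arguments
# matching `parameters`; strict=True enforces the schema exactly.
get_weather_tool = {
    "type": "function",
    "name": "get_weather",  # hypothetical function you implement
    "description": "Get the current weather for a city.",
    "strict": True,
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
}

request = {
    "model": "gpt-4o",
    "input": "What's the weather in Paris?",
    "tools": [get_weather_tool],
}
```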

vector_store_ids?: List[String]

The IDs of the vector stores to search.

filters?: Attributes

A filter to apply.

key?: String

The key to compare against the value.

type?: String

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
value?: String

The value to compare against the attribute key; supports string, number, or boolean types.

filters?: List[Attributes]

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

key: String

The key to compare against the value.

type?: String

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
value: String

The value to compare against the attribute key; supports string, number, or boolean types.
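
A sketch combining the two filter shapes; the "and" compound type is an assumption used to illustrate nesting, and the keys and values are made up.

```python
# A CompoundFilter whose items are ComparisonFilters, using the
# documented comparison operators (eq, gte, ...).
file_search_filters = {
    "type": "and",  # assumed compound type for combining filters
    "filters": [
        {"type": "eq", "key": "department", "value": "legal"},
        {"type": "gte", "key": "year", "value": 2023},  # numbers are supported
    ],
}
```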

allowed_domains?: List[String]

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

max_num_results?: Int64

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options?: Attributes

Ranking options for search.

ranker?: String

The ranker to use for the file search.

score_threshold?: Float64

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.

display_height?: Int64

The height of the computer display.

display_width?: Int64

The width of the computer display.

environment?: String

The type of computer environment to control.

search_context_size?: String

High-level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

user_location?: Attributes

The approximate location of the user.

city?: String

Free text input for the city of the user, e.g. San Francisco.

country?: String

The two-letter ISO country code of the user, e.g. US.

region?: String

Free text input for the region of the user, e.g. California.

timezone?: String

The IANA timezone of the user, e.g. America/Los_Angeles.

type?: String

The type of location approximation. Always approximate.
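
A sketch of a web search tool entry with a location hint; the web_search type string follows common Responses API usage, and the values are illustrative.

```python
# user_location biases search results toward the user's locale;
# type is always "approximate" per the field above.
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",  # low | medium | high (default: medium)
    "user_location": {
        "type": "approximate",
        "city": "San Francisco",
        "region": "California",
        "country": "US",                      # two-letter ISO code
        "timezone": "America/Los_Angeles",    # IANA timezone
    },
}
```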

server_label?: String

A label for this MCP server, used to identify it in tool calls.

allowed_tools?: List[String]

List of allowed tool names or a filter object.

authorization?: String

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: String

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
headers?: Map[String]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: Attributes

Specify which of the MCP server’s tools require approval.

always?: Attributes

A filter object to specify which tools are allowed.

read_only?: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: List[String]

List of allowed tool names.

never?: Attributes

A filter object to specify which tools are allowed.

read_only?: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: List[String]

List of allowed tool names.

server_description?: String

Optional description of the MCP server, used to provide more context.

server_url?: String

The URL for the MCP server. One of server_url or connector_id must be provided.
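
A sketch of a remote MCP tool entry; the label and URL are placeholders, and exactly one of server_url or connector_id should be set.

```python
# require_approval.never with read_only=True auto-approves tools the
# server annotates as read-only (readOnlyHint), per the fields above.
mcp_tool = {
    "type": "mcp",
    "server_label": "deepwiki",                # placeholder label
    "server_url": "https://example.com/mcp",   # placeholder URL
    "require_approval": {
        "never": {"read_only": True},
    },
}

# One of server_url or connector_id, never both.
assert ("server_url" in mcp_tool) != ("connector_id" in mcp_tool)
```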

container?: String

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

action?: String

Whether to generate a new image or edit an existing image. Default: auto.

background?: String

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

input_fidelity?: String

Controls how much effort the model will exert to match the style and features, especially facial features, of input images. Supported for gpt-image-1, gpt-image-1.5, and later models; not supported for gpt-image-1-mini. Supports high and low. Defaults to low.

input_image_mask?: Attributes

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id?: String

File ID for the mask image.

image_url?: String

Base64-encoded mask image.

model?: String

The image generation model to use. Default: gpt-image-1.

moderation?: String

Moderation level for the generated image. Default: auto.

output_compression?: Int64

Compression level for the output image. Default: 100.

output_format?: String

The output format of the generated image. One of png, webp, or jpeg. Default: png.

partial_images?: Int64

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

quality?: String

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

size?: String

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.
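
A sketch of an image generation tool entry using the documented defaults and enums; the image_generation type string follows common Responses API usage.

```python
# Every value below is drawn from the enums documented above.
image_tool = {
    "type": "image_generation",
    "model": "gpt-image-1",        # the documented default
    "quality": "high",             # low | medium | high | auto
    "size": "1024x1024",           # 1024x1024 | 1024x1536 | 1536x1024 | auto
    "output_format": "png",        # png | webp | jpeg
    "background": "transparent",   # transparent | opaque | auto
    "partial_images": 2,           # streaming previews, 0 (default) to 3
}
```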

format?: Attributes

The input format for the custom tool. Default is unconstrained text.

type?: String

Unconstrained text format. Always text.

definition?: String

The grammar definition.

syntax?: String

The syntax of the grammar definition. One of lark or regex.

tools?: List[Attributes]

The function/custom tools available inside this namespace.

name: String
type?: String
defer_loading?: Bool

Whether this function should be deferred and discovered via tool search.

description?: String
parameters?: JSON
strict?: Bool
format?: Attributes

The input format for the custom tool. Default is unconstrained text.

type?: String

Unconstrained text format. Always text.

definition?: String

The grammar definition.

syntax?: String

The syntax of the grammar definition. One of lark or regex.
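
A sketch of a custom tool whose input is constrained by a Lark grammar; the tool name and grammar are made up, and the "grammar" format type string is an assumption (the documented alternative is "text" for unconstrained input).

```python
# The grammar restricts what the model can pass as the tool's input.
custom_tool = {
    "type": "custom",
    "name": "sql_runner",  # hypothetical tool you implement
    "description": "Run a read-only SQL query.",
    "format": {
        "type": "grammar",                  # assumed; "text" = unconstrained
        "syntax": "lark",                   # one of lark or regex
        "definition": 'start: "SELECT" /.+/',
    },
}
```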

execution?: String

Whether tool search is executed by the server or by the client.

search_content_types?: List[String]
computed
id: String

Unique identifier for this Response.

completed_at: Float64

Unix timestamp (in seconds) of when this Response was completed. Only present when the status is completed.

created_at: Float64

Unix timestamp (in seconds) of when this Response was created.

object: String

The object type of this resource - always set to response.

output_text: String

SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs.

status: String

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.

error: Attributes

An error object returned when the model fails to generate a Response.

code: String

The error code for the response.

message: String

A human-readable description of the error.

incomplete_details: Attributes

Details about why the response is incomplete.

reason: String

The reason why the response is incomplete.

output: List[Attributes]

An array of content items generated by the model.

  • The length and order of items in the output array depend on the model’s response.
  • Rather than accessing the first item in the output array and assuming it’s an assistant message with the content generated by the model, consider using the output_text property where supported in SDKs.
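
The aggregation that output_text performs can be sketched against a raw response dict shaped like the output schema above; the demo payload is illustrative.

```python
# Collect text the way the SDK output_text helper does: walk every
# message item and join its output_text content parts.
def collect_output_text(response: dict) -> str:
    parts = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue  # skip tool calls, reasoning items, etc.
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content["text"])
    return "".join(parts)

demo = {
    "output": [
        {"type": "reasoning", "summary": []},
        {"type": "message", "role": "assistant",
         "content": [{"type": "output_text", "text": "Hello!"}]},
    ]
}
```
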
id: String

The unique ID of the output message.

content: List[Attributes]

The content of the output message.

annotations: List[Attributes]

The annotations of the text output.

file_id: String

The ID of the file.

filename: String

The filename of the file cited.

index: Int64

The index of the file in the list of files.

type: String

The type of the file citation. Always file_citation.

end_index: Int64

The index of the last character of the URL citation in the message.

start_index: Int64

The index of the first character of the URL citation in the message.

title: String

The title of the web resource.

url: String

The URL of the web resource.

container_id: String

The ID of the container file.

text: String

The text output from the model.

type: String

The type of the output text. Always output_text.

logprobs: List[Attributes]
token: String
bytes: List[Int64]
logprob: Float64
top_logprobs: List[Attributes]
token: String
bytes: List[Int64]
logprob: Float64
refusal: String

The refusal explanation from the model.

role: String

The role of the output message. Always assistant.

status: String

The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.

type: String

The type of the output message. Always message.

phase: String

Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages — dropping it can degrade performance. Not used for user messages.

queries: List[String]

The queries used to search for files.

results: List[Attributes]

The results of the file search tool call.

attributes: Dynamic

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

file_id: String

The unique ID of the file.

filename: String

The name of the file.

score: Float64

The relevance score of the file - a value between 0 and 1.

text: String

The text that was retrieved from the file.

arguments: String

A JSON string of the arguments to pass to the function.

call_id: String

The unique ID of the function tool call generated by the model.

name: String

The name of the function to run.

namespace: String

The namespace of the function to run.

output: String

The output from the function call generated by your code. Can be a string or a list of output content.

created_by: String

The identifier of the actor that created the item.

action: Attributes

An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find_in_page).

query: String

[DEPRECATED] The search query.

type: String

The action type.

queries: List[String]

The search queries.

sources: List[Attributes]

The sources used in the search.

type: String

The type of source. Always url.

url: String

The URL of the source.

url: String

The URL opened by the model.

pattern: String

The pattern or text to search for within the page.

button: String

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

x: Int64

The x-coordinate where the click occurred.

y: Int64

The y-coordinate where the click occurred.

keys: List[String]

The keys being held while clicking.

path: List[Attributes]

An array of coordinates representing the path of the drag action. Coordinates appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: Int64

The x-coordinate.

y: Int64

The y-coordinate.

scroll_x: Int64

The horizontal scroll distance.

scroll_y: Int64

The vertical scroll distance.

text: String

The text to type.

command: List[String]

The command to run.

env: Map[String]

Environment variables to set for the command.

timeout_ms: Int64

Optional timeout in milliseconds for the command.

user: String

Optional user to run the command as.

working_directory: String

Optional working directory to run the command in.

commands: List[String]
max_output_length: Int64

Optional maximum number of characters to return from each command.

pending_safety_checks: List[Attributes]

The pending safety checks for the computer call.

id: String

The ID of the pending safety check.

code: String

The type of the pending safety check.

message: String

Details about the pending safety check.

actions: List[Attributes]

Flattened batched actions for computer_use. Each action includes a type discriminator and action-specific fields.

button: String

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

type: String

Specifies the event type. For a click action, this property is always click.

x: Int64

The x-coordinate where the click occurred.

y: Int64

The y-coordinate where the click occurred.

keys: List[String]

The keys being held while clicking.

path: List[Attributes]

An array of coordinates representing the path of the drag action. Coordinates appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: Int64

The x-coordinate.

y: Int64

The y-coordinate.

scroll_x: Int64

The horizontal scroll distance.

scroll_y: Int64

The vertical scroll distance.

text: String

The text to type.

acknowledged_safety_checks: List[Attributes]

The safety checks reported by the API that have been acknowledged by the developer.

id: String

The ID of the pending safety check.

code: String

The type of the pending safety check.

message: String

Details about the pending safety check.

summary: List[Attributes]

Reasoning summary content.

text: String

A summary of the reasoning output from the model so far.

type: String

The type of the object. Always summary_text.

encrypted_content: String

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

execution: String

Whether tool search was executed by the server or by the client.

tools: List[Attributes]

The loaded tool definitions returned by tool search.

name: String

The name of the function to call.

parameters: Map[JSON]

A JSON schema object describing the parameters of the function.

strict: Bool

Whether to enforce strict parameter validation. Default true.

type: String

The type of the function tool. Always function.

defer_loading: Bool

Whether this function is deferred and loaded via tool search.

description: String

A description of the function. Used by the model to determine whether or not to call the function.

vector_store_ids: List[String]

The IDs of the vector stores to search.

filters: Attributes

A filter to apply.

key: String

The key to compare against the value.

type: String

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
value: String

The value to compare against the attribute key; supports string, number, or boolean types.

filters: List[Attributes]

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

key: String

The key to compare against the value.

type: String

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
value: String

The value to compare against the attribute key; supports string, number, or boolean types.

allowed_domains: List[String]

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

max_num_results: Int64

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options: Attributes

Ranking options for search.

ranker: String

The ranker to use for the file search.

score_threshold: Float64

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.

display_height: Int64

The height of the computer display.

display_width: Int64

The width of the computer display.

environment: String

The type of computer environment to control.

search_context_size: String

High-level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

user_location: Attributes

The approximate location of the user.

city: String

Free text input for the city of the user, e.g. San Francisco.

country: String

The two-letter ISO country code of the user, e.g. US.

region: String

Free text input for the region of the user, e.g. California.

timezone: String

The IANA timezone of the user, e.g. America/Los_Angeles.

type: String

The type of location approximation. Always approximate.

server_label: String

A label for this MCP server, used to identify it in tool calls.

allowed_tools: List[String]

List of allowed tool names or a filter object.

authorization: String

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id: String

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
headers: Map[String]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval: Attributes

Specify which of the MCP server’s tools require approval.

always: Attributes

A filter object to specify which tools are allowed.

read_only: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: List[String]

List of allowed tool names.

never: Attributes

A filter object to specify which tools are allowed.

read_only: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: List[String]

List of allowed tool names.

server_description: String

Optional description of the MCP server, used to provide more context.

server_url: String

The URL for the MCP server. One of server_url or connector_id must be provided.

container: String

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

action: String

Whether to generate a new image or edit an existing image. Default: auto.

background: String

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

input_fidelity: String

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is supported for gpt-image-1, gpt-image-1.5, and later models; it is not supported for gpt-image-1-mini. Supported values are high and low. Defaults to low.

input_image_mask: Attributes

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id: String

File ID for the mask image.

image_url: String

Base64-encoded mask image.

model: String

The image generation model to use. Default: gpt-image-1.

moderation: String

Moderation level for the generated image. Default: auto.

output_compression: Int64

Compression level for the output image. Default: 100.

output_format: String

The output format of the generated image. One of png, webp, or jpeg. Default: png.

partial_images: Int64

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

quality: String

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

size: String

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.
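
The image generation settings above can be combined in one tool entry. A hedged sketch (the type value "image_generation" is an assumption; the other attribute names come from this page):

```hcl
tools = [{
  type               = "image_generation"  # assumed tool type discriminator
  model              = "gpt-image-1"
  background         = "transparent"
  quality            = "high"
  size               = "1024x1024"
  output_format      = "png"
  output_compression = 100
  partial_images     = 2                    # streaming mode only, 0-3
}]
```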

format: Attributes

The input format for the custom tool. Default is unconstrained text.

type: String

Unconstrained text format. Always text.

definition: String

The grammar definition.

syntax: String

The syntax of the grammar definition. One of lark or regex.
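
For a custom tool, the format attributes above allow either unconstrained text or a grammar-constrained input. A sketch constraining input with a Lark grammar (the "grammar" type value and the tiny grammar itself are illustrative assumptions):

```hcl
tools = [{
  type        = "custom"
  name        = "set_mode"
  description = "Switches the service between run modes."
  format = {
    type       = "grammar"                    # assumed; "text" means unconstrained
    syntax     = "lark"
    definition = "start: \"fast\" | \"safe\"" # the model may only emit one of these tokens
  }
}]
```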

tools: List[Attributes]

The function/custom tools available inside this namespace.

name: String
type: String
defer_loading: Bool

Whether this function should be deferred and discovered via tool search.

description: String
parameters: JSON
strict: Bool
format: Attributes

The input format for the custom tool. Default is unconstrained text.

type: String

Unconstrained text format. Always text.

definition: String

The grammar definition.

syntax: String

The syntax of the grammar definition. One of lark or regex.

execution: String

Whether tool search is executed by the server or by the client.

search_content_types: List[String]
input_schema: JSON

The JSON schema describing the tool’s input.

annotations: JSON

Additional annotations about the tool.

result: String

The generated image encoded in base64.

code: String

The code to run, or null if not available.

container_id: String

The ID of the container used to run the code.

outputs: List[Attributes]

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

logs: String

The logs output from the code interpreter.

type: String

The type of the output. Always logs.

url: String

The URL of the image output from the code interpreter.

environment: Attributes

Represents the use of a local environment to perform shell actions.

type: String

The environment type. Always local.

container_id: String
max_output_length: Int64

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

operation: Attributes

One of the create_file, delete_file, or update_file operations applied via apply_patch.

diff: String

Diff to apply.

path: String

Path of the file the operation applies to.

type: String

The operation type. One of create_file, delete_file, or update_file. For create_file, a new file is created with the provided diff.

server_label: String

The label of the MCP server running the tool.

approval_request_id: String

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error: String

The error from the tool call, if any.

approve: Bool

Whether the request was approved.

reason: String

Optional reason for the decision.

input: String

The input for the custom tool call generated by the model.

usage: Attributes

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.

input_tokens: Int64

The number of input tokens.

input_tokens_details: Attributes

A detailed breakdown of the input tokens.

cached_tokens: Int64

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: Int64

The number of output tokens.

output_tokens_details: Attributes

A detailed breakdown of the output tokens.

reasoning_tokens: Int64

The number of reasoning tokens.

total_tokens: Int64

The total number of tokens used.

openai_response

resource "openai_response" "example_response" {
  background = true
  context_management = [{
    type = "type"
    compact_threshold = 1000
  }]
  conversation = "string"
  include = ["file_search_call.results"]
  input = "string"
  instructions = "instructions"
  max_output_tokens = 16
  max_tool_calls = 0
  metadata = {
    foo = "string"
  }
  model = "gpt-5.1"
  parallel_tool_calls = true
  previous_response_id = "previous_response_id"
  prompt = {
    id = "id"
    variables = {
      foo = "string"
    }
    version = "version"
  }
  prompt_cache_key = "prompt-cache-key-1234"
  prompt_cache_retention = "in_memory"
  reasoning = {
    effort = "none"
    generate_summary = "auto"
    summary = "auto"
  }
  safety_identifier = "safety-identifier-1234"
  service_tier = "auto"
  store = true
  stream = false
  stream_options = {
    include_obfuscation = true
  }
  temperature = 1
  text = {
    format = {
      type = "text"
    }
    verbosity = "low"
  }
  tool_choice = "none"
  tools = [{
    name = "name"
    parameters = {
      foo = "bar"
    }
    strict = true
    type = "function"
    defer_loading = true
    description = "description"
  }]
  top_logprobs = 0
  top_p = 1
  truncation = "auto"
  user = "user-1234"
}

data openai_response

required Expand Collapse
response_id: String
stream: Bool

If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below for more information.

optional Expand Collapse
include_obfuscation?: Bool

When true, stream obfuscation will be enabled. Stream obfuscation adds random characters to an obfuscation field on streaming delta events to normalize payload sizes as a mitigation to certain side-channel attacks. These obfuscation fields are included by default, but add a small amount of overhead to the data stream. You can set include_obfuscation to false to optimize for bandwidth if you trust the network links between your application and the OpenAI API.

starting_after?: Int64

The sequence number of the event after which to start streaming.

include?: List[String]

Additional fields to include in the response. See the include parameter for Response creation above for more information.
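
A minimal data block combining the required and optional arguments above (the resource reference is illustrative):

```hcl
data "openai_response" "example" {
  response_id = openai_response.example_response.id
  stream      = false
  include     = ["file_search_call.results"]
}
```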

computed Expand Collapse
id: String
background: Bool

Whether to run the model response in the background. Learn more.

completed_at: Float64

Unix timestamp (in seconds) of when this Response was completed. Only present when the status is completed.

created_at: Float64

Unix timestamp (in seconds) of when this Response was created.

instructions: String

A system (or developer) message inserted into the model’s context.

When using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.

max_output_tokens: Int64

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls: Int64

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

model: String

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

object: String

The object type of this resource - always set to response.

output_text: String

SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs.

parallel_tool_calls: Bool

Whether to allow the model to run tool calls in parallel.

previous_response_id: String

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.

prompt_cache_key: String

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.

prompt_cache_retention: String

The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.

safety_identifier: String

A stable identifier used to help detect users of your application that may be violating OpenAI’s usage policies. The IDs should be a string that uniquely identifies each user, with a maximum length of 64 characters. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.

service_tier: String

Specifies the processing type used for serving the request.

  • If set to ‘auto’, then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use ‘default’.
  • If set to ‘default’, then the request will be processed with the standard pricing and performance for the selected model.
  • If set to ‘flex’ or ‘priority’, then the request will be processed with the corresponding service tier.
  • When not set, the default behavior is ‘auto’.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.

status: String

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.

temperature: Float64

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

tool_choice: String

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.

top_logprobs: Int64

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

top_p: Float64

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

truncation: String

The truncation strategy to use for the model response.

  • auto: If the input to this Response exceeds the model’s context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
  • disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
user: String (Deprecated)

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.

metadata: Map[String]

Set of up to 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

conversation: Attributes

The conversation that this response belonged to. Input items and output items from this response were automatically added to this conversation.

id: String

The unique ID of the conversation that this response was associated with.

error: Attributes

An error object returned when the model fails to generate a Response.

code: String

The error code for the response.

message: String

A human-readable description of the error.

incomplete_details: Attributes

Details about why the response is incomplete.

reason: String

The reason why the response is incomplete.

output: List[Attributes]

An array of content items generated by the model.

  • The length and order of items in the output array is dependent on the model’s response.
  • Rather than accessing the first item in the output array and assuming it’s an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
id: String

The unique ID of the output message.

content: List[Attributes]

The content of the output message.

annotations: List[Attributes]

The annotations of the text output.

file_id: String

The ID of the file.

filename: String

The filename of the file cited.

index: Int64

The index of the file in the list of files.

type: String

The type of the file citation. Always file_citation.

end_index: Int64

The index of the last character of the URL citation in the message.

start_index: Int64

The index of the first character of the URL citation in the message.

title: String

The title of the web resource.

url: String

The URL of the web resource.

container_id: String

The ID of the container file.

text: String

The text output from the model.

type: String

The type of the output text. Always output_text.

logprobs: List[Attributes]
token: String
bytes: List[Int64]
logprob: Float64
top_logprobs: List[Attributes]
token: String
bytes: List[Int64]
logprob: Float64
refusal: String

The refusal explanation from the model.

role: String

The role of the output message. Always assistant.

status: String

The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API.

type: String

The type of the output message. Always message.

phase: String

Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages — dropping it can degrade performance. Not used for user messages.

queries: List[String]

The queries used to search for files.

results: List[Attributes]

The results of the file search tool call.

attributes: Dynamic

Set of up to 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

file_id: String

The unique ID of the file.

filename: String

The name of the file.

score: Float64

The relevance score of the file - a value between 0 and 1.

text: String

The text that was retrieved from the file.

arguments: String

A JSON string of the arguments to pass to the function.

call_id: String

The unique ID of the function tool call generated by the model.

name: String

The name of the function to run.

namespace: String

The namespace of the function to run.

output: String

The output from the function call generated by your code. Can be a string or a list of output content.

created_by: String

The identifier of the actor that created the item.

action: Attributes

An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find_in_page).

query: String

[DEPRECATED] The search query.

type: String

The action type.

queries: List[String]

The search queries.

sources: List[Attributes]

The sources used in the search.

type: String

The type of source. Always url.

url: String

The URL of the source.

url: String

The URL opened by the model.

pattern: String

The pattern or text to search for within the page.

button: String

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

x: Int64

The x-coordinate where the click occurred.

y: Int64

The y-coordinate where the click occurred.

keys: List[String]

The keys being held while clicking.

path: List[Attributes]

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: Int64

The x-coordinate.

y: Int64

The y-coordinate.

scroll_x: Int64

The horizontal scroll distance.

scroll_y: Int64

The vertical scroll distance.

text: String

The text to type.

command: List[String]

The command to run.

env: Map[String]

Environment variables to set for the command.

timeout_ms: Int64

Optional timeout in milliseconds for the command.

user: String

Optional user to run the command as.

working_directory: String

Optional working directory to run the command in.

commands: List[String]
max_output_length: Int64

Optional maximum number of characters to return from each command.

pending_safety_checks: List[Attributes]

The pending safety checks for the computer call.

id: String

The ID of the pending safety check.

code: String

The type of the pending safety check.

message: String

Details about the pending safety check.

actions: List[Attributes]

Flattened batched actions for computer_use. Each action includes a type discriminator and action-specific fields.

button: String

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

type: String

Specifies the event type. For a click action, this property is always click.

x: Int64

The x-coordinate where the click occurred.

y: Int64

The y-coordinate where the click occurred.

keys: List[String]

The keys being held while clicking.

path: List[Attributes]

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
x: Int64

The x-coordinate.

y: Int64

The y-coordinate.

scroll_x: Int64

The horizontal scroll distance.

scroll_y: Int64

The vertical scroll distance.

text: String

The text to type.

acknowledged_safety_checks: List[Attributes]

The safety checks reported by the API that have been acknowledged by the developer.

id: String

The ID of the pending safety check.

code: String

The type of the pending safety check.

message: String

Details about the pending safety check.

summary: List[Attributes]

Reasoning summary content.

text: String

A summary of the reasoning output from the model so far.

type: String

The type of the object. Always summary_text.

encrypted_content: String

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

execution: String

Whether tool search was executed by the server or by the client.

tools: List[Attributes]

The loaded tool definitions returned by tool search.

name: String

The name of the function to call.

parameters: Map[JSON]

A JSON schema object describing the parameters of the function.

strict: Bool

Whether to enforce strict parameter validation. Default true.

type: String

The type of the function tool. Always function.

defer_loading: Bool

Whether this function is deferred and loaded via tool search.

description: String

A description of the function. Used by the model to determine whether or not to call the function.

vector_store_ids: List[String]

The IDs of the vector stores to search.

filters: Attributes

A filter to apply.

key: String

The key to compare against the value.

type: String

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
value: String

The value to compare against the attribute key; supports string, number, or boolean types.

filters: List[Attributes]

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

key: String

The key to compare against the value.

type: String

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
value: String

The value to compare against the attribute key; supports string, number, or boolean types.
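
Comparison filters can be nested inside a compound filter via the filters array described above. A sketch (the "and" operator value is an assumption; the attribute keys are illustrative):

```hcl
filters = {
  type = "and"            # assumed compound operator
  filters = [
    { type = "eq",  key = "author", value = "Jane Doe" },
    { type = "gte", key = "year",   value = "2023" }
  ]
}
```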

allowed_domains: List[String]

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

max_num_results: Int64

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options: Attributes

Ranking options for search.

ranker: String

The ranker to use for the file search.

score_threshold: Float64

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.

display_height: Int64

The height of the computer display.

display_width: Int64

The width of the computer display.

environment: String

The type of computer environment to control.

search_context_size: String

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

user_location: Attributes

The approximate location of the user.

city: String

Free text input for the city of the user, e.g. San Francisco.

country: String

The two-letter ISO country code of the user, e.g. US.

region: String

Free text input for the region of the user, e.g. California.

timezone: String

The IANA timezone of the user, e.g. America/Los_Angeles.

type: String

The type of location approximation. Always approximate.
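
The web search options above fit together as in this sketch (the "web_search" type value is an assumption; the location values are illustrative):

```hcl
tools = [{
  type                = "web_search"          # assumed tool type discriminator
  search_context_size = "medium"
  user_location = {
    type     = "approximate"
    city     = "San Francisco"
    region   = "California"
    country  = "US"
    timezone = "America/Los_Angeles"
  }
}]
```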

server_label: String

A label for this MCP server, used to identify it in tool calls.

allowed_tools: List[String]

List of allowed tool names or a filter object.

authorization: String

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id: String

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
headers: Map[String]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval: Attributes

Specify which of the MCP server’s tools require approval.

always: Attributes

A filter object specifying which tools always require approval.

read_only: Bool

Indicates whether the tool is read-only (does not modify data). Tools the MCP server annotates with readOnlyHint match this filter.

tool_names: List[String]

List of tool names that always require approval.

never: Attributes

A filter object specifying which tools never require approval.

read_only: Bool

Indicates whether the tool is read-only (does not modify data). Tools the MCP server annotates with readOnlyHint match this filter.

tool_names: List[String]

List of tool names that never require approval.

server_description: String

Optional description of the MCP server, used to provide more context.

server_url: String

The URL for the MCP server. One of server_url or connector_id must be provided.

container: String

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

action: String

Whether to generate a new image or edit an existing image. Default: auto.

background: String

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

input_fidelity: String

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is supported for gpt-image-1, gpt-image-1.5, and later models; it is not supported for gpt-image-1-mini. Supported values are high and low. Defaults to low.

input_image_mask: Attributes

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id: String

File ID for the mask image.

image_url: String

Base64-encoded mask image.

model: String

The image generation model to use. Default: gpt-image-1.

moderation: String

Moderation level for the generated image. Default: auto.

output_compression: Int64

Compression level for the output image. Default: 100.

output_format: String

The output format of the generated image. One of png, webp, or jpeg. Default: png.

partial_images: Int64

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

quality: String

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

size: String

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

format: Attributes

The input format for the custom tool. Default is unconstrained text.

type: String

Unconstrained text format. Always text.

definition: String

The grammar definition.

syntax: String

The syntax of the grammar definition. One of lark or regex.

tools: List[Attributes]

The function/custom tools available inside this namespace.

name: String
type: String
defer_loading: Bool

Whether this function should be deferred and discovered via tool search.

description: String
parameters: JSON
strict: Bool
format: Attributes

The input format for the custom tool. Default is unconstrained text.

type: String

Unconstrained text format. Always text.

definition: String

The grammar definition.

syntax: String

The syntax of the grammar definition. One of lark or regex.

execution: String

Whether tool search is executed by the server or by the client.

search_content_types: List[String]
input_schema: JSON

The JSON schema describing the tool’s input.

annotations: JSON

Additional annotations about the tool.

result: String

The generated image encoded in base64.

code: String

The code to run, or null if not available.

container_id: String

The ID of the container used to run the code.

outputs: List[Attributes]

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

logs: String

The logs output from the code interpreter.

type: String

The type of the output. Always logs.

url: String

The URL of the image output from the code interpreter.

environment: Attributes

Represents the use of a local environment to perform shell actions.

type: String

The environment type. Always local.

container_id: String
max_output_length: Int64

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

operation: Attributes

One of the create_file, delete_file, or update_file operations applied via apply_patch.

diff: String

Diff to apply.

path: String

Path of the file the operation applies to.

type: String

The operation type. One of create_file, delete_file, or update_file. For create_file, a new file is created with the provided diff.

server_label: String

The label of the MCP server running the tool.

approval_request_id: String

Unique identifier for the MCP tool call approval request. Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.

error: String

The error from the tool call, if any.

approve: Bool

Whether the request was approved.

reason: String

Optional reason for the decision.

input: String

The input for the custom tool call generated by the model.

prompt: Attributes

Reference to a prompt template and its variables. Learn more.

id: String

The unique identifier of the prompt template to use.

variables: Map[String]

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

version: String

Optional version of the prompt template.

reasoning: Attributes

gpt-5 and o-series models only

Configuration options for reasoning models.

effort: String

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
generate_summary: String (Deprecated)

Deprecated: use summary instead.

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model’s reasoning process. One of auto, concise, or detailed.

summary: String

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model’s reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

text: Attributes

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:

format: Attributes

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

type: String

The type of response format being defined. Always text.

name: String

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Map[JSON]

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

description: String

A description of what the response format is for, used by the model to determine how to respond in the format.

strict: Bool

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.
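
A Structured Outputs configuration per the description above might look like this sketch (the schema contents and format name are illustrative):

```hcl
text = {
  format = {
    type   = "json_schema"
    name   = "math_answer"
    strict = true                 # enforce exact adherence to the schema below
    schema = {
      type = "object"
      properties = {
        answer = { type = "string" }
      }
      required             = ["answer"]
      additionalProperties = false
    }
  }
}
```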

verbosity: String

Constrains the verbosity of the model’s response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.
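Putting the text and format attributes together, a structured-output configuration might look like the following sketch. This is hypothetical: the exact HCL nesting depends on this provider's schema, and the schema contents are illustrative, but the attribute names mirror the reference above.

```hcl
# Hypothetical sketch: enforce a JSON schema on the model's text output.
resource "openai_response" "structured" {
  model = "gpt-4o"
  input = "Extract the name and age from: Jane is 31."

  text = {
    verbosity = "low"
    format = {
      type   = "json_schema"        # enables Structured Outputs
      name   = "person_extraction"  # illustrative name
      strict = true                 # model must follow the schema exactly
      schema = jsonencode({
        type = "object"
        properties = {
          name = { type = "string" }
          age  = { type = "integer" }
        }
        required             = ["name", "age"]
        additionalProperties = false
      })
    }
  }
}
```

With strict = true, only the subset of JSON Schema described in the Structured Outputs guide is accepted.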

tools: List[Attributes]

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

We support the following categories of tools:

  • Built-in tools: Tools that are provided by OpenAI that extend the model’s capabilities, like web search or file search. Learn more about built-in tools.
  • MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
name: String

The name of the function to call.

parameters: Map[JSON]

A JSON schema object describing the parameters of the function.

strict: Bool

Whether to enforce strict parameter validation. Default true.

type: String

The type of the function tool. Always function.

defer_loading: Bool

Whether this function is deferred and loaded via tool search.

description: String

A description of the function. Used by the model to determine whether or not to call the function.

vector_store_ids: List[String]

The IDs of the vector stores to search.

filters: Attributes

A filter to apply.

key: String

The key to compare against the value.

type: String

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
value: String

The value to compare against the attribute key; supports string, number, or boolean types.

filters: List[Attributes]

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

key: String

The key to compare against the value.

type: String

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
value: String

The value to compare against the attribute key; supports string, number, or boolean types.
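As a concrete illustration of the filter attributes above, a file_search tool entry combining comparison filters inside a compound filter could be sketched as follows. The nesting is hypothetical (the provider's exact block syntax is not shown in this reference), the vector store ID is a placeholder, and the "and" combinator is an assumption about how compound filters are joined.

```hcl
# Hypothetical sketch: file search restricted by a compound filter.
tools = [{
  type             = "file_search"
  vector_store_ids = ["vs_example_id"] # placeholder vector store ID
  max_num_results  = 10                # between 1 and 50 inclusive
  filters = {
    type = "and"                       # assumed compound combinator
    filters = [
      { type = "eq",  key = "category", value = "blog" },
      { type = "gte", key = "year",     value = "2023" },
    ]
  }
}]
```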

allowed_domains: List[String]

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

max_num_results: Int64

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options: Attributes

Ranking options for search.

ranker: String

The ranker to use for the file search.

score_threshold: Float64

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.

display_height: Int64

The height of the computer display.

display_width: Int64

The width of the computer display.

environment: String

The type of computer environment to control.

search_context_size: String

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

user_location: Attributes

The approximate location of the user.

city: String

Free text input for the city of the user, e.g. San Francisco.

country: String

The two-letter ISO country code of the user, e.g. US.

region: String

Free text input for the region of the user, e.g. California.

timezone: String

The IANA timezone of the user, e.g. America/Los_Angeles.

type: String

The type of location approximation. Always approximate.
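Assuming the search attributes above belong to the hosted web search tool, a scoped search could be sketched like this (hypothetical nesting; the tool type name and domain are illustrative):

```hcl
# Hypothetical sketch: web search scoped to one domain and a user location.
tools = [{
  type                = "web_search"
  allowed_domains     = ["pubmed.ncbi.nlm.nih.gov"] # subdomains allowed too
  search_context_size = "medium"                    # low, medium, or high
  user_location = {
    type     = "approximate"
    country  = "US"
    region   = "California"
    city     = "San Francisco"
    timezone = "America/Los_Angeles"
  }
}]
```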

server_label: String

A label for this MCP server, used to identify it in tool calls.

allowed_tools: List[String]

List of allowed tool names or a filter object.

authorization: String

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id: String

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
headers: Map[String]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval: Attributes

Specify which of the MCP server’s tools require approval.

always: Attributes

A filter object to specify which tools always require approval.

read_only: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP tool is annotated with readOnlyHint, it will match this filter.

tool_names: List[String]

List of allowed tool names.

never: Attributes

A filter object to specify which tools never require approval.

read_only: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP tool is annotated with readOnlyHint, it will match this filter.

tool_names: List[String]

List of allowed tool names.

server_description: String

Optional description of the MCP server, used to provide more context.

server_url: String

The URL for the MCP server. One of server_url or connector_id must be provided.
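The MCP attributes above can be combined into a tool entry like the following sketch. It is hypothetical: the nesting, server URL, and tool name are placeholders, and the tool type name is assumed.

```hcl
# Hypothetical sketch: remote MCP server with per-tool approval rules.
tools = [{
  type         = "mcp"
  server_label = "docs_server"              # label used to identify the server
  server_url   = "https://example.com/mcp"  # placeholder; or use connector_id
  require_approval = {
    never = {
      read_only = true                      # read-only tools skip approval
    }
    always = {
      tool_names = ["delete_page"]          # placeholder tool name
    }
  }
}]
```

Exactly one of server_url or connector_id must be provided; an OAuth token for either goes in the authorization attribute.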

container: String

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

action: String

Whether to generate a new image or edit an existing image. Default: auto.

background: String

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

input_fidelity: String

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.

input_image_mask: Attributes

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id: String

File ID for the mask image.

image_url: String

Base64-encoded mask image.

model: String

The image generation model to use. Default: gpt-image-1.

moderation: String

Moderation level for the generated image. Default: auto.

output_compression: Int64

Compression level for the output image. Default: 100.

output_format: String

The output format of the generated image. One of png, webp, or jpeg. Default: png.

partial_images: Int64

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

quality: String

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

size: String

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.
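The image generation attributes above might be configured as follows (hypothetical nesting; the tool type name is assumed, and all values are chosen from the enums documented here):

```hcl
# Hypothetical sketch: image generation with explicit output settings.
tools = [{
  type               = "image_generation"
  model              = "gpt-image-1"  # default model
  quality            = "high"         # low, medium, high, or auto
  size               = "1024x1024"    # or 1024x1536, 1536x1024, auto
  output_format      = "webp"         # png, webp, or jpeg
  output_compression = 80             # default is 100
  partial_images     = 2              # 0 (default) to 3 streaming partials
}]
```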

format: Attributes

The input format for the custom tool. Default is unconstrained text.

type: String

Unconstrained text format. Always text.

definition: String

The grammar definition.

syntax: String

The syntax of the grammar definition. One of lark or regex.

tools: List[Attributes]

The function/custom tools available inside this namespace.

name: String
type: String
defer_loading: Bool

Whether this function should be deferred and discovered via tool search.

description: String
parameters: JSON
strict: Bool
format: Attributes

The input format for the custom tool. Default is unconstrained text.

type: String

Unconstrained text format. Always text.

definition: String

The grammar definition.

syntax: String

The syntax of the grammar definition. One of lark or regex.

execution: String

Whether tool search is executed by the server or by the client.

search_content_types: List[String]
usage: Attributes

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.

input_tokens: Int64

The number of input tokens.

input_tokens_details: Attributes

A detailed breakdown of the input tokens.

cached_tokens: Int64

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: Int64

The number of output tokens.

output_tokens_details: Attributes

A detailed breakdown of the output tokens.

reasoning_tokens: Int64

The number of reasoning tokens.

total_tokens: Int64

The total number of tokens used.

openai_response

data "openai_response" "example_response" {
  response_id = "resp_677efb5139a88190b512bc3fef8e535d"
  include = ["file_search_call.results"]
  include_obfuscation = true
  starting_after = 0
  stream = false
}

Responses: Input Items

data openai_response_input_items

required Expand Collapse
response_id: String
optional Expand Collapse
order?: String

The order to return the input items in. Default is desc.

  • asc: Return the input items in ascending order.
  • desc: Return the input items in descending order.
include?: List[String]

Additional fields to include in the response. See the include parameter for Response creation above for more information.

max_items?: Int64

The maximum number of items to fetch. Default: 1000.

computed Expand Collapse
items: List[Attributes]

The items returned by the data source.

id: String

The unique ID of the message input.

content: List[Attributes]

A list of one or many input items to the model, containing different content types.

text: String

The text input to the model.

type: String

The type of the input item. Always input_text.

detail: String

The detail level of the image to be sent to the model. One of high, low, auto, or original. Defaults to auto.

file_id: String

The ID of the file to be sent to the model.

image_url: String

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

file_data: String

The content of the file to be sent to the model.

file_url: String

The URL of the file to be sent to the model.

filename: String

The name of the file to be sent to the model.

annotations: List[Attributes]

The annotations of the text output.

file_id: String

The ID of the file.

filename: String

The filename of the file cited.

index: Int64

The index of the file in the list of files.

type: String

The type of the file citation. Always file_citation.

end_index: Int64

The index of the last character of the URL citation in the message.

start_index: Int64

The index of the first character of the URL citation in the message.

title: String

The title of the web resource.

url: String

The URL of the web resource.

container_id: String

The ID of the container file.

logprobs: List[Attributes]
token: String
bytes: List[Int64]
logprob: Float64
top_logprobs: List[Attributes]
token: String
bytes: List[Int64]
logprob: Float64
refusal: String

The refusal explanation from the model.

role: String

The role of the message input. One of user, system, or developer.

type: String

The type of the message input. Always set to message.

status: String

The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API.

phase: String

Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages — dropping it can degrade performance. Not used for user messages.

queries: List[String]

The queries used to search for files.

results: List[Attributes]

The results of the file search tool call.

attributes: Dynamic

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.

file_id: String

The unique ID of the file.

filename: String

The name of the file.

score: Float64

The relevance score of the file - a value between 0 and 1.

text: String

The text that was retrieved from the file.

call_id: String

An identifier used when responding to the tool call with output.

pending_safety_checks: List[Attributes]

The pending safety checks for the computer call.

id: String

The ID of the pending safety check.

code: String

The type of the pending safety check.

message: String

Details about the pending safety check.

action: Attributes

A click action.

button: String

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

type: String

Specifies the event type. For a click action, this property is always click.

x: Int64

The x-coordinate where the click occurred.

y: Int64

The y-coordinate where the click occurred.

keys: List[String]

The keys being held while clicking.

path: List[Attributes]

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { "x": 100, "y": 200 },
  { "x": 200, "y": 300 }
]
x: Int64

The x-coordinate.

y: Int64

The y-coordinate.

scroll_x: Int64

The horizontal scroll distance.

scroll_y: Int64

The vertical scroll distance.

text: String

The text to type.

query: String

Deprecated: use queries instead. The search query.

queries: List[String]

The search queries.

sources: List[Attributes]

The sources used in the search.

type: String

The type of source. Always url.

url: String

The URL of the source.

url: String

The URL opened by the model.

pattern: String

The pattern or text to search for within the page.

command: List[String]

The command to run.

env: Map[String]

Environment variables to set for the command.

timeout_ms: Int64

Optional timeout in milliseconds for the command.

user: String

Optional user to run the command as.

working_directory: String

Optional working directory to run the command in.

commands: List[String]
max_output_length: Int64

Optional maximum number of characters to return from each command.

actions: List[Attributes]

Flattened batched actions for computer_use. Each action includes a type discriminator and action-specific fields.

button: String

Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.

type: String

Specifies the event type. For a click action, this property is always click.

x: Int64

The x-coordinate where the click occurred.

y: Int64

The y-coordinate where the click occurred.

keys: List[String]

The keys being held while clicking.

path: List[Attributes]

An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { "x": 100, "y": 200 },
  { "x": 200, "y": 300 }
]
x: Int64

The x-coordinate.

y: Int64

The y-coordinate.

scroll_x: Int64

The horizontal scroll distance.

scroll_y: Int64

The vertical scroll distance.

text: String

The text to type.

output: Attributes

A computer screenshot image used with the computer use tool.

type: String

Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot.

file_id: String

The identifier of an uploaded file that contains the screenshot.

image_url: String

The URL of the screenshot image.

acknowledged_safety_checks: List[Attributes]

The safety checks reported by the API that have been acknowledged by the developer.

id: String

The ID of the pending safety check.

code: String

The type of the pending safety check.

message: String

Details about the pending safety check.

created_by: String

The identifier of the actor that created the item.

arguments: String

A JSON string of the arguments to pass to the function.

name: String

The name of the function to run.

namespace: String

The namespace of the function to run.

execution: String

Whether tool search was executed by the server or by the client.

tools: List[Attributes]

The loaded tool definitions returned by tool search.

name: String

The name of the function to call.

parameters: Map[JSON]

A JSON schema object describing the parameters of the function.

strict: Bool

Whether to enforce strict parameter validation. Default true.

type: String

The type of the function tool. Always function.

defer_loading: Bool

Whether this function is deferred and loaded via tool search.

description: String

A description of the function. Used by the model to determine whether or not to call the function.

vector_store_ids: List[String]

The IDs of the vector stores to search.

filters: Attributes

A filter to apply.

key: String

The key to compare against the value.

type: String

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
value: String

The value to compare against the attribute key; supports string, number, or boolean types.

filters: List[Attributes]

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

key: String

The key to compare against the value.

type: String

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
value: String

The value to compare against the attribute key; supports string, number, or boolean types.

allowed_domains: List[String]

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

max_num_results: Int64

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options: Attributes

Ranking options for search.

ranker: String

The ranker to use for the file search.

score_threshold: Float64

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.

display_height: Int64

The height of the computer display.

display_width: Int64

The width of the computer display.

environment: String

The type of computer environment to control.

search_context_size: String

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

user_location: Attributes

The approximate location of the user.

city: String

Free text input for the city of the user, e.g. San Francisco.

country: String

The two-letter ISO country code of the user, e.g. US.

region: String

Free text input for the region of the user, e.g. California.

timezone: String

The IANA timezone of the user, e.g. America/Los_Angeles.

type: String

The type of location approximation. Always approximate.

server_label: String

A label for this MCP server, used to identify it in tool calls.

allowed_tools: List[String]

List of allowed tool names or a filter object.

authorization: String

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id: String

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
headers: Map[String]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval: Attributes

Specify which of the MCP server’s tools require approval.

always: Attributes

A filter object to specify which tools always require approval.

read_only: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP tool is annotated with readOnlyHint, it will match this filter.

tool_names: List[String]

List of allowed tool names.

never: Attributes

A filter object to specify which tools never require approval.

read_only: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP tool is annotated with readOnlyHint, it will match this filter.

tool_names: List[String]

List of allowed tool names.

server_description: String

Optional description of the MCP server, used to provide more context.

server_url: String

The URL for the MCP server. One of server_url or connector_id must be provided.

container: String

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

action: String

Whether to generate a new image or edit an existing image. Default: auto.

background: String

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

input_fidelity: String

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.

input_image_mask: Attributes

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id: String

File ID for the mask image.

image_url: String

Base64-encoded mask image.

model: String

The image generation model to use. Default: gpt-image-1.

moderation: String

Moderation level for the generated image. Default: auto.

output_compression: Int64

Compression level for the output image. Default: 100.

output_format: String

The output format of the generated image. One of png, webp, or jpeg. Default: png.

partial_images: Int64

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

quality: String

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

size: String

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

format: Attributes

The input format for the custom tool. Default is unconstrained text.

type: String

Unconstrained text format. Always text.

definition: String

The grammar definition.

syntax: String

The syntax of the grammar definition. One of lark or regex.

tools: List[Attributes]

The function/custom tools available inside this namespace.

name: String
type: String
defer_loading: Bool

Whether this function should be deferred and discovered via tool search.

description: String
parameters: JSON
strict: Bool
format: Attributes

The input format for the custom tool. Default is unconstrained text.

type: String

Unconstrained text format. Always text.

definition: String

The grammar definition.

syntax: String

The syntax of the grammar definition. One of lark or regex.

execution: String

Whether tool search is executed by the server or by the client.

search_content_types: List[String]
input_schema: JSON

The JSON schema describing the tool’s input.

annotations: JSON

Additional annotations about the tool.

summary: List[Attributes]

Reasoning summary content.

text: String

A summary of the reasoning output from the model so far.

type: String

The type of the object. Always summary_text.

encrypted_content: String

The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter.

result: String

The generated image encoded in base64.

code: String

The code to run, or null if not available.

container_id: String

The ID of the container used to run the code.

outputs: List[Attributes]

The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.

logs: String

The logs output from the code interpreter.

type: String

The type of the output. Always logs.

url: String

The URL of the image output from the code interpreter.

environment: Attributes

Represents the use of a local environment to perform shell actions.

type: String

The environment type. Always local.

container_id: String
max_output_length: Int64

The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.

operation: Attributes

One of the create_file, delete_file, or update_file operations applied via apply_patch.

diff: String

Diff to apply.

path: String

Path of the file to create.

type: String

Create a new file with the provided diff.

server_label: String

The label of the MCP server.

error: String

Error message if the server could not list tools.

approval_request_id: String

The ID of the approval request being answered.

approve: Bool

Whether the request was approved.

reason: String

Optional reason for the decision.

input: String

The input for the custom tool call generated by the model.

openai_response_input_items

data "openai_response_input_items" "example_response_input_items" {
  response_id = "response_id"
  include = ["file_search_call.results"]
  order = "asc"
}