Responses
resource openai_response
optional
The conversation that this response belongs to. Items from this conversation are prepended to input_items for this response request.
Input items and output items from this response are automatically added to this conversation after this response completes.
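For illustration, a minimal sketch that pins a response to an existing conversation (the conversation ID here is a hypothetical placeholder):

resource "openai_response" "in_conversation" {
  model        = "gpt-4o"
  input        = "Summarize our discussion so far."
  conversation = "conv_abc123" # hypothetical conversation ID
}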
A system (or developer) message inserted into the model’s context.
When used along with previous_response_id, the instructions from a previous
response are not carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
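As a sketch, two chained responses that swap the system message between turns, assuming the resource exposes its response ID as an id attribute:

resource "openai_response" "first_turn" {
  model        = "gpt-4o"
  instructions = "Respond formally."
  input        = "Hello!"
}

resource "openai_response" "second_turn" {
  model                = "gpt-4o"
  previous_response_id = openai_response.first_turn.id
  instructions         = "Respond casually." # not inherited from first_turn
  input                = "And now?"
}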
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
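A minimal sketch combining both caps:

resource "openai_response" "capped" {
  model             = "gpt-4o"
  input             = "Summarize the attached files."
  max_output_tokens = 1024 # bounds visible output plus reasoning tokens
  max_tool_calls    = 3    # shared budget across all built-in tools
}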
Model ID used to generate the response, like gpt-4o or o3. OpenAI
offers a wide range of models with different capabilities, performance
characteristics, and price points. Refer to the model guide
to browse and compare available models.
The unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
conversation state. Cannot be used in conjunction with conversation.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
A stable identifier used to help detect users of your application who may be violating OpenAI’s usage policies. The ID should be a string that uniquely identifies each user, with a maximum length of 64 characters. We recommend hashing the user’s username or email address so that no identifying information is sent to OpenAI. Learn more.
If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below for more information.
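For example, a sketch that opts into streaming:

resource "openai_response" "streamed" {
  model  = "gpt-4o"
  input  = "Write a haiku about infrastructure."
  stream = true # deliver output as server-sent events
}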
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
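A sketch of a function tool paired with tool_choice; the get_weather function and its parameters map are hypothetical placeholders mirroring the shape of the full example below:

resource "openai_response" "with_tool" {
  model       = "gpt-4o"
  input       = "What is the weather in Paris?"
  tool_choice = "auto" # let the model decide whether to call the tool
  tools = [{
    type        = "function"
    name        = "get_weather" # hypothetical function
    description = "Look up current weather for a city."
    strict      = true
    parameters = {
      city = "string" # placeholder schema, mirroring the full example below
    }
  }]
}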
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
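A sketch that replaces user with the two newer fields; sha256 is Terraform’s built-in hash function, used here so no raw identifier is sent:

resource "openai_response" "per_user" {
  model                  = "gpt-4o"
  input                  = "Hi there."
  prompt_cache_key       = "tenant-42-chat"           # buckets similar requests for caching
  prompt_cache_retention = "24h"                      # keep cached prefixes up to 24 hours
  safety_identifier      = sha256("user@example.com") # hashed so no raw identifier is sent
}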
Specify additional output data to include in the model response. Currently supported values are:
- web_search_call.action.sources: Include the sources of the web search tool call.
- code_interpreter_call.outputs: Include the outputs of Python code execution in code interpreter tool call items.
- computer_call_output.output.image_url: Include image URLs from the computer call output.
- file_search_call.results: Include the search results of the file search tool call.
- message.input_image.image_url: Include image URLs from the input message.
- message.output_text.logprobs: Include logprobs with assistant messages.
- reasoning.encrypted_content: Include an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (for example, when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).
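As a sketch, requesting encrypted reasoning content so a stateless workflow can still carry reasoning across turns:

resource "openai_response" "stateless" {
  model   = "gpt-5.1"
  input   = "Plan a three-step refactor."
  store   = false                           # run the Responses API statelessly
  include = ["reasoning.encrypted_content"] # reasoning survives as an encrypted item
}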
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
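For instance, a sketch that tags a response for later querying; the keys and values are arbitrary examples:

resource "openai_response" "tagged" {
  model = "gpt-4o"
  input = "Hello"
  metadata = {
    team   = "billing"  # keys: at most 64 characters
    ticket = "OPS-1234" # values: at most 512 characters
  }
}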
Whether to run the model response in the background. Learn more.
Specifies the processing type used for serving the request.
- If set to ‘auto’, then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use ‘default’.
- If set to ‘default’, then the request will be processed with the standard pricing and performance for the selected model.
- If set to ‘flex’ or ‘priority’, then the request will be processed with the corresponding service tier.
- When not set, the default behavior is ‘auto’.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
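A minimal sketch pinning a request to the flex tier:

resource "openai_response" "flex_tier" {
  model        = "gpt-4o"
  input        = "Batch-summarize these logs."
  service_tier = "flex" # the response reports the tier actually used
}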
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
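A sketch following that recommendation, lowering temperature and leaving top_p untouched:

resource "openai_response" "focused" {
  model       = "gpt-4o"
  input       = "List three facts about HCL."
  temperature = 0.2 # more focused output; top_p left at its default
}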
The truncation strategy to use for the model response.
- auto: If the input to this Response exceeds the model’s context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
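A sketch opting into automatic truncation for long conversations:

resource "openai_response" "long_history" {
  model      = "gpt-4o"
  input      = "Continue the conversation."
  truncation = "auto" # drop the oldest items instead of failing with a 400
}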
computed
Unix timestamp (in seconds) of when this Response was completed.
Only present when the status is completed.
SDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
openai_response
resource "openai_response" "example_response" {
background = true
context_management = [{
type = "type"
compact_threshold = 1000
}]
conversation = "string"
include = ["file_search_call.results"]
input = "string"
instructions = "instructions"
max_output_tokens = 16
max_tool_calls = 0
metadata = {
foo = "string"
}
model = "gpt-5.1"
parallel_tool_calls = true
previous_response_id = "previous_response_id"
prompt = {
id = "id"
variables = {
foo = "string"
}
version = "version"
}
prompt_cache_key = "prompt-cache-key-1234"
prompt_cache_retention = "in_memory"
reasoning = {
effort = "none"
generate_summary = "auto"
summary = "auto"
}
safety_identifier = "safety-identifier-1234"
service_tier = "auto"
store = true
stream = false
stream_options = {
include_obfuscation = true
}
temperature = 1
text = {
format = {
type = "text"
}
verbosity = "low"
}
tool_choice = "none"
tools = [{
name = "name"
parameters = {
foo = "bar"
}
strict = true
type = "function"
defer_loading = true
description = "description"
}]
top_logprobs = 0
top_p = 1
truncation = "auto"
user = "user-1234"
}
data openai_response
required
If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below for more information.
optional
When true, stream obfuscation will be enabled. Stream obfuscation adds
random characters to an obfuscation field on streaming delta events
to normalize payload sizes as a mitigation to certain side-channel
attacks. These obfuscation fields are included by default, but add a
small amount of overhead to the data stream. You can set
include_obfuscation to false to optimize for bandwidth if you trust
the network links between your application and the OpenAI API.
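A sketch of a streaming read over a trusted network, trading the obfuscation padding for bandwidth; the response ID reuses the placeholder from the example below:

data "openai_response" "lean_stream" {
  response_id         = "resp_677efb5139a88190b512bc3fef8e535d"
  stream              = true
  include_obfuscation = false # trusted network path: skip the padding overhead
}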
computed
Whether to run the model response in the background. Learn more.
Unix timestamp (in seconds) of when this Response was completed.
Only present when the status is completed.
A system (or developer) message inserted into the model’s context.
When used along with previous_response_id, the instructions from a previous
response are not carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
Model ID used to generate the response, like gpt-4o or o3. OpenAI
offers a wide range of models with different capabilities, performance
characteristics, and price points. Refer to the model guide
to browse and compare available models.
SDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
The unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
conversation state. Cannot be used in conjunction with conversation.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
A stable identifier used to help detect users of your application who may be violating OpenAI’s usage policies. The ID should be a string that uniquely identifies each user, with a maximum length of 64 characters. We recommend hashing the user’s username or email address so that no identifying information is sent to OpenAI. Learn more.
Specifies the processing type used for serving the request.
- If set to ‘auto’, then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use ‘default’.
- If set to ‘default’, then the request will be processed with the standard pricing and performance for the selected model.
- If set to ‘flex’ or ‘priority’, then the request will be processed with the corresponding service tier.
- When not set, the default behavior is ‘auto’.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
The status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
The truncation strategy to use for the model response.
- auto: If the input to this Response exceeds the model’s context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
openai_response
data "openai_response" "example_response" {
response_id = "resp_677efb5139a88190b512bc3fef8e535d"
include = ["file_search_call.results"]
include_obfuscation = true
starting_after = 0
stream = false
}
Response Input Items
data openai_response_input_items
optional
The order to return the input items in. Default is desc.
- asc: Return the input items in ascending order.
- desc: Return the input items in descending order.
openai_response_input_items
data "openai_response_input_items" "example_response_input_items" {
response_id = "response_id"
include = ["file_search_call.results"]
order = "asc"
}