Chat
ChatCompletions
Given a list of messages comprising a conversation, the model will return a response.
resource openai_chat_completion
required
Model ID used to generate the response, like gpt-4o or o3. OpenAI
offers a wide range of models with different capabilities, performance
characteristics, and price points. Refer to the model guide
to browse and compare available models.
optional
Deprecated in favor of tool_choice.
Controls which (if any) function is called by the model.
none means the model will not call a function and instead generates a
message.
auto means the model can pick between generating a message or calling a
function.
Specifying a particular function via {"name": "my_function"} forces the
model to call that function.
none is the default when no functions are present. auto is the default
if functions are present.
An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.
This value is now deprecated in favor of max_completion_tokens, and is
not compatible with o-series models.
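For example, to cap a completion at roughly a thousand tokens (the value is illustrative):
max_completion_tokens = 1024  # includes visible output and reasoning tokens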
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
A stable identifier used to help detect users of your application that may be violating OpenAI’s usage policies. Each ID should be a string that uniquely identifies a user, with a maximum length of 64 characters. We recommend hashing the username or email address to avoid sending us any identifying information. Learn more.
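One way to populate both fields without sending raw identifiers, assuming a hypothetical var.user_email variable, is Terraform's built-in sha256 function:
prompt_cache_key  = "support-bot-v2"        # stable key per prompt prefix (placeholder)
safety_identifier = sha256(var.user_email)  # hashed, so no identifying information is sent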
This feature is in Beta.
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.
If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below, and the streaming responses guide, for more information on how to handle these events.
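A minimal streaming fragment; include_usage additionally requests a final usage chunk (the stream_options shape follows the full example below):
stream = true
stream_options = {
  include_usage = true  # stream a final chunk with token usage
}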
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or more tools.
required means the model must call one or more tools.
Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
none is the default when no tools are present. auto is the default if tools are present.
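The forced form, translated from the JSON above into HCL (the nested shape mirrors the tools block in the full example; my_function is a placeholder):
tool_choice = {
  type = "function"
  function = {
    name = "my_function"  # the model must call this tool
  }
}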
An integer between 0 and 20 specifying the number of most likely tokens to
return at each token position, each with an associated log probability.
logprobs must be set to true if this parameter is used.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
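For example, to ban one token and mildly encourage another (the token IDs below are placeholders; real IDs depend on the model's tokenizer):
logit_bias = {
  "50256" = -100  # effectively bans this token
  "15496" = 5     # mildly increases this token's likelihood
}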
Output types that you would like the model to generate. Most models are capable of generating text, which is the default:
["text"]
The gpt-4o-audio-preview model can also be used to
generate audio. To request that this model generate
both text and audio responses, you can use:
["text", "audio"]
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
Whether to return log probabilities of the output tokens or not. If true,
returns the log probabilities of each output token returned in the
content of message.
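The two parameters are used together; for example:
logprobs     = true  # required whenever top_logprobs is set
top_logprobs = 5     # return the 5 most likely alternatives per position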
How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
Whether to enable parallel function calling during tool use.
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort. xhigh is supported for all models after gpt-5.1-codex-max.
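For example, a low-effort request against gpt-5.1 (per the defaults above, "none" would disable reasoning entirely on this model):
model            = "gpt-5.1"
reasoning_effort = "low"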
Specifies the processing type used for serving the request.
- If set to ‘auto’, then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use ‘default’.
- If set to ‘default’, then the request will be processed with the standard pricing and performance for the selected model.
- If set to ‘flex’ or ‘priority’, then the request will be processed with the corresponding service tier.
- When not set, the default behavior is ‘auto’.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
Not supported with latest reasoning models o3 and o4-mini.
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
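For example, assuming the provider also accepts the API's array form (the full example below uses a plain string):
stop = ["\n\n", "END"]  # generation halts before emitting either sequence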
Whether or not to store the output of this chat completion request for use in our model distillation or evals products.
Supports text and image inputs. Note: image inputs over 8MB will be dropped.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
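For example, a focused, near-deterministic configuration (leaving top_p at its default, per the recommendation above):
temperature = 0.2  # low temperature for focused output
# top_p = 0.1     # alternative: nucleus sampling; avoid combining with temperature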
Constrains the verbosity of the model’s response. Lower values will result in
more concise responses, while higher values will result in more verbose responses.
Currently supported values are low, medium, and high.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
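For example, to tag completions for later filtering in the dashboard (keys and values are placeholders):
metadata = {
  environment = "production"
  feature     = "chat-summarizer"
}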
openai_chat_completion
resource "openai_chat_completion" "example_chat_completion" {
messages = [{
content = "string"
role = "developer"
name = "name"
}]
model = "gpt-5.4"
audio = {
format = "wav"
voice = "string"
}
frequency_penalty = -2
function_call = "none"
functions = [{
name = "name"
description = "description"
parameters = {
foo = "bar"
}
}]
logit_bias = {
foo = 0
}
logprobs = true
max_completion_tokens = 0
max_tokens = 0
metadata = {
foo = "string"
}
modalities = ["text"]
n = 1
parallel_tool_calls = true
prediction = {
content = "string"
type = "content"
}
presence_penalty = -2
prompt_cache_key = "prompt-cache-key-1234"
prompt_cache_retention = "in_memory"
reasoning_effort = "none"
response_format = {
type = "text"
}
safety_identifier = "safety-identifier-1234"
seed = -9007199254740991
service_tier = "auto"
stop = <<EOT
EOT
store = true
stream = false
stream_options = {
include_obfuscation = true
include_usage = true
}
temperature = 1
tool_choice = "none"
tools = [{
function = {
name = "name"
description = "description"
parameters = {
foo = "bar"
}
strict = true
}
type = "function"
}]
top_logprobs = 0
top_p = 1
user = "user-1234"
verbosity = "low"
web_search_options = {
search_context_size = "low"
user_location = {
approximate = {
city = "city"
country = "country"
region = "region"
timezone = "timezone"
}
type = "approximate"
}
}
}
data openai_chat_completion
computed
Specifies the processing type used for serving the request.
- If set to ‘auto’, then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use ‘default’.
- If set to ‘default’, then the request will be processed with the standard pricing and performance for the selected model.
- If set to ‘flex’ or ‘priority’, then the request will be processed with the corresponding service tier.
- When not set, the default behavior is ‘auto’.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
openai_chat_completion
data "openai_chat_completion" "example_chat_completion" {
completion_id = "completion_id"
}
data openai_chat_completions
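Lists stored Chat Completions, optionally filtered by model or metadata. Only Chat Completions that were created with the store parameter set to true are returned.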
openai_chat_completions
data "openai_chat_completions" "example_chat_completions" {
metadata = {
foo = "string"
}
model = "model"
}
ChatCompletionsMessages
Given a chat completion ID, returns the messages of a stored chat completion. Only chat completions created with the store parameter set to true can be retrieved.
data openai_chat_completion_messages
optional
openai_chat_completion_messages
data "openai_chat_completion_messages" "example_chat_completion_messages" {
completion_id = "completion_id"
}