Create chat completion
Starting a new project? We recommend trying Responses to take advantage of the latest OpenAI platform features. Compare Chat Completions with Responses.
Creates a model response for the given chat conversation. Learn more in the text generation, vision, and audio guides.
Parameter support can differ depending on the model used to generate the response, particularly for newer reasoning models. Parameters that are only supported for reasoning models are noted below. For the current state of unsupported parameters in reasoning models, refer to the reasoning guide.
Returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request is streamed.
Parameters
ChatCompletionCreateParams params
List<ChatCompletionMessageParam> messages
class ChatCompletionDeveloperMessageParam: Developer-provided instructions that the model should follow, regardless of messages sent by the user. With o1 models and newer, developer messages replace the previous system messages.
class ChatCompletionSystemMessageParam: Developer-provided instructions that the model should follow, regardless of messages sent by the user. With o1 models and newer, use developer messages for this purpose instead.
class ChatCompletionUserMessageParam: Messages sent by an end user, containing prompts or additional context information.
Content content: The contents of the user message.
class ChatCompletionContentPartText: Learn about text inputs.
class ChatCompletionContentPartImage: Learn about image inputs.
ImageUrl imageUrl
Optional<Detail> detail: Specifies the detail level of the image. Learn more in the Vision guide.
class ChatCompletionContentPartInputAudio: Learn about audio inputs.
class ChatCompletionAssistantMessageParam: Messages sent by the model in response to user messages.
The role of the message's author, in this case assistant.
Optional<Audio> audio: Data about a previous audio response from the model. Learn more.
Optional<Content> content: The contents of the assistant message. Required unless tool_calls or function_call is specified.
List<ChatCompletionRequestAssistantMessageContentPart>
class ChatCompletionContentPartText: Learn about text inputs.
Deprecated Optional<FunctionCall> functionCall: Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.
An optional name for the participant. Provides the model information to differentiate between participants of the same role.
The tool calls generated by the model, such as function calls.
class ChatCompletionMessageFunctionToolCall: A call to a function tool created by the model.
Function function: The function that the model called.
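To make the message shapes concrete, here is a minimal sketch of building a multi-turn conversation with the Java SDK. addDeveloperMessage appears in the full example at the end of this page; addUserMessage and addAssistantMessage are assumed to be the analogous convenience methods for the other roles.

    ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
            .model(ChatModel.GPT_5_4)
            // Developer instructions take precedence over user messages.
            .addDeveloperMessage("Answer concisely.")
            // A prior exchange, replayed so the model has conversation context.
            .addUserMessage("What is the capital of France?")
            .addAssistantMessage("Paris.")
            // The latest user turn the model should respond to.
            .addUserMessage("And of Italy?")
            .build();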
ChatModel model: Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.
Parameters for audio output. Required when audio output is requested with
modalities: ["audio"]. Learn more.
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Deprecated Optional<FunctionCall> functionCall: Deprecated in favor of tool_choice.
Controls which (if any) function is called by the model.
none means the model will not call a function and instead generates a message.
auto means the model can pick between generating a message or calling a function.
Specifying a particular function via {"name": "my_function"} forces the model to call that function.
none is the default when no functions are present. auto is the default if functions are present.
Deprecated Optional<List<Function>> functions: Deprecated in favor of tools.
A list of functions the model may generate JSON inputs for.
The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the function does, used by the model to choose when and how to call the function.
The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.
Omitting parameters defines a function with an empty parameter list.
Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
Whether to return log probabilities of the output tokens or not. If true,
returns the log probabilities of each output token returned in the
content of message.
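As a sketch, enabling log probabilities is two builder calls, assuming the logprobs and topLogprobs setters mirror the wire parameters (top_logprobs is documented further below):

    ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
            .model(ChatModel.GPT_5_4)
            .addUserMessage("Hello!")
            // Return log probabilities for each output token...
            .logprobs(true)
            // ...plus the 3 most likely alternatives at each position.
            .topLogprobs(3L)
            .build();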
An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.
This value is now deprecated in favor of max_completion_tokens, and is
not compatible with o-series models.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Optional<List<Modality>> modalities: Output types that you would like the model to generate. Most models are capable of generating text, which is the default:
["text"]
The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:
["text", "audio"]
How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
Whether to enable parallel function calling during tool use.
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
Optional<PromptCacheRetention> promptCacheRetention: The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
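For example, a lower effort can be requested at build time. This sketch assumes the SDK exposes a reasoningEffort setter and a ReasoningEffort enum; the GPT_5_1 constant is a hypothetical stand-in for the gpt-5.1 model ID.

    ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
            .model(ChatModel.GPT_5_1)  // hypothetical enum constant for gpt-5.1
            .addUserMessage("Summarize this contract clause in one sentence.")
            // Spend fewer reasoning tokens for a faster, cheaper response.
            .reasoningEffort(ReasoningEffort.LOW)
            .build();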
Optional<ResponseFormat> responseFormat: An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.
class ResponseFormatJsonSchema: JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
JsonSchema jsonSchema: Structured Outputs configuration options, including a JSON Schema.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the response format is for, used by the model to determine how to respond in the format.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
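The sketch below wires a small json_schema response format into a request. The exact builder and nested class names (JsonSchema, JsonSchema.Schema, putAdditionalProperty) are assumptions based on the SDK's generated-model conventions, not confirmed signatures.

    import com.openai.core.JsonValue;
    import com.openai.models.ResponseFormatJsonSchema;
    import com.openai.models.ResponseFormatJsonSchema.JsonSchema;

    // Force the model to emit JSON with a single required string field "answer".
    ResponseFormatJsonSchema responseFormat = ResponseFormatJsonSchema.builder()
            .jsonSchema(JsonSchema.builder()
                    .name("simple_answer")
                    .strict(true)  // only a subset of JSON Schema is allowed when strict
                    .schema(JsonSchema.Schema.builder()
                            .putAdditionalProperty("type", JsonValue.from("object"))
                            .putAdditionalProperty("properties", JsonValue.from(
                                    java.util.Map.of("answer", java.util.Map.of("type", "string"))))
                            .putAdditionalProperty("required", JsonValue.from(java.util.List.of("answer")))
                            .putAdditionalProperty("additionalProperties", JsonValue.from(false))
                            .build())
                    .build())
            .build();

    ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
            .model(ChatModel.GPT_5_4)
            .addUserMessage("What is 2 + 2?")
            .responseFormat(responseFormat)
            .build();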
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user, with a maximum length of 64 characters. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.
This feature is in Beta.
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.
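Because determinism is best-effort, a reasonable pattern is to pin a seed and log the fingerprint alongside each result. This sketch assumes seed and systemFingerprint accessors that mirror the wire fields, and reuses a client built as in the example on this page.

    ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
            .model(ChatModel.GPT_5_4)
            .addUserMessage("Pick a random color.")
            .seed(42L)  // same seed + same params should usually reproduce the output
            .build();

    ChatCompletion completion = client.chat().completions().create(params);
    // If the fingerprint changes between runs, a backend change may explain
    // different outputs despite an identical seed.
    completion.systemFingerprint().ifPresent(fp ->
            System.out.println("system_fingerprint: " + fp));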
Optional<ServiceTier> serviceTier: Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
Optional<Stop> stop: Not supported with latest reasoning models o3 and o4-mini.
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
Whether or not to store the output of this chat completion request for use in our model distillation or evals products.
Supports text and image inputs. Note: image inputs over 8MB will be dropped.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or more tools.
required means the model must call one or more tools.
Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
none is the default when no tools are present. auto is the default if tools are present.
A list of tools the model may call. You can provide either custom tools or function tools.
class ChatCompletionFunctionTool: A function tool that can be used to generate a response.
FunctionDefinition function
The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the function does, used by the model to choose when and how to call the function.
The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.
Omitting parameters defines a function with an empty parameter list.
Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.
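A sketch of defining one strict function tool and attaching it to a request follows. FunctionDefinition, FunctionParameters, and addTool are assumed from the SDK's function-calling conventions; the get_weather function itself is a hypothetical example.

    import com.openai.core.JsonValue;
    import com.openai.models.FunctionDefinition;
    import com.openai.models.FunctionParameters;
    import com.openai.models.chat.completions.ChatCompletionFunctionTool;

    // One function tool with a strict, single-parameter JSON Schema.
    ChatCompletionFunctionTool weatherTool = ChatCompletionFunctionTool.builder()
            .function(FunctionDefinition.builder()
                    .name("get_weather")  // hypothetical function
                    .description("Get the current weather for a city.")
                    .strict(true)
                    .parameters(FunctionParameters.builder()
                            .putAdditionalProperty("type", JsonValue.from("object"))
                            .putAdditionalProperty("properties", JsonValue.from(
                                    java.util.Map.of("city", java.util.Map.of("type", "string"))))
                            .putAdditionalProperty("required", JsonValue.from(java.util.List.of("city")))
                            .putAdditionalProperty("additionalProperties", JsonValue.from(false))
                            .build())
                    .build())
            .build();

    ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
            .model(ChatModel.GPT_5_4)
            .addUserMessage("What's the weather in Paris?")
            .addTool(weatherTool)
            .build();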
An integer between 0 and 20 specifying the number of most likely tokens to
return at each token position, each with an associated log probability.
logprobs must be set to true if this parameter is used.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
Optional<Verbosity> verbosity: Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.
Optional<WebSearchOptions> webSearchOptions: This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
Optional<SearchContextSize> searchContextSize: High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
Optional<UserLocation> userLocation: Approximate location parameters for the search.
Approximate approximate: Approximate location parameters for the search.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Returns
class ChatCompletion: Represents a chat completion response returned by the model, based on the provided input.
List<Choice> choices: A list of chat completion choices. Can be more than one if n is greater than 1.
FinishReason finishReason: The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.
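Callers typically branch on this value before trusting the content. A sketch, assuming finishReason() returns a comparable FinishReason value with constants matching the wire names:

    ChatCompletion.Choice choice = chatCompletion.choices().get(0);
    // "length" means the output was cut off by the token limit;
    // "tool_calls" means the model wants a tool executed instead.
    if (choice.finishReason().equals(ChatCompletion.Choice.FinishReason.LENGTH)) {
        System.out.println("Truncated output; consider raising maxCompletionTokens.");
    } else if (choice.finishReason().equals(ChatCompletion.Choice.FinishReason.TOOL_CALLS)) {
        System.out.println("Model requested a tool call.");
    }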
Optional<Logprobs> logprobs: Log probability information for the choice.
A list of message content tokens with log probability information.
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
List<TopLogprob> topLogprobs: List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
A list of message refusal tokens with log probability information.
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
List<TopLogprob> topLogprobs: List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
ChatCompletionMessage message: A chat completion message generated by the model.
Optional<List<Annotation>> annotations: Annotations for the message, when applicable, as when using the web search tool.
If the audio output modality is requested, this object contains data about the audio response from the model. Learn more.
Deprecated Optional<FunctionCall> functionCall: Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.
The tool calls generated by the model, such as function calls.
class ChatCompletionMessageFunctionToolCall: A call to a function tool created by the model.
Function function: The function that the model called.
The object type, which is always chat.completion.
Optional<ServiceTier> serviceTier: Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
This fingerprint represents the backend configuration that the model runs with.
Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
Usage statistics for the completion request.
Optional<CompletionTokensDetails> completionTokensDetails: Breakdown of tokens used in a completion.
When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion.
When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits.
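Reading the usage block is a common cost-monitoring step. This sketch assumes the usage accessor returns an object with promptTokens/completionTokens/totalTokens getters matching the response fields, and reuses the chatCompletion from the create example on this page.

    // Token accounting for the request; useful for cost monitoring.
    chatCompletion.usage().ifPresent(usage -> {
        // Mirrors prompt_tokens / completion_tokens / total_tokens in the response.
        System.out.println("prompt tokens:     " + usage.promptTokens());
        System.out.println("completion tokens: " + usage.completionTokens());
        System.out.println("total tokens:      " + usage.totalTokens());
    });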
class ChatCompletionChunk: Represents a streamed chunk of a chat completion response returned by the model, based on the provided input. Learn more.
List<Choice> choices: A list of chat completion choices. Can contain more than one element if n is greater than 1. Can also be empty for the last chunk if you set stream_options: {"include_usage": true}.
Delta delta: A chat completion delta generated by streamed model responses.
Deprecated Optional<FunctionCall> functionCall: Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.
Optional<List<ToolCall>> toolCalls
Optional<Function> function
Optional<FinishReason> finishReason: The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.
Optional<Logprobs> logprobs: Log probability information for the choice.
A list of message content tokens with log probability information.
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
List<TopLogprob> topLogprobs: List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
A list of message refusal tokens with log probability information.
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
List<TopLogprob> topLogprobs: List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.
The object type, which is always chat.completion.chunk.
Optional<ServiceTier> serviceTier: Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
This fingerprint represents the backend configuration that the model runs with.
Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
An optional field that will only be present when you set stream_options: {"include_usage": true} in your request. When present, it contains a null value except for the last chunk which contains the token usage statistics for the entire request.
NOTE: If the stream is interrupted or cancelled, you may not receive the final usage chunk which contains the total token usage for the request.
Optional<CompletionTokensDetails> completionTokensDetails: Breakdown of tokens used in a completion.
When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion.
When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits.
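A sketch of consuming chunks with the Java SDK, assuming a createStreaming method that returns a closeable stream of ChatCompletionChunk values, and reusing the client and params from the create example below:

    import com.openai.core.http.StreamResponse;
    import com.openai.models.chat.completions.ChatCompletionChunk;

    // Print content deltas as they arrive; closing the stream releases the connection.
    try (StreamResponse<ChatCompletionChunk> stream =
            client.chat().completions().createStreaming(params)) {
        stream.stream().forEach(chunk -> chunk.choices().stream()
                .findFirst()
                .flatMap(choice -> choice.delta().content())
                .ifPresent(System.out::print));
    }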
Create chat completion
package com.openai.example;

import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

public final class Main {
    private Main() {}

    public static void main(String[] args) {
        // Reads the API key from the OPENAI_API_KEY environment variable.
        OpenAIClient client = OpenAIOkHttpClient.fromEnv();

        // Build a request with a developer instruction and a user prompt.
        ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
                .addDeveloperMessage("You are a helpful assistant.")
                .addUserMessage("Hello!")
                .model(ChatModel.GPT_5_4)
                .build();

        ChatCompletion chatCompletion = client.chat().completions().create(params);

        // Print the assistant's reply from the first choice.
        chatCompletion.choices().get(0).message().content().ifPresent(System.out::println);
    }
}
Returns Examples
{
"id": "chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT",
"object": "chat.completion",
"created": 1741569952,
"model": "gpt-5.4",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I assist you today?",
"refusal": null,
"annotations": []
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 19,
"completion_tokens": 10,
"total_tokens": 29,
"prompt_tokens_details": {
"cached_tokens": 0,
"audio_tokens": 0
},
"completion_tokens_details": {
"reasoning_tokens": 0,
"audio_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
},
"service_tier": "default"
}