Realtime
Models
AudioTranscription = object { language, model, prompt }
The language of the input audio. Supplying the input language in
ISO-639-1 (e.g. en) format
will improve accuracy and latency.
model: optional string or "whisper-1" or "gpt-4o-mini-transcribe" or "gpt-4o-mini-transcribe-2025-12-15" or 2 more
The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.
UnionMember1 = "whisper-1" or "gpt-4o-mini-transcribe" or "gpt-4o-mini-transcribe-2025-12-15" or 2 more
An optional text to guide the model's style or continue a previous audio
segment.
For whisper-1, the prompt is a list of keywords.
For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example "expect words related to technology".
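As an illustrative sketch (field names follow the AudioTranscription object above; the sample keyword list and prompt text are invented), a transcription configuration might look like:

```ts
// Hypothetical transcription settings for input audio.
// whisper-1 treats the prompt as a keyword list; the gpt-4o-transcribe
// family (excluding gpt-4o-transcribe-diarize) treats it as free text.
const whisperTranscription = {
  model: "whisper-1",
  language: "en", // ISO-639-1
  prompt: "Realtime, WebRTC, DTMF", // keyword hints
};

const gpt4oTranscription = {
  model: "gpt-4o-mini-transcribe",
  language: "en",
  prompt: "expect words related to technology", // free-text guidance
};
```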
ConversationCreatedEvent = object { conversation, event_id, type } Returned when a conversation is created. Emitted right after session creation.
conversation: object { id, object } The conversation resource.
The unique ID of the conversation.
The object type, must be realtime.conversation.
The unique ID of the server event.
The event type, must be conversation.created.
ConversationItem = RealtimeConversationItemSystemMessage { content, role, type, 3 more } or RealtimeConversationItemUserMessage { content, role, type, 3 more } or RealtimeConversationItemAssistantMessage { content, role, type, 3 more } or 6 more
A single item within a Realtime conversation.
RealtimeConversationItemSystemMessage = object { content, role, type, 3 more } A system message in a Realtime conversation can be used to provide additional context or instructions to the model. This is similar but distinct from the instruction prompt provided at the start of a conversation, as system messages can be added at any point in the conversation. For major changes to the conversation's behavior, use instructions, but for smaller updates (e.g. "the user is now asking about a different topic"), use system messages.
content: array of object { text, type } The content of the message.
The text content.
The content type. Always input_text for system messages.
The role of the message sender. Always system.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"
The status of the item. Has no effect on the conversation.
RealtimeConversationItemUserMessage = object { content, role, type, 3 more } A user message item in a Realtime conversation.
content: array of object { audio, detail, image_url, 3 more } The content of the message.
Base64-encoded audio bytes (for input_audio), these will be parsed as the format specified in the session input audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.
detail: optional "auto" or "low" or "high"
The detail level of the image (for input_image). auto will default to high.
Base64-encoded image bytes (for input_image) as a data URI. For example data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA.... Supported formats are PNG and JPEG.
The text content (for input_text).
Transcript of the audio (for input_audio). This is not sent to the model, but will be attached to the message item for reference.
type: optional "input_text" or "input_audio" or "input_image"
The content type (input_text, input_audio, or input_image).
The role of the message sender. Always user.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"
The status of the item. Has no effect on the conversation.
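For illustration only, a user message item combining text and image content might be shaped like this (the data URI is the truncated placeholder from above, and the other values are made up):

```ts
// Hypothetical user message item with input_text and input_image parts.
const userMessageItem = {
  type: "message",
  role: "user",
  content: [
    { type: "input_text", text: "What is shown in this image?" },
    {
      type: "input_image",
      image_url: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA....", // placeholder
      detail: "auto", // auto defaults to high
    },
  ],
};
```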
RealtimeConversationItemAssistantMessage = object { content, role, type, 3 more } An assistant message item in a Realtime conversation.
content: array of object { audio, text, transcript, type } The content of the message.
Base64-encoded audio bytes, these will be parsed as the format specified in the session output audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.
The text content.
The transcript of the audio content, this will always be present if the output type is audio.
type: optional "output_text" or "output_audio"
The content type, output_text or output_audio depending on the session output_modalities configuration.
The role of the message sender. Always assistant.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"
The status of the item. Has no effect on the conversation.
RealtimeConversationItemFunctionCall = object { arguments, name, type, 4 more } A function call item in a Realtime conversation.
The arguments of the function call. This is a JSON-encoded string representing the arguments passed to the function, for example {"arg1": "value1", "arg2": 42}.
The name of the function being called.
The type of the item. Always function_call.
The unique ID of the item. This may be provided by the client or generated by the server.
The ID of the function call.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"
The status of the item. Has no effect on the conversation.
RealtimeConversationItemFunctionCallOutput = object { call_id, output, type, 3 more } A function call output item in a Realtime conversation.
The ID of the function call this output is for.
The output of the function call, this is free text and can contain any information or simply be empty.
The type of the item. Always function_call_output.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"
The status of the item. Has no effect on the conversation.
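As a sketch (the call_id and output values are invented), a client would typically answer a function_call item by adding a function_call_output item that references the same call ID:

```ts
// Hypothetical function call output; call_id must match the function_call
// item the model produced earlier in the conversation.
const functionCallOutput = {
  type: "function_call_output",
  call_id: "call_abc123", // placeholder
  output: JSON.stringify({ temperature: 21, unit: "celsius" }), // free text
};
```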
RealtimeMcpApprovalResponse = object { id, approval_request_id, approve, 2 more } A Realtime item responding to an MCP approval request.
The unique ID of the approval response.
The ID of the approval request being answered.
Whether the request was approved.
The type of the item. Always mcp_approval_response.
Optional reason for the decision.
RealtimeMcpListTools = object { server_label, tools, type, id } A Realtime item listing tools available on an MCP server.
The label of the MCP server.
tools: array of object { input_schema, name, annotations, description } The tools available on the server.
The JSON schema describing the tool's input.
The name of the tool.
Additional annotations about the tool.
The description of the tool.
The type of the item. Always mcp_list_tools.
The unique ID of the list.
RealtimeMcpToolCall = object { id, arguments, name, 5 more } A Realtime item representing an invocation of a tool on an MCP server.
The unique ID of the tool call.
A JSON string of the arguments passed to the tool.
The name of the tool that was run.
The label of the MCP server running the tool.
The type of the item. Always mcp_call.
The ID of an associated approval request, if any.
error: optional RealtimeMcpProtocolError { code, message, type } or RealtimeMcpToolExecutionError { message, type } or RealtimeMcphttpError { code, message, type } The error from the tool call, if any.
RealtimeMcpProtocolError = object { code, message, type }
RealtimeMcpToolExecutionError = object { message, type }
RealtimeMcphttpError = object { code, message, type }
The output from the tool call.
RealtimeMcpApprovalRequest = object { id, arguments, name, 2 more } A Realtime item requesting human approval of a tool invocation.
The unique ID of the approval request.
A JSON string of arguments for the tool.
The name of the tool to run.
The label of the MCP server making the request.
The type of the item. Always mcp_approval_request.
ConversationItemAdded = object { event_id, item, type, previous_item_id } Sent by the server when an Item is added to the default Conversation. This can happen in several cases:
- When the client sends a conversation.item.create event.
- When the input audio buffer is committed. In this case the item will be a user message containing the audio from the buffer.
- When the model is generating a Response. In this case the conversation.item.added event will be sent when the model starts generating a specific Item, and thus it will not yet have any content (and status will be in_progress).
The event will include the full content of the Item (except when model is generating a Response) except for audio data, which can be retrieved separately with a conversation.item.retrieve event if necessary.
The unique ID of the server event.
A single item within a Realtime conversation.
The event type, must be conversation.item.added.
The ID of the item that precedes this one, if any. This is used to maintain ordering when items are inserted.
ConversationItemCreateEvent = object { item, type, event_id, previous_item_id } Add a new Item to the Conversation's context, including messages, function
calls, and function call responses. This event can be used both to populate a
"history" of the conversation and to add new items mid-stream, but has the
current limitation that it cannot populate assistant audio messages.
If successful, the server will respond with a conversation.item.created
event, otherwise an error event will be sent.
A single item within a Realtime conversation.
The event type, must be conversation.item.create.
Optional client-generated ID used to identify this event.
The ID of the preceding item after which the new item will be inserted. If not set, the new item will be appended to the end of the conversation.
If set to root, the new item will be added to the beginning of the conversation.
If set to an existing ID, it allows an item to be inserted mid-conversation. If the ID cannot be found, an error will be returned and the item will not be added.
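A minimal sketch of this client event, assuming a WebSocket transport and made-up IDs (the item here is a system message appended to the end of the conversation):

```ts
// Hypothetical conversation.item.create payload.
const createItemEvent = {
  type: "conversation.item.create",
  event_id: "evt_001", // optional client-generated ID
  // previous_item_id omitted: append to the end of the conversation
  item: {
    type: "message",
    role: "system",
    content: [
      { type: "input_text", text: "The user is now asking about billing." },
    ],
  },
};

// ws is assumed to be an open WebSocket to the Realtime session:
// ws.send(JSON.stringify(createItemEvent));
```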
ConversationItemCreatedEvent = object { event_id, item, type, previous_item_id } Returned when a conversation item is created. There are several scenarios that produce this event:
- The server is generating a Response, which if successful will produce either one or two Items, which will be of type message (role assistant) or type function_call.
- The input audio buffer has been committed, either by the client or the server (in server_vad mode). The server will take the content of the input audio buffer and add it to a new user message Item.
- The client has sent a conversation.item.create event to add a new Item to the Conversation.
The unique ID of the server event.
A single item within a Realtime conversation.
The event type, must be conversation.item.created.
The ID of the preceding item in the Conversation context, allows the
client to understand the order of the conversation. Can be null if the
item has no predecessor.
ConversationItemDeleteEvent = object { item_id, type, event_id } Send this event when you want to remove any item from the conversation
history. The server will respond with a conversation.item.deleted event,
unless the item does not exist in the conversation history, in which case the
server will respond with an error.
The ID of the item to delete.
The event type, must be conversation.item.delete.
Optional client-generated ID used to identify this event.
ConversationItemDeletedEvent = object { event_id, item_id, type } Returned when an item in the conversation is deleted by the client with a
conversation.item.delete event. This event is used to synchronize the
server's understanding of the conversation history with the client's view.
The unique ID of the server event.
The ID of the item that was deleted.
The event type, must be conversation.item.deleted.
ConversationItemDone = object { event_id, item, type, previous_item_id } Returned when a conversation item is finalized.
The event will include the full content of the Item except for audio data, which can be retrieved separately with a conversation.item.retrieve event if needed.
The unique ID of the server event.
A single item within a Realtime conversation.
The event type, must be conversation.item.done.
The ID of the item that precedes this one, if any. This is used to maintain ordering when items are inserted.
ConversationItemInputAudioTranscriptionCompletedEvent = object { content_index, event_id, item_id, 4 more } This event is the output of audio transcription for user audio written to the
user audio buffer. Transcription begins when the input audio buffer is
committed by the client or server (when VAD is enabled). Transcription runs
asynchronously with Response creation, so this event may come before or after
the Response events.
Realtime API models accept audio natively, and thus input transcription is a
separate process run on a separate ASR (Automatic Speech Recognition) model.
The transcript may diverge somewhat from the model's interpretation, and
should be treated as a rough guide.
The index of the content part containing the audio.
The unique ID of the server event.
The ID of the item containing the audio that is being transcribed.
The transcribed text.
The event type, must be
conversation.item.input_audio_transcription.completed.
usage: object { input_tokens, output_tokens, total_tokens, 2 more } or object { seconds, type } Usage statistics for the transcription, this is billed according to the ASR model's pricing rather than the realtime model's pricing.
TokenUsage = object { input_tokens, output_tokens, total_tokens, 2 more } Usage statistics for models billed by token usage.
Number of input tokens billed for this request.
Number of output tokens generated.
Total number of tokens used (input + output).
The type of the usage object. Always tokens for this variant.
input_token_details: optional object { audio_tokens, text_tokens } Details about the input tokens billed for this request.
Number of audio tokens billed for this request.
Number of text tokens billed for this request.
DurationUsage = object { seconds, type } Usage statistics for models billed by audio input duration.
Duration of the input audio in seconds.
The type of the usage object. Always duration for this variant.
The log probabilities of the transcription.
The token that was used to generate the log probability.
The bytes that were used to generate the log probability.
The log probability of the token.
ConversationItemInputAudioTranscriptionDeltaEvent = object { event_id, item_id, type, 3 more } Returned when the text value of an input audio transcription content part is updated with incremental transcription results.
The unique ID of the server event.
The ID of the item containing the audio that is being transcribed.
The event type, must be conversation.item.input_audio_transcription.delta.
The index of the content part in the item's content array.
The text delta.
The log probabilities of the transcription. These can be enabled by configuring the session with "include": ["item.input_audio_transcription.logprobs"]. Each entry in the array corresponds to the log probability of which token would be selected for this chunk of transcription. This can help identify whether there were multiple valid options for a given chunk of transcription.
The token that was used to generate the log probability.
The bytes that were used to generate the log probability.
The log probability of the token.
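A minimal sketch of enabling these log probabilities, assuming the session is configured through a session.update client event (not documented in this section):

```ts
// Hypothetical session update enabling transcription logprobs so that
// conversation.item.input_audio_transcription.delta events include them.
const sessionUpdate = {
  type: "session.update",
  session: {
    include: ["item.input_audio_transcription.logprobs"],
  },
};
```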
ConversationItemInputAudioTranscriptionFailedEvent = object { content_index, error, event_id, 2 more } Returned when input audio transcription is configured, and a transcription
request for a user message failed. These events are separate from other
error events so that the client can identify the related Item.
The index of the content part containing the audio.
error: object { code, message, param, type } Details of the transcription error.
Error code, if any.
A human-readable error message.
Parameter related to the error, if any.
The type of error.
The unique ID of the server event.
The ID of the user message item.
The event type, must be
conversation.item.input_audio_transcription.failed.
ConversationItemInputAudioTranscriptionSegment = object { id, content_index, end, 6 more } Returned when an input audio transcription segment is identified for an item.
The segment identifier.
The index of the input audio content part within the item.
End time of the segment in seconds.
The unique ID of the server event.
The ID of the item containing the input audio content.
The detected speaker label for this segment.
Start time of the segment in seconds.
The text for this segment.
The event type, must be conversation.item.input_audio_transcription.segment.
ConversationItemRetrieveEvent = object { item_id, type, event_id } Send this event when you want to retrieve the server's representation of a specific item in the conversation history. This is useful, for example, to inspect user audio after noise cancellation and VAD.
The server will respond with a conversation.item.retrieved event,
unless the item does not exist in the conversation history, in which case the
server will respond with an error.
The ID of the item to retrieve.
The event type, must be conversation.item.retrieve.
Optional client-generated ID used to identify this event.
ConversationItemTruncateEvent = object { audio_end_ms, content_index, item_id, 2 more } Send this event to truncate a previous assistant message’s audio. The server
will produce audio faster than realtime, so this event is useful when the user
interrupts to truncate audio that has already been sent to the client but not
yet played. This will synchronize the server's understanding of the audio with
the client's playback.
Truncating audio will delete the server-side text transcript to ensure there is no text in the context that hasn't been heard by the user.
If successful, the server will respond with a conversation.item.truncated
event.
Inclusive duration up to which audio is truncated, in milliseconds. If the audio_end_ms is greater than the actual audio duration, the server will respond with an error.
The index of the content part to truncate. Set this to 0.
The ID of the assistant message item to truncate. Only assistant message items can be truncated.
The event type, must be conversation.item.truncate.
Optional client-generated ID used to identify this event.
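A minimal sketch of the truncate event, assuming the client tracked that roughly 1.5 seconds of the assistant's audio had played before the user interrupted (IDs and timing are made up):

```ts
// Hypothetical conversation.item.truncate payload.
const truncateEvent = {
  type: "conversation.item.truncate",
  item_id: "item_assistant_123", // the assistant message item being truncated
  content_index: 0,              // per the field description above, set to 0
  audio_end_ms: 1500,            // playback position at the interruption
};
```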
ConversationItemTruncatedEvent = object { audio_end_ms, content_index, event_id, 2 more } Returned when an earlier assistant audio message item is truncated by the
client with a conversation.item.truncate event. This event is used to
synchronize the server's understanding of the audio with the client's playback.
This action will truncate the audio and remove the server-side text transcript
to ensure there is no text in the context that hasn't been heard by the user.
The duration up to which the audio was truncated, in milliseconds.
The index of the content part that was truncated.
The unique ID of the server event.
The ID of the assistant message item that was truncated.
The event type, must be conversation.item.truncated.
ConversationItemWithReference = object { id, arguments, call_id, 7 more } The item to add to the conversation.
For an item of type (message | function_call | function_call_output)
this field allows the client to assign the unique ID of the item. It is
not required because the server will generate one if not provided.
For an item of type item_reference, this field is required and is a
reference to any item that has previously existed in the conversation.
The arguments of the function call (for function_call items).
The ID of the function call (for function_call and
function_call_output items). If passed on a function_call_output
item, the server will check that a function_call item with the same
ID exists in the conversation history.
content: optional array of object { id, audio, text, 2 more } The content of the message, applicable for message items.
- Message items of role system support only input_text content
- Message items of role user support input_text and input_audio content
- Message items of role assistant support text content.
ID of a previous conversation item to reference (for item_reference
content types in response.create events). These can reference both
client and server created items.
Base64-encoded audio bytes, used for input_audio content type.
The text content, used for input_text and text content types.
The transcript of the audio, used for input_audio content type.
type: optional "input_audio" or "input_text" or "item_reference" or "text"
The content type (input_text, input_audio, item_reference, text).
The name of the function being called (for function_call items).
Identifier for the API object being returned - always realtime.item.
The output of the function call (for function_call_output items).
role: optional "user" or "assistant" or "system"
The role of the message sender (user, assistant, system), only applicable for message items.
status: optional "completed" or "incomplete" or "in_progress"
The status of the item (completed, incomplete, in_progress). These have no effect on the conversation, but are accepted for consistency with the conversation.item.created event.
type: optional "message" or "function_call" or "function_call_output"
The type of the item (message, function_call, function_call_output, item_reference).
InputAudioBufferAppendEvent = object { audio, type, event_id } Send this event to append audio bytes to the input audio buffer. The audio
buffer is temporary storage you can write to and later commit. A "commit" will create a new
user message item in the conversation history from the buffer content and clear the buffer.
Input audio transcription (if enabled) will be generated when the buffer is committed.
If VAD is enabled the audio buffer is used to detect speech and the server will decide
when to commit. When Server VAD is disabled, you must commit the audio buffer
manually. Input audio noise reduction operates on writes to the audio buffer.
The client may choose how much audio to place in each event up to a maximum
of 15 MiB, for example streaming smaller chunks from the client may allow the
VAD to be more responsive. Unlike most other client events, the server will
not send a confirmation response to this event.
Base64-encoded audio bytes. This must be in the format specified by the
input_audio_format field in the session configuration.
The event type, must be input_audio_buffer.append.
Optional client-generated ID used to identify this event.
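A small sketch, assuming a Node.js environment where audio arrives as raw PCM chunks in the session's configured input format (the helper name is invented):

```ts
// Hypothetical helper that wraps a PCM chunk in an append event.
// The chunk size is a client choice, up to 15 MiB per event; smaller
// chunks can make server VAD more responsive.
function makeAppendEvent(pcmChunk: Uint8Array) {
  return {
    type: "input_audio_buffer.append",
    audio: Buffer.from(pcmChunk).toString("base64"), // Node.js Buffer
  };
}
```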
InputAudioBufferClearEvent = object { type, event_id } Send this event to clear the audio bytes in the buffer. The server will
respond with an input_audio_buffer.cleared event.
The event type, must be input_audio_buffer.clear.
Optional client-generated ID used to identify this event.
InputAudioBufferClearedEvent = object { event_id, type } Returned when the input audio buffer is cleared by the client with an input_audio_buffer.clear event.
The unique ID of the server event.
The event type, must be input_audio_buffer.cleared.
InputAudioBufferCommitEvent = object { type, event_id } Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event, the server will commit the audio buffer automatically.
Committing the input audio buffer will trigger input audio transcription (if enabled in session configuration), but it will not create a response from the model. The server will respond with an input_audio_buffer.committed event.
The event type, must be input_audio_buffer.commit.
Optional client-generated ID used to identify this event.
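For illustration, in the manual (non-VAD) flow the client appends audio and then sends a commit; the commit creates the user message item but does not itself start a model response:

```ts
// Hypothetical commit payload sent after the final append for a turn.
const commitEvent = {
  type: "input_audio_buffer.commit",
  event_id: "evt_commit_1", // optional client-generated ID
};
```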
InputAudioBufferCommittedEvent = object { event_id, item_id, type, previous_item_id } Returned when an input audio buffer is committed, either by the client or
automatically in server VAD mode. The item_id property is the ID of the user
message item that will be created, thus a conversation.item.created event
will also be sent to the client.
The unique ID of the server event.
The ID of the user message item that will be created.
The event type, must be input_audio_buffer.committed.
The ID of the preceding item after which the new item will be inserted.
Can be null if the item has no predecessor.
InputAudioBufferDtmfEventReceivedEvent = object { event, received_at, type } SIP Only: Returned when a DTMF event is received. A DTMF event is a message that represents a telephone keypad press (0–9, *, #, A–D). The event property is the key that the user pressed. The received_at property is the UTC Unix timestamp at which the server received the event.
The telephone keypad that was pressed by the user.
UTC Unix Timestamp when DTMF Event was received by server.
The event type, must be input_audio_buffer.dtmf_event_received.
InputAudioBufferSpeechStartedEvent = object { audio_start_ms, event_id, item_id, type } Sent by the server when in server_vad mode to indicate that speech has been
detected in the audio buffer. This can happen any time audio is added to the
buffer (unless speech is already detected). The client may want to use this
event to interrupt audio playback or provide visual feedback to the user.
The client should expect to receive an input_audio_buffer.speech_stopped event
when speech stops. The item_id property is the ID of the user message item
that will be created when speech stops and will also be included in the
input_audio_buffer.speech_stopped event (unless the client manually commits
the audio buffer during VAD activation).
Milliseconds from the start of all audio written to the buffer during the
session when speech was first detected. This will correspond to the
beginning of audio sent to the model, and thus includes the
prefix_padding_ms configured in the Session.
The unique ID of the server event.
The ID of the user message item that will be created when speech stops.
The event type, must be input_audio_buffer.speech_started.
InputAudioBufferSpeechStoppedEvent = object { audio_end_ms, event_id, item_id, type } Returned in server_vad mode when the server detects the end of speech in
the audio buffer. The server will also send a conversation.item.created
event with the user message item that is created from the audio buffer.
Milliseconds since the session started when speech stopped. This will
correspond to the end of audio sent to the model, and thus includes the
min_silence_duration_ms configured in the Session.
The unique ID of the server event.
The ID of the user message item that will be created.
The event type, must be input_audio_buffer.speech_stopped.
InputAudioBufferTimeoutTriggered = object { audio_end_ms, audio_start_ms, event_id, 2 more } Returned when the Server VAD timeout is triggered for the input audio buffer. This is configured
with idle_timeout_ms in the turn_detection settings of the session, and it indicates that
there hasn't been any speech detected for the configured duration.
The audio_start_ms and audio_end_ms fields indicate the segment of audio after the last
model response up to the triggering time, as an offset from the beginning of audio written
to the input audio buffer. This means it demarcates the segment of audio that was silent and
the difference between the start and end values will roughly match the configured timeout.
The empty audio will be committed to the conversation as an input_audio item (there will be a
input_audio_buffer.committed event) and a model response will be generated. There may be speech
that didn't trigger VAD but is still detected by the model, so the model may respond with
something relevant to the conversation or a prompt to continue speaking.
Millisecond offset of audio written to the input audio buffer at the time the timeout was triggered.
Millisecond offset of audio written to the input audio buffer that was after the playback time of the last model response.
The unique ID of the server event.
The ID of the item associated with this segment.
The event type, must be input_audio_buffer.timeout_triggered.
LogProbProperties = object { token, bytes, logprob } A log probability object.
The token that was used to generate the log probability.
The bytes that were used to generate the log probability.
The log probability of the token.
McpListToolsCompleted = object { event_id, item_id, type } Returned when listing MCP tools has completed for an item.
The unique ID of the server event.
The ID of the MCP list tools item.
The event type, must be mcp_list_tools.completed.
McpListToolsFailed = object { event_id, item_id, type } Returned when listing MCP tools has failed for an item.
The unique ID of the server event.
The ID of the MCP list tools item.
The event type, must be mcp_list_tools.failed.
McpListToolsInProgress = object { event_id, item_id, type } Returned when listing MCP tools is in progress for an item.
The unique ID of the server event.
The ID of the MCP list tools item.
The event type, must be mcp_list_tools.in_progress.
NoiseReductionType = "near_field" or "far_field"
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
OutputAudioBufferClearEvent = object { type, event_id } WebRTC/SIP Only: Emit to cut off the current audio response. This will trigger the server to
stop generating audio and emit a output_audio_buffer.cleared event. This
event should be preceded by a response.cancel client event to stop the
generation of the current response.
The event type, must be output_audio_buffer.clear.
The unique ID of the client event used for error handling.
RateLimitsUpdatedEvent = object { event_id, rate_limits, type } Emitted at the beginning of a Response to indicate the updated rate limits.
When a Response is created some tokens will be "reserved" for the output
tokens, the rate limits shown here reflect that reservation, which is then
adjusted accordingly once the Response is completed.
The unique ID of the server event.
rate_limits: array of object { limit, name, remaining, reset_seconds } List of rate limit information.
The maximum allowed value for the rate limit.
name: optional "requests" or "tokens"
The name of the rate limit (requests, tokens).
The remaining value before the limit is reached.
Seconds until the rate limit resets.
The event type, must be rate_limits.updated.
RealtimeAudioConfig = object { input, output } Configuration for input and output audio.
RealtimeAudioConfigInput = object { format, noise_reduction, transcription, turn_detection }
The format of the input audio.
noise_reduction: optional object { type } Configuration for input audio noise reduction. This can be set to null to turn off.
Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model.
Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription, these offer additional guidance to the transcription service.
Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.
Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
RealtimeAudioConfigOutput = object { format, speed, voice }
The format of the output audio.
The speed of the model's spoken response as a multiple of the original speed. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.
This parameter is a post-processing adjustment to the audio after it is generated, it's also possible to prompt the model to speak faster or slower.
voice: optional string or "alloy" or "ash" or "ballad" or 7 more or object { id } The voice the model uses to respond. Supported built-in voices are
alloy, ash, ballad, coral, echo, sage, shimmer, verse,
marin, and cedar. You may also provide a custom voice object with
an id, for example { "id": "voice_1234" }. Voice cannot be changed
during the session once the model has responded with audio at least once.
We recommend marin and cedar for best quality.
VoiceIDsShared = string or "alloy" or "ash" or "ballad" or 7 more
UnionMember1 = "alloy" or "ash" or "ballad" or 7 more
ID = object { id } Custom voice reference.
Custom voice reference.
The custom voice ID, e.g. voice_1234.
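Two illustrative output-audio configurations, one using a built-in voice and one referencing a custom voice by ID (the speed value is just an example within the documented 0.25–1.5 range):

```ts
// Built-in voice with a slightly faster post-processed speaking rate.
const builtInVoiceOutput = {
  voice: "marin",
  speed: 1.1,
};

// Custom voice referenced by ID, as in the example from the docs.
const customVoiceOutput = {
  voice: { id: "voice_1234" },
};
```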
RealtimeAudioFormats = object { rate, type } or object { type } or object { type } The PCM audio format. Only a 24kHz sample rate is supported.
PCMAudioFormat = object { rate, type } The PCM audio format. Only a 24kHz sample rate is supported.
The sample rate of the audio. Always 24000.
The audio format. Always audio/pcm.
PCMUAudioFormat = object { type } The G.711 μ-law format.
The audio format. Always audio/pcmu.
PCMAAudioFormat = object { type } The G.711 A-law format.
The audio format. Always audio/pcma.
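For reference, the three format variants described above would appear as follows in audio configuration (values taken directly from the field descriptions):

```ts
// PCM: 16-bit, mono, always 24 kHz.
const pcmFormat = { type: "audio/pcm", rate: 24000 };

// G.711 μ-law and A-law variants carry only a type field.
const pcmuFormat = { type: "audio/pcmu" };
const pcmaFormat = { type: "audio/pcma" };
```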
RealtimeAudioInputTurnDetection = object { type, create_response, idle_timeout_ms, 4 more } or object { type, create_response, eagerness, interrupt_response } Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.
Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
ServerVad = object { type, create_response, idle_timeout_ms, 4 more } Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.
Type of turn detection, server_vad to turn on simple Server VAD.
Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.
If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.
Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.
The timeout value will be applied after the last model response's audio has finished playing,
i.e. it's set to the response.done time plus audio playback duration.
An input_audio_buffer.timeout_triggered event (plus events
associated with the Response) will be emitted when the timeout is reached.
Idle timeout is currently only supported for server_vad mode.
Whether or not to automatically interrupt (cancel) any ongoing response with output to the default
conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.
If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.
Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in
milliseconds). Defaults to 300ms.
Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults
to 500ms. With shorter values the model will respond more quickly,
but may jump in on short pauses from the user.
Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A
higher threshold will require louder audio to activate the model, and
thus might perform better in noisy environments.
SemanticVad = object { type, create_response, eagerness, interrupt_response } Server-side semantic turn detection which uses a model to determine when the user has finished speaking.
Type of turn detection, semantic_vad to turn on Semantic VAD.
Whether or not to automatically generate a response when a VAD stop event occurs.
eagerness: optional "low" or "medium" or "high" or "auto"
Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.
Whether or not to automatically interrupt any ongoing response with output to the default
conversation (i.e. conversation of auto) when a VAD start event occurs.
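Sketches of the two turn_detection variants. The numeric values are illustrative (matching the documented defaults where stated), and the field names for the padding, silence, and threshold settings are assumed to be the standard session fields rather than spelled out above:

```ts
// Server VAD: simple energy-based start/stop detection.
const serverVad = {
  type: "server_vad",
  threshold: 0.5,            // activation threshold (0.0–1.0)
  prefix_padding_ms: 300,    // audio included before detected speech
  silence_duration_ms: 500,  // silence needed to mark speech stop
  idle_timeout_ms: 6000,     // optional: prompt a silent caller after 6s
  create_response: true,
  interrupt_response: true,
};

// Semantic VAD: model-based estimate of whether the turn has ended.
const semanticVad = {
  type: "semantic_vad",
  eagerness: "low",          // wait longer before responding
  create_response: true,
  interrupt_response: true,
};
```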
RealtimeClientEvent = ConversationItemCreateEvent { item, type, event_id, previous_item_id } or ConversationItemDeleteEvent { item_id, type, event_id } or ConversationItemRetrieveEvent { item_id, type, event_id } or 8 more
A realtime client event.
ConversationItemCreateEvent = object { item, type, event_id, previous_item_id } Add a new Item to the Conversation's context, including messages, function
calls, and function call responses. This event can be used both to populate a
"history" of the conversation and to add new items mid-stream, but has the
current limitation that it cannot populate assistant audio messages.
If successful, the server will respond with a conversation.item.created
event, otherwise an error event will be sent.
A single item within a Realtime conversation.
The event type, must be conversation.item.create.
Optional client-generated ID used to identify this event.
The ID of the preceding item after which the new item will be inserted. If not set, the new item will be appended to the end of the conversation.
If set to root, the new item will be added to the beginning of the conversation.
If set to an existing ID, it allows an item to be inserted mid-conversation. If the ID cannot be found, an error will be returned and the item will not be added.
ConversationItemDeleteEvent = object { item_id, type, event_id } Send this event when you want to remove any item from the conversation
history. The server will respond with a conversation.item.deleted event,
unless the item does not exist in the conversation history, in which case the
server will respond with an error.
The ID of the item to delete.
The event type, must be conversation.item.delete.
Optional client-generated ID used to identify this event.
ConversationItemRetrieveEvent = object { item_id, type, event_id } Send this event when you want to retrieve the server's representation of a specific item in the conversation history. This is useful, for example, to inspect user audio after noise cancellation and VAD.
The server will respond with a conversation.item.retrieved event,
unless the item does not exist in the conversation history, in which case the
server will respond with an error.
The ID of the item to retrieve.
The event type, must be conversation.item.retrieve.
Optional client-generated ID used to identify this event.
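A minimal sketch of the retrieve event (assuming a connected Realtime WebSocket ws):

```ts
// Sketch: ask the server for its stored representation of one item, e.g. to
// inspect user audio after noise cancellation and VAD.
function retrieveItem(ws: WebSocket, itemId: string): void {
  ws.send(JSON.stringify({
    type: "conversation.item.retrieve",
    item_id: itemId,
    event_id: "evt_retrieve_001", // optional client-generated ID (illustrative)
  }));
  // The server replies with conversation.item.retrieved, or an error if the
  // item does not exist in the conversation history.
}
```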
ConversationItemTruncateEvent = object { audio_end_ms, content_index, item_id, 2 more } Send this event to truncate a previous assistant message’s audio. The server
will produce audio faster than realtime, so this event is useful when the user
interrupts to truncate audio that has already been sent to the client but not
yet played. This will synchronize the server's understanding of the audio with
the client's playback.
Truncating audio will delete the server-side text transcript to ensure there is no text in the context that hasn't been heard by the user.
If successful, the server will respond with a conversation.item.truncated
event.
Inclusive duration up to which audio is truncated, in milliseconds. If the audio_end_ms is greater than the actual audio duration, the server will respond with an error.
The index of the content part to truncate. Set this to 0.
The ID of the assistant message item to truncate. Only assistant message items can be truncated.
The event type, must be conversation.item.truncate.
Optional client-generated ID used to identify this event.
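For example, a client that tracks playback position might truncate on interruption like this (a sketch; playedMs is whatever the client's audio player reports):

```ts
// Sketch: truncate an assistant audio message at the point the user has heard.
function truncatePlayedAudio(ws: WebSocket, assistantItemId: string, playedMs: number): void {
  ws.send(JSON.stringify({
    type: "conversation.item.truncate",
    item_id: assistantItemId, // only assistant message items can be truncated
    content_index: 0,         // per the field description above, set this to 0
    audio_end_ms: playedMs,   // must not exceed the actual audio duration
  }));
  // On success the server emits conversation.item.truncated.
}
```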
InputAudioBufferAppendEvent = object { audio, type, event_id } Send this event to append audio bytes to the input audio buffer. The audio
buffer is temporary storage you can write to and later commit. A "commit" will create a new
user message item in the conversation history from the buffer content and clear the buffer.
Input audio transcription (if enabled) will be generated when the buffer is committed.
If VAD is enabled the audio buffer is used to detect speech and the server will decide
when to commit. When Server VAD is disabled, you must commit the audio buffer
manually. Input audio noise reduction operates on writes to the audio buffer.
The client may choose how much audio to place in each event, up to a maximum of 15 MiB; for example, streaming smaller chunks from the client may allow the VAD to be more responsive. Unlike most other client events, the server will not send a confirmation response to this event.
Base64-encoded audio bytes. This must be in the format specified by the
input_audio_format field in the session configuration.
The event type, must be input_audio_buffer.append.
Optional client-generated ID used to identify this event.
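A rough sketch of appending one audio chunk (assumes the bytes are already in the session's configured input format and ws is a connected Realtime WebSocket):

```ts
// Sketch: base64-encode a chunk of input audio and append it to the buffer.
function appendAudioChunk(ws: WebSocket, chunk: Uint8Array): void {
  let binary = "";
  for (const byte of chunk) binary += String.fromCharCode(byte);
  ws.send(JSON.stringify({
    type: "input_audio_buffer.append",
    audio: btoa(binary), // base64-encoded audio bytes, up to 15 MiB per event
  }));
  // The server does not send a confirmation response to this event.
}
```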
InputAudioBufferClearEvent = object { type, event_id } Send this event to clear the audio bytes in the buffer. The server will
respond with an input_audio_buffer.cleared event.
The event type, must be input_audio_buffer.clear.
Optional client-generated ID used to identify this event.
OutputAudioBufferClearEvent = object { type, event_id } WebRTC/SIP Only: Emit to cut off the current audio response. This will trigger the server to stop generating audio and emit an output_audio_buffer.cleared event. This event should be preceded by a response.cancel client event to stop the generation of the current response.
Learn more.
The event type, must be output_audio_buffer.clear.
The unique ID of the client event used for error handling.
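A sketch of the interruption sequence on a WebRTC session (dc is assumed to be the Realtime data channel):

```ts
// Sketch: stop the current audio response and drop audio buffered on the server.
function stopAudioResponse(dc: RTCDataChannel): void {
  // First stop generation of the current response...
  dc.send(JSON.stringify({ type: "response.cancel" }));
  // ...then cut off audio that has been generated but not yet played.
  dc.send(JSON.stringify({ type: "output_audio_buffer.clear" }));
  // The server emits output_audio_buffer.cleared when done.
}
```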
InputAudioBufferCommitEvent = object { type, event_id } Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event, the server will commit the audio buffer automatically.
Committing the input audio buffer will trigger input audio transcription (if enabled in session configuration), but it will not create a response from the model. The server will respond with an input_audio_buffer.committed event.
The event type, must be input_audio_buffer.commit.
Optional client-generated ID used to identify this event.
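A minimal sketch of a manual commit, for sessions where Server VAD is disabled (ws assumed connected):

```ts
// Sketch: turn the buffered input audio into a user message item.
function commitInputAudio(ws: WebSocket): void {
  ws.send(JSON.stringify({
    type: "input_audio_buffer.commit",
    event_id: "evt_commit_001", // optional client-generated ID (illustrative)
  }));
  // The server replies with input_audio_buffer.committed. This does not start
  // model inference by itself; send response.create to request a response.
}
```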
ResponseCancelEvent = object { type, event_id, response_id } Send this event to cancel an in-progress response. The server will respond with a response.done event with a status of response.status=cancelled. If there is no response to cancel, the server will respond with an error. It's safe to call response.cancel even if no response is in progress; an error will be returned and the session will remain unaffected.
The event type, must be response.cancel.
Optional client-generated ID used to identify this event.
A specific response ID to cancel - if not provided, will cancel an in-progress response in the default conversation.
ResponseCreateEvent = object { type, event_id, response } This event instructs the server to create a Response, which means triggering
model inference. When in Server VAD mode, the server will create Responses
automatically.
A Response will include at least one Item, and may have two, in which case
the second will be a function call. These Items will be appended to the
conversation history by default.
The server will respond with a response.created event, events for Items
and content created, and finally a response.done event to indicate the
Response is complete.
The response.create event includes inference configuration like
instructions and tools. If these are set, they will override the Session's
configuration for this Response only.
Responses can be created out-of-band of the default Conversation, meaning that they can
have arbitrary input, and it's possible to disable writing the output to the Conversation.
Only one Response can write to the default Conversation at a time, but otherwise multiple
Responses can be created in parallel. The metadata field is a good way to disambiguate
multiple simultaneous Responses.
Clients can set conversation to none to create a Response that does not write to the default
Conversation. Arbitrary input can be provided with the input field, which is an array accepting
raw Items and references to existing Items.
The event type, must be response.create.
Optional client-generated ID used to identify this event.
Create a new Realtime response with these parameters
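As an illustration, an out-of-band Response that reads from explicit input and never touches the default conversation might look like the sketch below. The item_reference shape for pointing at an existing item, the item ID, and the metadata values are assumptions of this sketch.

```ts
// Sketch: request a text-only, out-of-band summary of an existing item.
function createSummaryResponse(ws: WebSocket): void {
  ws.send(JSON.stringify({
    type: "response.create",
    response: {
      conversation: "none",             // do not write output to the default conversation
      metadata: { purpose: "summary" }, // handy for disambiguating parallel responses
      output_modalities: ["text"],
      instructions: "Summarize the referenced message in one sentence.",
      input: [
        { type: "item_reference", id: "item_abc123" }, // reference to an existing item (assumed shape)
        {
          type: "message",
          role: "user",
          content: [{ type: "input_text", text: "Please summarize." }],
        },
      ],
    },
  }));
}
```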
SessionUpdateEvent = object { session, type, event_id } Send this event to update the session’s configuration.
The client may send this event at any time to update any field
except for voice and model. voice can be updated only if there have been no other audio outputs yet.
When the server receives a session.update, it will respond
with a session.updated event showing the full, effective configuration.
Only the fields that are present in the session.update are updated. To clear a field like
instructions, pass an empty string. To clear a field like tools, pass an empty array.
To clear a field like turn_detection, pass null.
session: RealtimeSessionCreateRequest { type, audio, include, 9 more } or RealtimeTranscriptionSessionCreateRequest { type, audio, include } Update the Realtime session. Choose either a realtime
session or a transcription session.
RealtimeSessionCreateRequest = object { type, audio, include, 9 more } Realtime session object configuration.
The type of session to create. Always realtime for the Realtime API.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format, (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.
Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.
max_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
model: optional string or "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
UnionMember1 = "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
output_modalities: optional array of "text" or "audio"The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
Reference to a prompt template and its variables. Learn more.
How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.
Tools available to the model.
The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.
Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.
Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
RealtimeTranscriptionSessionCreateRequest = object { type, audio, include } Realtime transcription session object configuration.
The type of session to create. Always transcription for transcription sessions.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The event type, must be session.update.
Optional client-generated ID used to identify this event. This is an arbitrary string that a client may assign. It will be passed back if there is an error with the event, but the corresponding session.updated event will not include it.
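For illustration, a partial update that clears instructions and disables turn detection (a sketch; the audio.input.turn_detection nesting is an assumption, and unspecified fields are left untouched by the server):

```ts
// Sketch: update only selected session fields; null/empty values clear them.
function relaxSession(ws: WebSocket): void {
  ws.send(JSON.stringify({
    type: "session.update",
    event_id: "evt_session_update_001", // optional client-generated ID (illustrative)
    session: {
      type: "realtime",
      instructions: "",                           // clear previously set instructions
      audio: { input: { turn_detection: null } }, // disable turn detection entirely
    },
  }));
  // The server replies with session.updated showing the full effective config.
}
```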
RealtimeConversationItemAssistantMessage = object { content, role, type, 3 more } An assistant message item in a Realtime conversation.
content: array of object { audio, text, transcript, type } The content of the message.
Base64-encoded audio bytes, these will be parsed as the format specified in the session output audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.
The text content.
The transcript of the audio content, this will always be present if the output type is audio.
type: optional "output_text" or "output_audio"The content type, output_text or output_audio depending on the session output_modalities configuration.
The role of the message sender. Always assistant.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemFunctionCall = object { arguments, name, type, 4 more } A function call item in a Realtime conversation.
The arguments of the function call. This is a JSON-encoded string representing the arguments passed to the function, for example {"arg1": "value1", "arg2": 42}.
The name of the function being called.
The type of the item. Always function_call.
The unique ID of the item. This may be provided by the client or generated by the server.
The ID of the function call.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemFunctionCallOutput = object { call_id, output, type, 3 more } A function call output item in a Realtime conversation.
The ID of the function call this output is for.
The output of the function call, this is free text and can contain any information or simply be empty.
The type of the item. Always function_call_output.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
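Putting the function-call items together, a typical client handler sends the tool result back as a function_call_output item and then asks for a new response. This is a sketch; the shape of the result value is up to the application.

```ts
// Sketch: answer a function_call item with a function_call_output item.
function sendFunctionResult(ws: WebSocket, callId: string, result: unknown): void {
  ws.send(JSON.stringify({
    type: "conversation.item.create",
    item: {
      type: "function_call_output",
      call_id: callId,                // must match the function_call item's call_id
      output: JSON.stringify(result), // free text; JSON is a common convention
    },
  }));
  // Ask the model to continue now that the tool result is in the conversation.
  ws.send(JSON.stringify({ type: "response.create" }));
}
```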
RealtimeConversationItemSystemMessage = object { content, role, type, 3 more } A system message in a Realtime conversation can be used to provide additional context or instructions to the model. This is similar but distinct from the instruction prompt provided at the start of a conversation, as system messages can be added at any point in the conversation. For major changes to the conversation's behavior, use instructions, but for smaller updates (e.g. "the user is now asking about a different topic"), use system messages.
content: array of object { text, type } The content of the message.
The text content.
The content type. Always input_text for system messages.
The role of the message sender. Always system.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemUserMessage = object { content, role, type, 3 more } A user message item in a Realtime conversation.
content: array of object { audio, detail, image_url, 3 more } The content of the message.
Base64-encoded audio bytes (for input_audio), these will be parsed as the format specified in the session input audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.
detail: optional "auto" or "low" or "high"The detail level of the image (for input_image). auto will default to high.
Base64-encoded image bytes (for input_image) as a data URI. For example data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA.... Supported formats are PNG and JPEG.
The text content (for input_text).
Transcript of the audio (for input_audio). This is not sent to the model, but will be attached to the message item for reference.
type: optional "input_text" or "input_audio" or "input_image"The content type (input_text, input_audio, or input_image).
The role of the message sender. Always user.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
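As an example of mixed content types, a user message carrying both text and an image could be created as below (a sketch; the PNG data is a placeholder supplied by the caller):

```ts
// Sketch: create a user message with input_text and input_image parts.
function askAboutImage(ws: WebSocket, pngBase64: string): void {
  ws.send(JSON.stringify({
    type: "conversation.item.create",
    item: {
      type: "message",
      role: "user",
      content: [
        { type: "input_text", text: "What is shown in this picture?" },
        {
          type: "input_image",
          image_url: `data:image/png;base64,${pngBase64}`, // PNG or JPEG data URI
          detail: "high",
        },
      ],
    },
  }));
}
```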
RealtimeError = object { message, type, code, 2 more } Details of the error.
A human-readable error message.
The type of error (e.g., "invalid_request_error", "server_error").
Error code, if any.
The event_id of the client event that caused the error, if applicable.
Parameter related to the error, if any.
RealtimeErrorEvent = object { error, event_id, type } Returned when an error occurs, which could be a client problem or a server problem. Most errors are recoverable and the session will stay open; we recommend that implementers monitor and log error messages by default.
Details of the error.
The unique ID of the server event.
The event type, must be error.
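A simple listener that follows the logging recommendation above (a sketch; assumes events arrive as JSON text frames on ws):

```ts
// Sketch: log every error event; most errors are recoverable and the session stays open.
function attachErrorLogger(ws: WebSocket): void {
  ws.addEventListener("message", (msg: MessageEvent) => {
    const event = JSON.parse(msg.data as string);
    if (event.type === "error") {
      const { type, code, message, param, event_id } = event.error;
      console.error("Realtime error:", { type, code, message, param, event_id });
    }
  });
}
```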
RealtimeFunctionTool = object { description, name, parameters, type }
The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).
The name of the function.
Parameters of the function in JSON Schema.
The type of the tool, i.e. function.
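For example, a function tool entry for the session or response tools array might look like this (a sketch; the weather function and its schema are made up for illustration):

```ts
// Sketch: a RealtimeFunctionTool definition with a JSON Schema parameter spec.
const getWeatherTool = {
  type: "function",
  name: "get_weather",
  description:
    "Look up the current weather for a city. Call this when the user asks about " +
    "the weather, and tell the user you are checking the forecast before calling.",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
} as const;
```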
RealtimeMcpApprovalRequest = object { id, arguments, name, 2 more } A Realtime item requesting human approval of a tool invocation.
The unique ID of the approval request.
A JSON string of arguments for the tool.
The name of the tool to run.
The label of the MCP server making the request.
The type of the item. Always mcp_approval_request.
RealtimeMcpApprovalResponse = object { id, approval_request_id, approve, 2 more } A Realtime item responding to an MCP approval request.
The unique ID of the approval response.
The ID of the approval request being answered.
Whether the request was approved.
The type of the item. Always mcp_approval_response.
Optional reason for the decision.
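One way to answer an approval request is to create an mcp_approval_response item referencing the request's ID; sending it through conversation.item.create is an assumption of this sketch.

```ts
// Sketch: approve (or deny) a pending MCP tool invocation.
function answerMcpApproval(ws: WebSocket, approvalRequestId: string, approve: boolean): void {
  ws.send(JSON.stringify({
    type: "conversation.item.create",
    item: {
      type: "mcp_approval_response",
      approval_request_id: approvalRequestId,
      approve,
      reason: approve ? undefined : "Denied by operator", // optional reason (illustrative)
    },
  }));
}
```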
RealtimeMcpListTools = object { server_label, tools, type, id } A Realtime item listing tools available on an MCP server.
The label of the MCP server.
tools: array of object { input_schema, name, annotations, description } The tools available on the server.
The JSON schema describing the tool's input.
The name of the tool.
Additional annotations about the tool.
The description of the tool.
The type of the item. Always mcp_list_tools.
The unique ID of the list.
RealtimeMcpProtocolError = object { code, message, type }
RealtimeMcpToolCall = object { id, arguments, name, 5 more } A Realtime item representing an invocation of a tool on an MCP server.
The unique ID of the tool call.
A JSON string of the arguments passed to the tool.
The name of the tool that was run.
The label of the MCP server running the tool.
The type of the item. Always mcp_call.
The ID of an associated approval request, if any.
error: optional RealtimeMcpProtocolError { code, message, type } or RealtimeMcpToolExecutionError { message, type } or RealtimeMcphttpError { code, message, type } The error from the tool call, if any.
RealtimeMcpProtocolError = object { code, message, type }
RealtimeMcpToolExecutionError = object { message, type }
RealtimeMcphttpError = object { code, message, type }
The output from the tool call.
RealtimeMcpToolExecutionError = object { message, type }
RealtimeMcphttpError = object { code, message, type }
RealtimeResponse = object { id, audio, conversation_id, 8 more } The response resource.
The unique ID of the response, will look like resp_1234.
audio: optional object { output } Configuration for audio output.
output: optional object { format, voice }
The format of the output audio.
voice: optional string or "alloy" or "ash" or "ballad" or 7 moreThe voice the model uses to respond. Voice cannot be changed during the
session once the model has responded with audio at least once. Current
voice options are alloy, ash, ballad, coral, echo, sage,
shimmer, verse, marin, and cedar. We recommend marin and cedar for
best quality.
UnionMember1 = "alloy" or "ash" or "ballad" or 7 moreThe voice the model uses to respond. Voice cannot be changed during the
session once the model has responded with audio at least once. Current
voice options are alloy, ash, ballad, coral, echo, sage,
shimmer, verse, marin, and cedar. We recommend marin and cedar for
best quality.
Which conversation the response is added to, determined by the conversation
field in the response.create event. If auto, the response will be added to
the default conversation and the value of conversation_id will be an id like
conv_1234. If none, the response will not be added to any conversation and
the value of conversation_id will be null. If responses are being triggered
automatically by VAD, the response will be added to the default conversation.
max_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls, that was used in this response.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
The object type, must be realtime.response.
The list of output items generated by the response.
RealtimeConversationItemSystemMessage = object { content, role, type, 3 more } A system message in a Realtime conversation can be used to provide additional context or instructions to the model. This is similar but distinct from the instruction prompt provided at the start of a conversation, as system messages can be added at any point in the conversation. For major changes to the conversation's behavior, use instructions, but for smaller updates (e.g. "the user is now asking about a different topic"), use system messages.
content: array of object { text, type } The content of the message.
The text content.
The content type. Always input_text for system messages.
The role of the message sender. Always system.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemUserMessage = object { content, role, type, 3 more } A user message item in a Realtime conversation.
content: array of object { audio, detail, image_url, 3 more } The content of the message.
Base64-encoded audio bytes (for input_audio), these will be parsed as the format specified in the session input audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.
detail: optional "auto" or "low" or "high"The detail level of the image (for input_image). auto will default to high.
Base64-encoded image bytes (for input_image) as a data URI. For example data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA.... Supported formats are PNG and JPEG.
The text content (for input_text).
Transcript of the audio (for input_audio). This is not sent to the model, but will be attached to the message item for reference.
type: optional "input_text" or "input_audio" or "input_image"The content type (input_text, input_audio, or input_image).
The role of the message sender. Always user.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemAssistantMessage = object { content, role, type, 3 more } An assistant message item in a Realtime conversation.
content: array of object { audio, text, transcript, type } The content of the message.
Base64-encoded audio bytes, these will be parsed as the format specified in the session output audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.
The text content.
The transcript of the audio content, this will always be present if the output type is audio.
type: optional "output_text" or "output_audio"The content type, output_text or output_audio depending on the session output_modalities configuration.
The role of the message sender. Always assistant.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemFunctionCall = object { arguments, name, type, 4 more } A function call item in a Realtime conversation.
The arguments of the function call. This is a JSON-encoded string representing the arguments passed to the function, for example {"arg1": "value1", "arg2": 42}.
The name of the function being called.
The type of the item. Always function_call.
The unique ID of the item. This may be provided by the client or generated by the server.
The ID of the function call.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemFunctionCallOutput = object { call_id, output, type, 3 more } A function call output item in a Realtime conversation.
The ID of the function call this output is for.
The output of the function call, this is free text and can contain any information or simply be empty.
The type of the item. Always function_call_output.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeMcpApprovalResponse = object { id, approval_request_id, approve, 2 more } A Realtime item responding to an MCP approval request.
The unique ID of the approval response.
The ID of the approval request being answered.
Whether the request was approved.
The type of the item. Always mcp_approval_response.
Optional reason for the decision.
RealtimeMcpListTools = object { server_label, tools, type, id } A Realtime item listing tools available on an MCP server.
The label of the MCP server.
tools: array of object { input_schema, name, annotations, description } The tools available on the server.
The JSON schema describing the tool's input.
The name of the tool.
Additional annotations about the tool.
The description of the tool.
The type of the item. Always mcp_list_tools.
The unique ID of the list.
RealtimeMcpToolCall = object { id, arguments, name, 5 more } A Realtime item representing an invocation of a tool on an MCP server.
The unique ID of the tool call.
A JSON string of the arguments passed to the tool.
The name of the tool that was run.
The label of the MCP server running the tool.
The type of the item. Always mcp_call.
The ID of an associated approval request, if any.
error: optional RealtimeMcpProtocolError { code, message, type } or RealtimeMcpToolExecutionError { message, type } or RealtimeMcphttpError { code, message, type } The error from the tool call, if any.
RealtimeMcpProtocolError = object { code, message, type }
RealtimeMcpToolExecutionError = object { message, type }
RealtimeMcphttpError = object { code, message, type }
The output from the tool call.
RealtimeMcpApprovalRequest = object { id, arguments, name, 2 more } A Realtime item requesting human approval of a tool invocation.
The unique ID of the approval request.
A JSON string of arguments for the tool.
The name of the tool to run.
The label of the MCP server making the request.
The type of the item. Always mcp_approval_request.
output_modalities: optional array of "text" or "audio"The set of modalities the model used to respond; currently the only possible values are ["audio"] and ["text"]. Audio output always includes a text transcript. Setting the output modality to text will disable audio output from the model.
status: optional "completed" or "cancelled" or "failed" or 2 moreThe final status of the response (completed, cancelled, failed, or
incomplete, in_progress).
The final status of the response (completed, cancelled, failed, or
incomplete, in_progress).
Additional details about the status.
Usage statistics for the Response, this will correspond to billing. A Realtime API session will maintain a conversation context and append new Items to the Conversation, thus output from previous turns (text and audio tokens) will become the input for later turns.
RealtimeResponseCreateAudioOutput = object { output } Configuration for audio input and output.
output: optional object { format, voice }
The format of the output audio.
voice: optional string or "alloy" or "ash" or "ballad" or 7 more or object { id } The voice the model uses to respond. Supported built-in voices are
alloy, ash, ballad, coral, echo, sage, shimmer, verse,
marin, and cedar. You may also provide a custom voice object with
an id, for example { "id": "voice_1234" }. Voice cannot be changed
during the session once the model has responded with audio at least once.
We recommend marin and cedar for best quality.
VoiceIDsShared = string or "alloy" or "ash" or "ballad" or 7 more
UnionMember1 = "alloy" or "ash" or "ballad" or 7 more
ID = object { id } Custom voice reference.
The custom voice ID, e.g. voice_1234.
RealtimeResponseCreateParams = object { audio, conversation, input, 7 more } Create a new Realtime response with these parameters
Configuration for audio input and output.
conversation: optional string or "auto" or "none"Controls which conversation the response is added to. Currently supports auto and none, with auto as the default value. The auto value means that the contents of the response will be added to the default conversation. Set this to none to create an out-of-band response which will not add items to the default conversation.
UnionMember1 = "auto" or "none"Controls which conversation the response is added to. Currently supports
auto and none, with auto as the default value. The auto value
means that the contents of the response will be added to the default
conversation. Set this to none to create an out-of-band response which
will not add items to default conversation.
Controls which conversation the response is added to. Currently supports
auto and none, with auto as the default value. The auto value
means that the contents of the response will be added to the default
conversation. Set this to none to create an out-of-band response which
will not add items to default conversation.
Input items to include in the prompt for the model. Using this field
creates a new context for this Response instead of using the default
conversation. An empty array [] will clear the context for this Response.
Note that this can include references to items that previously appeared in the session
using their id.
RealtimeConversationItemSystemMessage = object { content, role, type, 3 more } A system message in a Realtime conversation can be used to provide additional context or instructions to the model. This is similar but distinct from the instruction prompt provided at the start of a conversation, as system messages can be added at any point in the conversation. For major changes to the conversation's behavior, use instructions, but for smaller updates (e.g. "the user is now asking about a different topic"), use system messages.
content: array of object { text, type } The content of the message.
The text content.
The content type. Always input_text for system messages.
The role of the message sender. Always system.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemUserMessage = object { content, role, type, 3 more } A user message item in a Realtime conversation.
content: array of object { audio, detail, image_url, 3 more } The content of the message.
Base64-encoded audio bytes (for input_audio), these will be parsed as the format specified in the session input audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.
detail: optional "auto" or "low" or "high"The detail level of the image (for input_image). auto will default to high.
Base64-encoded image bytes (for input_image) as a data URI. For example data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA.... Supported formats are PNG and JPEG.
The text content (for input_text).
Transcript of the audio (for input_audio). This is not sent to the model, but will be attached to the message item for reference.
type: optional "input_text" or "input_audio" or "input_image"The content type (input_text, input_audio, or input_image).
The role of the message sender. Always user.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemAssistantMessage = object { content, role, type, 3 more } An assistant message item in a Realtime conversation.
content: array of object { audio, text, transcript, type } The content of the message.
Base64-encoded audio bytes, these will be parsed as the format specified in the session output audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.
The text content.
The transcript of the audio content, this will always be present if the output type is audio.
type: optional "output_text" or "output_audio"The content type, output_text or output_audio depending on the session output_modalities configuration.
The role of the message sender. Always assistant.
The type of the item. Always message.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemFunctionCall = object { arguments, name, type, 4 more } A function call item in a Realtime conversation.
The arguments of the function call. This is a JSON-encoded string representing the arguments passed to the function, for example {"arg1": "value1", "arg2": 42}.
The name of the function being called.
The type of the item. Always function_call.
The unique ID of the item. This may be provided by the client or generated by the server.
The ID of the function call.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeConversationItemFunctionCallOutput = object { call_id, output, type, 3 more } A function call output item in a Realtime conversation.
The ID of the function call this output is for.
The output of the function call, this is free text and can contain any information or simply be empty.
The type of the item. Always function_call_output.
The unique ID of the item. This may be provided by the client or generated by the server.
Identifier for the API object being returned - always realtime.item. Optional when creating a new item.
status: optional "completed" or "incomplete" or "in_progress"The status of the item. Has no effect on the conversation.
RealtimeMcpApprovalResponse = object { id, approval_request_id, approve, 2 more } A Realtime item responding to an MCP approval request.
The unique ID of the approval response.
The ID of the approval request being answered.
Whether the request was approved.
The type of the item. Always mcp_approval_response.
Optional reason for the decision.
RealtimeMcpListTools = object { server_label, tools, type, id } A Realtime item listing tools available on an MCP server.
The label of the MCP server.
tools: array of object { input_schema, name, annotations, description } The tools available on the server.
The JSON schema describing the tool's input.
The name of the tool.
Additional annotations about the tool.
The description of the tool.
The type of the item. Always mcp_list_tools.
The unique ID of the list.
RealtimeMcpToolCall = object { id, arguments, name, 5 more } A Realtime item representing an invocation of a tool on an MCP server.
The unique ID of the tool call.
A JSON string of the arguments passed to the tool.
The name of the tool that was run.
The label of the MCP server running the tool.
The type of the item. Always mcp_call.
The ID of an associated approval request, if any.
error: optional RealtimeMcpProtocolError { code, message, type } or RealtimeMcpToolExecutionError { message, type } or RealtimeMcphttpError { code, message, type } The error from the tool call, if any.
RealtimeMcpProtocolError = object { code, message, type }
RealtimeMcpToolExecutionError = object { message, type }
RealtimeMcphttpError = object { code, message, type }
The output from the tool call.
RealtimeMcpApprovalRequest = object { id, arguments, name, 2 more } A Realtime item requesting human approval of a tool invocation.
The unique ID of the approval request.
A JSON string of arguments for the tool.
The name of the tool to run.
The label of the MCP server making the request.
The type of the item. Always mcp_approval_request.
The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format, (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.
Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.
max_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
output_modalities: optional array of "text" or "audio"The set of modalities the model used to respond; currently the only possible values are ["audio"] and ["text"]. Audio output always includes a text transcript. Setting the output modality to text will disable audio output from the model.
Reference to a prompt template and its variables. Learn more.
tool_choice: optional ToolChoiceOptions or ToolChoiceFunction { name, type } or ToolChoiceMcp { server_label, type, name } How the model chooses tools. Provide one of the string modes or force a specific
function/MCP tool.
ToolChoiceOptions = "none" or "auto" or "required"Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
ToolChoiceFunction = object { name, type } Use this option to force the model to call a specific function.
The name of the function to call.
For function calling, the type is always function.
ToolChoiceMcp = object { server_label, type, name } Use this option to force the model to call a specific tool on a remote MCP server.
The label of the MCP server to use.
For MCP tools, the type is always mcp.
The name of the tool to call on the server.
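A quick sketch of the three tool_choice forms on a response.create event (the function and server names are illustrative):

```ts
// Sketch: string mode, forced function, and forced MCP tool.
const choiceAuto = "auto";                                                          // ToolChoiceOptions
const choiceFunction = { type: "function", name: "get_weather" };                   // ToolChoiceFunction
const choiceMcp = { type: "mcp", server_label: "acme_docs", name: "search_docs" };  // ToolChoiceMcp

function createResponseForcingTool(ws: WebSocket): void {
  ws.send(JSON.stringify({
    type: "response.create",
    response: { tool_choice: choiceFunction }, // overrides the session setting for this response only
  }));
}
```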
tools: optional array of RealtimeFunctionTool { description, name, parameters, type } or object { server_label, type, allowed_tools, 6 more } Tools available to the model.
RealtimeFunctionTool = object { description, name, parameters, type }
The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).
The name of the function.
Parameters of the function in JSON Schema.
The type of the tool, i.e. function.
McpTool = object { server_label, type, allowed_tools, 6 more } Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
A label for this MCP server, used to identify it in tool calls.
The type of the MCP tool. Always mcp.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
A string array of allowed tool names
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox:
connector_dropbox
- Gmail:
connector_gmail
- Google Calendar:
connector_googlecalendar
- Google Drive:
connector_googledrive
- Microsoft Teams:
connector_microsoftteams
- Outlook Calendar:
connector_outlookcalendar
- Outlook Email:
connector_outlookemail
- SharePoint:
connector_sharepoint

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
never: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
McpToolApprovalSetting = "always" or "never"Specify a single approval policy for all tools. One of always or
never. When set to always, all tools will require approval. When
set to never, all tools will not require approval.
Specify a single approval policy for all tools. One of always or
never. When set to always, all tools will require approval. When
set to never, all tools will not require approval.
Optional description of the MCP server, used to provide more context.
The URL for the MCP server. One of server_url or connector_id must be
provided.
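The following sketch shows one way an MCP tool entry in the tools array could be assembled from the fields above. The server label, URL, and token are placeholders, and the authorization field name is assumed from the description of the OAuth access token rather than spelled out here.

```ts
// Sketch of a remote MCP server entry for the tools array (all values are placeholders).
const mcpTool = {
  type: "mcp",
  server_label: "docs",                       // label used to identify the server in tool calls
  server_url: "https://example.com/mcp",      // or provide connector_id (e.g. "connector_dropbox") instead
  authorization: "<oauth-access-token>",      // assumed field name for the OAuth access token
  allowed_tools: { read_only: true },         // filter object; a plain string array of tool names also works
  require_approval: "never",                  // "always", "never", or an { always, never } filter object
};
```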
RealtimeResponseStatus = object { error, reason, type } Additional details about the status.
Additional details about the status.
error: optional object { code, type } A description of the error that caused the response to fail,
populated when the status is failed.
A description of the error that caused the response to fail,
populated when the status is failed.
Error code, if any.
The type of error.
reason: optional "turn_detected" or "client_cancelled" or "max_output_tokens" or "content_filter"The reason the Response did not complete. For a cancelled Response, one of turn_detected (the server VAD detected a new start of speech) or client_cancelled (the client sent a cancel event). For an incomplete Response, one of max_output_tokens or content_filter (the server-side safety filter activated and cut off the response).
The reason the Response did not complete. For a cancelled Response, one of turn_detected (the server VAD detected a new start of speech) or client_cancelled (the client sent a cancel event). For an incomplete Response, one of max_output_tokens or content_filter (the server-side safety filter activated and cut off the response).
type: optional "completed" or "cancelled" or "failed" or "incomplete"The type of error that caused the response to fail, corresponding
with the status field (completed, cancelled, incomplete,
failed).
The type of error that caused the response to fail, corresponding
with the status field (completed, cancelled, incomplete,
failed).
RealtimeResponseUsage = object { input_token_details, input_tokens, output_token_details, 2 more } Usage statistics for the Response; this will correspond to billing. A
Realtime API session will maintain a conversation context and append new
Items to the Conversation, thus output from previous turns (text and
audio tokens) will become the input for later turns.
Usage statistics for the Response; this will correspond to billing. A Realtime API session will maintain a conversation context and append new Items to the Conversation, thus output from previous turns (text and audio tokens) will become the input for later turns.
Details about the input tokens used in the Response. Cached tokens are tokens from previous turns in the conversation that are included as context for the current response. Cached tokens here are counted as a subset of input tokens, meaning input tokens will include cached and uncached tokens.
The number of input tokens used in the Response, including text and audio tokens.
Details about the output tokens used in the Response.
The number of output tokens sent in the Response, including text and audio tokens.
The total number of tokens in the Response including input and output text and audio tokens.
RealtimeResponseUsageInputTokenDetails = object { audio_tokens, cached_tokens, cached_tokens_details, 2 more } Details about the input tokens used in the Response. Cached tokens are tokens from previous turns in the conversation that are included as context for the current response. Cached tokens here are counted as a subset of input tokens, meaning input tokens will include cached and uncached tokens.
Details about the input tokens used in the Response. Cached tokens are tokens from previous turns in the conversation that are included as context for the current response. Cached tokens here are counted as a subset of input tokens, meaning input tokens will include cached and uncached tokens.
The number of audio tokens used as input for the Response.
The number of cached tokens used as input for the Response.
cached_tokens_details: optional object { audio_tokens, image_tokens, text_tokens } Details about the cached tokens used as input for the Response.
Details about the cached tokens used as input for the Response.
The number of cached audio tokens used as input for the Response.
The number of cached image tokens used as input for the Response.
The number of cached text tokens used as input for the Response.
The number of image tokens used as input for the Response.
The number of text tokens used as input for the Response.
RealtimeResponseUsageOutputTokenDetails = object { audio_tokens, text_tokens } Details about the output tokens used in the Response.
Details about the output tokens used in the Response.
The number of audio tokens used in the Response.
The number of text tokens used in the Response.
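As a rough, invented example of the usage shape described above (the field names come from this reference; the numbers are made up and chosen only to be internally consistent, with cached tokens counted as a subset of input tokens):

```ts
// Invented numbers: input 1200 = 400 text + 800 audio, of which 600 were cached; output 300 = 100 text + 200 audio.
const usage = {
  input_tokens: 1200,
  output_tokens: 300,
  total_tokens: 1500,
  input_token_details: {
    text_tokens: 400,
    audio_tokens: 800,
    image_tokens: 0,
    cached_tokens: 600,
    cached_tokens_details: { text_tokens: 350, audio_tokens: 250, image_tokens: 0 },
  },
  output_token_details: { text_tokens: 100, audio_tokens: 200 },
};
```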
RealtimeServerEvent = ConversationCreatedEvent { conversation, event_id, type } or ConversationItemCreatedEvent { event_id, item, type, previous_item_id } or ConversationItemDeletedEvent { event_id, item_id, type } or 43 moreA realtime server event.
A realtime server event.
ConversationCreatedEvent = object { conversation, event_id, type } Returned when a conversation is created. Emitted right after session creation.
Returned when a conversation is created. Emitted right after session creation.
conversation: object { id, object } The conversation resource.
The conversation resource.
The unique ID of the conversation.
The object type, must be realtime.conversation.
The unique ID of the server event.
The event type, must be conversation.created.
ConversationItemCreatedEvent = object { event_id, item, type, previous_item_id } Returned when a conversation item is created. There are several scenarios that produce this event:
- The server is generating a Response, which if successful will produce
either one or two Items, which will be of type
message
(role assistant) or type function_call.
- The input audio buffer has been committed, either by the client or the
server (in
server_vad mode). The server will take the content of the
input audio buffer and add it to a new user message Item.
- The client has sent a
conversation.item.create event to add a new Item
to the Conversation.
The unique ID of the server event.
A single item within a Realtime conversation.
The event type, must be conversation.item.created.
The ID of the preceding item in the Conversation context, allows the
client to understand the order of the conversation. Can be null if the
item has no predecessor.
ConversationItemDeletedEvent = object { event_id, item_id, type } Returned when an item in the conversation is deleted by the client with a
conversation.item.delete event. This event is used to synchronize the
server's understanding of the conversation history with the client's view.
Returned when an item in the conversation is deleted by the client with a
conversation.item.delete event. This event is used to synchronize the
server's understanding of the conversation history with the client's view.
The unique ID of the server event.
The ID of the item that was deleted.
The event type, must be conversation.item.deleted.
ConversationItemInputAudioTranscriptionCompletedEvent = object { content_index, event_id, item_id, 4 more } This event is the output of audio transcription for user audio written to the
user audio buffer. Transcription begins when the input audio buffer is
committed by the client or server (when VAD is enabled). Transcription runs
asynchronously with Response creation, so this event may come before or after
the Response events.
Realtime API models accept audio natively, and thus input transcription is a
separate process run on a separate ASR (Automatic Speech Recognition) model.
The transcript may diverge somewhat from the model's interpretation, and
should be treated as a rough guide.
This event is the output of audio transcription for user audio written to the user audio buffer. Transcription begins when the input audio buffer is committed by the client or server (when VAD is enabled). Transcription runs asynchronously with Response creation, so this event may come before or after the Response events.
Realtime API models accept audio natively, and thus input transcription is a separate process run on a separate ASR (Automatic Speech Recognition) model. The transcript may diverge somewhat from the model's interpretation, and should be treated as a rough guide.
The index of the content part containing the audio.
The unique ID of the server event.
The ID of the item containing the audio that is being transcribed.
The transcribed text.
The event type, must be
conversation.item.input_audio_transcription.completed.
usage: object { input_tokens, output_tokens, total_tokens, 2 more } or object { seconds, type } Usage statistics for the transcription; this is billed according to the ASR model's pricing rather than the realtime model's pricing.
Usage statistics for the transcription; this is billed according to the ASR model's pricing rather than the realtime model's pricing.
TokenUsage = object { input_tokens, output_tokens, total_tokens, 2 more } Usage statistics for models billed by token usage.
Usage statistics for models billed by token usage.
Number of input tokens billed for this request.
Number of output tokens generated.
Total number of tokens used (input + output).
The type of the usage object. Always tokens for this variant.
input_token_details: optional object { audio_tokens, text_tokens } Details about the input tokens billed for this request.
Details about the input tokens billed for this request.
Number of audio tokens billed for this request.
Number of text tokens billed for this request.
DurationUsage = object { seconds, type } Usage statistics for models billed by audio input duration.
Usage statistics for models billed by audio input duration.
Duration of the input audio in seconds.
The type of the usage object. Always duration for this variant.
The log probabilities of the transcription.
The log probabilities of the transcription.
The token that was used to generate the log probability.
The bytes that were used to generate the log probability.
The log probability of the token.
ConversationItemInputAudioTranscriptionDeltaEvent = object { event_id, item_id, type, 3 more } Returned when the text value of an input audio transcription content part is updated with incremental transcription results.
Returned when the text value of an input audio transcription content part is updated with incremental transcription results.
The unique ID of the server event.
The ID of the item containing the audio that is being transcribed.
The event type, must be conversation.item.input_audio_transcription.delta.
The index of the content part in the item's content array.
The text delta.
The log probabilities of the transcription. These can be enabled by configuring the session with "include": ["item.input_audio_transcription.logprobs"]. Each entry in the array corresponds to the log probability of which token would be selected for this chunk of transcription. This can help identify whether there were multiple valid options for a given chunk of transcription.
The log probabilities of the transcription. These can be enabled by configuring the session with "include": ["item.input_audio_transcription.logprobs"]. Each entry in the array corresponds to the log probability of which token would be selected for this chunk of transcription. This can help identify whether there were multiple valid options for a given chunk of transcription.
The token that was used to generate the log probability.
The bytes that were used to generate the log probability.
The log probability of the token.
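A hedged sketch of how a client might opt into these log probabilities when configuring the session; the session.update envelope is assumed from the session events described later in this reference, and the include value is the one named above.

```ts
// Sketch: request input-audio transcription logprobs in server outputs.
const sessionUpdate = {
  type: "session.update",
  session: {
    type: "realtime",
    include: ["item.input_audio_transcription.logprobs"],
  },
};
// ws.send(JSON.stringify(sessionUpdate));  // assuming a WebSocket transport named ws
```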
ConversationItemInputAudioTranscriptionFailedEvent = object { content_index, error, event_id, 2 more } Returned when input audio transcription is configured, and a transcription
request for a user message failed. These events are separate from other
error events so that the client can identify the related Item.
Returned when input audio transcription is configured, and a transcription
request for a user message failed. These events are separate from other
error events so that the client can identify the related Item.
The index of the content part containing the audio.
error: object { code, message, param, type } Details of the transcription error.
Details of the transcription error.
Error code, if any.
A human-readable error message.
Parameter related to the error, if any.
The type of error.
The unique ID of the server event.
The ID of the user message item.
The event type, must be
conversation.item.input_audio_transcription.failed.
ConversationItemRetrieved = object { event_id, item, type } Returned when a conversation item is retrieved with conversation.item.retrieve. This is provided as a way to fetch the server's representation of an item, for example to get access to the post-processed audio data after noise cancellation and VAD. It includes the full content of the Item, including audio data.
Returned when a conversation item is retrieved with conversation.item.retrieve. This is provided as a way to fetch the server's representation of an item, for example to get access to the post-processed audio data after noise cancellation and VAD. It includes the full content of the Item, including audio data.
The unique ID of the server event.
A single item within a Realtime conversation.
The event type, must be conversation.item.retrieved.
ConversationItemTruncatedEvent = object { audio_end_ms, content_index, event_id, 2 more } Returned when an earlier assistant audio message item is truncated by the
client with a conversation.item.truncate event. This event is used to
synchronize the server's understanding of the audio with the client's playback.
This action will truncate the audio and remove the server-side text transcript
to ensure there is no text in the context that hasn't been heard by the user.
Returned when an earlier assistant audio message item is truncated by the
client with a conversation.item.truncate event. This event is used to
synchronize the server's understanding of the audio with the client's playback.
This action will truncate the audio and remove the server-side text transcript to ensure there is no text in the context that hasn't been heard by the user.
The duration up to which the audio was truncated, in milliseconds.
The index of the content part that was truncated.
The unique ID of the server event.
The ID of the assistant message item that was truncated.
The event type, must be conversation.item.truncated.
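For illustration, the client event that produces this server event might be built as follows; the item ID is hypothetical, and the field names mirror the truncated-event fields above.

```ts
// Sketch: ask the server to truncate an assistant audio item at the point playback actually stopped.
const truncateEvent = {
  type: "conversation.item.truncate",
  item_id: "item_abc123",   // hypothetical assistant message item ID
  content_index: 0,         // index of the audio content part
  audio_end_ms: 1500,       // how much audio the user actually heard
};
// ws.send(JSON.stringify(truncateEvent));  // assuming a WebSocket transport named ws
```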
RealtimeErrorEvent = object { error, event_id, type } Returned when an error occurs, which could be a client problem or a server
problem. Most errors are recoverable and the session will stay open; we
recommend that implementers monitor and log error messages by default.
Returned when an error occurs, which could be a client problem or a server problem. Most errors are recoverable and the session will stay open; we recommend that implementers monitor and log error messages by default.
Details of the error.
The unique ID of the server event.
The event type, must be error.
InputAudioBufferClearedEvent = object { event_id, type } Returned when the input audio buffer is cleared by the client with an
input_audio_buffer.clear event.
Returned when the input audio buffer is cleared by the client with an
input_audio_buffer.clear event.
The unique ID of the server event.
The event type, must be input_audio_buffer.cleared.
InputAudioBufferCommittedEvent = object { event_id, item_id, type, previous_item_id } Returned when an input audio buffer is committed, either by the client or
automatically in server VAD mode. The item_id property is the ID of the user
message item that will be created, thus a conversation.item.created event
will also be sent to the client.
Returned when an input audio buffer is committed, either by the client or
automatically in server VAD mode. The item_id property is the ID of the user
message item that will be created, thus a conversation.item.created event
will also be sent to the client.
The unique ID of the server event.
The ID of the user message item that will be created.
The event type, must be input_audio_buffer.committed.
The ID of the preceding item after which the new item will be inserted.
Can be null if the item has no predecessor.
InputAudioBufferDtmfEventReceivedEvent = object { event, received_at, type } SIP Only: Returned when a DTMF event is received. A DTMF event is a message that
represents a telephone keypad press (0–9, *, #, A–D). The event property
is the keypad key that the user pressed. The received_at property is the UTC Unix
timestamp at which the server received the event.
SIP Only: Returned when a DTMF event is received. A DTMF event is a message that
represents a telephone keypad press (0–9, *, #, A–D). The event property
is the keypad key that the user pressed. The received_at property is the UTC Unix
timestamp at which the server received the event.
The telephone keypad key that was pressed by the user.
UTC Unix timestamp when the DTMF event was received by the server.
The event type, must be input_audio_buffer.dtmf_event_received.
InputAudioBufferSpeechStartedEvent = object { audio_start_ms, event_id, item_id, type } Sent by the server when in server_vad mode to indicate that speech has been
detected in the audio buffer. This can happen any time audio is added to the
buffer (unless speech is already detected). The client may want to use this
event to interrupt audio playback or provide visual feedback to the user.
The client should expect to receive an input_audio_buffer.speech_stopped event
when speech stops. The item_id property is the ID of the user message item
that will be created when speech stops and will also be included in the
input_audio_buffer.speech_stopped event (unless the client manually commits
the audio buffer during VAD activation).
Sent by the server when in server_vad mode to indicate that speech has been
detected in the audio buffer. This can happen any time audio is added to the
buffer (unless speech is already detected). The client may want to use this
event to interrupt audio playback or provide visual feedback to the user.
The client should expect to receive an input_audio_buffer.speech_stopped event
when speech stops. The item_id property is the ID of the user message item
that will be created when speech stops and will also be included in the
input_audio_buffer.speech_stopped event (unless the client manually commits
the audio buffer during VAD activation).
Milliseconds from the start of all audio written to the buffer during the
session when speech was first detected. This will correspond to the
beginning of audio sent to the model, and thus includes the
prefix_padding_ms configured in the Session.
The unique ID of the server event.
The ID of the user message item that will be created when speech stops.
The event type, must be input_audio_buffer.speech_started.
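A minimal sketch of a client reacting to this event to interrupt playback; stopLocalPlayback is a hypothetical hook into your own audio layer, not part of the API.

```ts
// Sketch: stop local audio output when the server detects new user speech.
function stopLocalPlayback(): void {
  // hypothetical hook into your audio output pipeline
}

function handleServerEvent(event: { type: string; item_id?: string; audio_start_ms?: number }): void {
  if (event.type === "input_audio_buffer.speech_started") {
    stopLocalPlayback();
    console.log(`speech started at ${event.audio_start_ms} ms, pending item ${event.item_id}`);
  }
}
```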
InputAudioBufferSpeechStoppedEvent = object { audio_end_ms, event_id, item_id, type } Returned in server_vad mode when the server detects the end of speech in
the audio buffer. The server will also send a conversation.item.created
event with the user message item that is created from the audio buffer.
Returned in server_vad mode when the server detects the end of speech in
the audio buffer. The server will also send a conversation.item.created
event with the user message item that is created from the audio buffer.
Milliseconds since the session started when speech stopped. This will
correspond to the end of audio sent to the model, and thus includes the
min_silence_duration_ms configured in the Session.
The unique ID of the server event.
The ID of the user message item that will be created.
The event type, must be input_audio_buffer.speech_stopped.
RateLimitsUpdatedEvent = object { event_id, rate_limits, type } Emitted at the beginning of a Response to indicate the updated rate limits.
When a Response is created some tokens will be "reserved" for the output
tokens, the rate limits shown here reflect that reservation, which is then
adjusted accordingly once the Response is completed.
Emitted at the beginning of a Response to indicate the updated rate limits. When a Response is created some tokens will be "reserved" for the output tokens, the rate limits shown here reflect that reservation, which is then adjusted accordingly once the Response is completed.
The unique ID of the server event.
rate_limits: array of object { limit, name, remaining, reset_seconds } List of rate limit information.
List of rate limit information.
The maximum allowed value for the rate limit.
name: optional "requests" or "tokens"The name of the rate limit (requests, tokens).
The name of the rate limit (requests, tokens).
The remaining value before the limit is reached.
Seconds until the rate limit resets.
The event type, must be rate_limits.updated.
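A small, hedged sketch of logging the rate limit entries carried by this event; the field names follow the listing above, and the handler itself is illustrative only.

```ts
// Sketch: log each rate limit entry from a rate_limits.updated event.
interface RateLimit {
  name?: "requests" | "tokens";
  limit?: number;
  remaining?: number;
  reset_seconds?: number;
}

function logRateLimits(event: { type: string; rate_limits: RateLimit[] }): void {
  for (const rl of event.rate_limits) {
    console.log(`${rl.name}: ${rl.remaining}/${rl.limit}, resets in ${rl.reset_seconds}s`);
  }
}
```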
ResponseAudioDeltaEvent = object { content_index, delta, event_id, 4 more } Returned when the model-generated audio is updated.
Returned when the model-generated audio is updated.
The index of the content part in the item's content array.
Base64-encoded audio data delta.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.output_audio.delta.
ResponseAudioDoneEvent = object { content_index, event_id, item_id, 3 more } Returned when the model-generated audio is done. Also emitted when a Response
is interrupted, incomplete, or cancelled.
Returned when the model-generated audio is done. Also emitted when a Response is interrupted, incomplete, or cancelled.
The index of the content part in the item's content array.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.output_audio.done.
ResponseAudioTranscriptDeltaEvent = object { content_index, delta, event_id, 4 more } Returned when the model-generated transcription of audio output is updated.
Returned when the model-generated transcription of audio output is updated.
The index of the content part in the item's content array.
The transcript delta.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.output_audio_transcript.delta.
ResponseAudioTranscriptDoneEvent = object { content_index, event_id, item_id, 4 more } Returned when the model-generated transcription of audio output is done
streaming. Also emitted when a Response is interrupted, incomplete, or
cancelled.
Returned when the model-generated transcription of audio output is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
The index of the content part in the item's content array.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The final transcript of the audio.
The event type, must be response.output_audio_transcript.done.
ResponseContentPartAddedEvent = object { content_index, event_id, item_id, 4 more } Returned when a new content part is added to an assistant message item during
response generation.
Returned when a new content part is added to an assistant message item during response generation.
The index of the content part in the item's content array.
The unique ID of the server event.
The ID of the item to which the content part was added.
The index of the output item in the response.
part: object { audio, text, transcript, type } The content part that was added.
The content part that was added.
Base64-encoded audio data (if type is "audio").
The text content (if type is "text").
The transcript of the audio (if type is "audio").
type: optional "audio" or "text"The content type ("text", "audio").
The content type ("text", "audio").
The ID of the response.
The event type, must be response.content_part.added.
ResponseContentPartDoneEvent = object { content_index, event_id, item_id, 4 more } Returned when a content part is done streaming in an assistant message item.
Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when a content part is done streaming in an assistant message item. Also emitted when a Response is interrupted, incomplete, or cancelled.
The index of the content part in the item's content array.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
part: object { audio, text, transcript, type } The content part that is done.
The content part that is done.
Base64-encoded audio data (if type is "audio").
The text content (if type is "text").
The transcript of the audio (if type is "audio").
type: optional "audio" or "text"The content type ("text", "audio").
The content type ("text", "audio").
The ID of the response.
The event type, must be response.content_part.done.
ResponseCreatedEvent = object { event_id, response, type } Returned when a new Response is created. The first event of response creation,
where the response is in an initial state of in_progress.
Returned when a new Response is created. The first event of response creation,
where the response is in an initial state of in_progress.
The unique ID of the server event.
The response resource.
The event type, must be response.created.
ResponseDoneEvent = object { event_id, response, type } Returned when a Response is done streaming. Always emitted, no matter the
final state. The Response object included in the response.done event will
include all output Items in the Response but will omit the raw audio data.
Clients should check the status field of the Response to determine if it was successful
(completed) or if there was another outcome: cancelled, failed, or incomplete.
A response will contain all output items that were generated during the response, excluding
any audio content.
Returned when a Response is done streaming. Always emitted, no matter the
final state. The Response object included in the response.done event will
include all output Items in the Response but will omit the raw audio data.
Clients should check the status field of the Response to determine if it was successful
(completed) or if there was another outcome: cancelled, failed, or incomplete.
A response will contain all output items that were generated during the response, excluding any audio content.
The unique ID of the server event.
The response resource.
The event type, must be response.done.
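A hedged sketch of checking the final Response state on response.done; the status values and usage fields follow the objects described earlier in this reference, while the handler shape is illustrative only.

```ts
// Sketch: inspect the final status (and token usage) of a finished Response.
function handleResponseDone(
  event: { type: string; response: { status?: string; usage?: { total_tokens?: number } } },
): void {
  const { status, usage } = event.response;
  if (status === "completed") {
    console.log(`response completed, total tokens: ${usage?.total_tokens ?? "unknown"}`);
  } else {
    // cancelled, failed, or incomplete; the status details object described above explains why
    console.warn(`response ended with status: ${status}`);
  }
}
```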
ResponseFunctionCallArgumentsDeltaEvent = object { call_id, delta, event_id, 4 more } Returned when the model-generated function call arguments are updated.
Returned when the model-generated function call arguments are updated.
The ID of the function call.
The arguments delta as a JSON string.
The unique ID of the server event.
The ID of the function call item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.function_call_arguments.delta.
ResponseFunctionCallArgumentsDoneEvent = object { arguments, call_id, event_id, 5 more } Returned when the model-generated function call arguments are done streaming.
Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when the model-generated function call arguments are done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
The final arguments as a JSON string.
The ID of the function call.
The unique ID of the server event.
The ID of the function call item.
The name of the function that was called.
The index of the output item in the response.
The ID of the response.
The event type, must be response.function_call_arguments.done.
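For illustration, a client might parse the finalized arguments and then return a tool result to the model; the function_call_output item shape and the response.create follow-up are assumptions drawn from the broader Realtime conversation item types rather than spelled out in this section.

```ts
// Sketch: parse finalized function-call arguments and send back a (hypothetical) result.
function handleFunctionCallDone(
  event: { type: string; call_id: string; name: string; arguments: string },
  send: (payload: object) => void,
): void {
  const args = JSON.parse(event.arguments);      // the final arguments arrive as a JSON string
  const result = { ok: true, echoed: args };     // placeholder for your own tool logic
  send({
    type: "conversation.item.create",            // item shape assumed, not defined in this section
    item: { type: "function_call_output", call_id: event.call_id, output: JSON.stringify(result) },
  });
  send({ type: "response.create" });             // ask the model to continue using the tool result
}
```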
ResponseOutputItemAddedEvent = object { event_id, item, output_index, 2 more } Returned when a new Item is created during Response generation.
Returned when a new Item is created during Response generation.
The unique ID of the server event.
A single item within a Realtime conversation.
The index of the output item in the Response.
The ID of the Response to which the item belongs.
The event type, must be response.output_item.added.
ResponseOutputItemDoneEvent = object { event_id, item, output_index, 2 more } Returned when an Item is done streaming. Also emitted when a Response is
interrupted, incomplete, or cancelled.
Returned when an Item is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
The unique ID of the server event.
A single item within a Realtime conversation.
The index of the output item in the Response.
The ID of the Response to which the item belongs.
The event type, must be response.output_item.done.
ResponseTextDeltaEvent = object { content_index, delta, event_id, 4 more } Returned when the text value of an "output_text" content part is updated.
Returned when the text value of an "output_text" content part is updated.
The index of the content part in the item's content array.
The text delta.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.output_text.delta.
ResponseTextDoneEvent = object { content_index, event_id, item_id, 4 more } Returned when the text value of an "output_text" content part is done streaming. Also
emitted when a Response is interrupted, incomplete, or cancelled.
Returned when the text value of an "output_text" content part is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
The index of the content part in the item's content array.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The final text content.
The event type, must be response.output_text.done.
SessionCreatedEvent = object { event_id, session, type } Returned when a Session is created. Emitted automatically when a new
connection is established as the first server event. This event will contain
the default Session configuration.
Returned when a Session is created. Emitted automatically when a new connection is established as the first server event. This event will contain the default Session configuration.
The unique ID of the server event.
session: RealtimeSessionCreateRequest { type, audio, include, 9 more } or RealtimeTranscriptionSessionCreateRequest { type, audio, include } The session configuration.
The session configuration.
RealtimeSessionCreateRequest = object { type, audio, include, 9 more } Realtime session object configuration.
Realtime session object configuration.
The type of session to create. Always realtime for the Realtime API.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.
Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.
max_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
model: optional string or "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
The Realtime model used for this session.
UnionMember1 = "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
The Realtime model used for this session.
output_modalities: optional array of "text" or "audio"The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
Reference to a prompt template and its variables. Learn more.
How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.
Tools available to the model.
The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.
Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.
Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
RealtimeTranscriptionSessionCreateRequest = object { type, audio, include } Realtime transcription session object configuration.
Realtime transcription session object configuration.
The type of session to create. Always transcription for transcription sessions.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The event type, must be session.created.
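Putting several of the realtime session fields together, a client-side session.update payload might look like the following sketch; the instruction text and limits are arbitrary examples, and the session.update envelope is assumed from the session events described in this reference.

```ts
// Sketch of a session.update payload using fields from RealtimeSessionCreateRequest (values are examples).
const sessionUpdate = {
  type: "session.update",
  session: {
    type: "realtime",
    instructions: "Be extremely succinct and speak quickly.",  // example guidance only
    output_modalities: ["audio"],                              // or ["text"]; not both at once
    max_output_tokens: 1024,                                   // integer 1-4096, or "inf"
    tool_choice: "auto",
    tracing: "auto",                                           // or null to disable, or a configuration object
  },
};
// ws.send(JSON.stringify(sessionUpdate));  // assuming a WebSocket transport named ws
```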
SessionUpdatedEvent = object { event_id, session, type } Returned when a session is updated with a session.update event, unless
there is an error.
Returned when a session is updated with a session.update event, unless
there is an error.
The unique ID of the server event.
session: RealtimeSessionCreateRequest { type, audio, include, 9 more } or RealtimeTranscriptionSessionCreateRequest { type, audio, include } The session configuration.
The session configuration.
RealtimeSessionCreateRequest = object { type, audio, include, 9 more } Realtime session object configuration.
Realtime session object configuration.
The type of session to create. Always realtime for the Realtime API.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.
Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.
max_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
model: optional string or "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
The Realtime model used for this session.
UnionMember1 = "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
The Realtime model used for this session.
output_modalities: optional array of "text" or "audio"The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
Reference to a prompt template and its variables. Learn more.
How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.
Tools available to the model.
The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.
Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.
Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
RealtimeTranscriptionSessionCreateRequest = object { type, audio, include } Realtime transcription session object configuration.
Realtime transcription session object configuration.
The type of session to create. Always transcription for transcription sessions.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The event type, must be session.updated.
OutputAudioBufferStarted = object { event_id, response_id, type } WebRTC/SIP Only: Emitted when the server begins streaming audio to the client. This event is
emitted after an audio content part has been added (response.content_part.added)
to the response.
Learn more.
WebRTC/SIP Only: Emitted when the server begins streaming audio to the client. This event is
emitted after an audio content part has been added (response.content_part.added)
to the response.
Learn more.
The unique ID of the server event.
The unique ID of the response that produced the audio.
The event type, must be output_audio_buffer.started.
OutputAudioBufferStopped = object { event_id, response_id, type } WebRTC/SIP Only: Emitted when the output audio buffer has been completely drained on the server,
and no more audio is forthcoming. This event is emitted after the full response
data has been sent to the client (response.done).
Learn more.
WebRTC/SIP Only: Emitted when the output audio buffer has been completely drained on the server,
and no more audio is forthcoming. This event is emitted after the full response
data has been sent to the client (response.done).
Learn more.
The unique ID of the server event.
The unique ID of the response that produced the audio.
The event type, must be output_audio_buffer.stopped.
OutputAudioBufferCleared = object { event_id, response_id, type } WebRTC/SIP Only: Emitted when the output audio buffer is cleared. This happens either in VAD
mode when the user has interrupted (input_audio_buffer.speech_started),
or when the client has emitted the output_audio_buffer.clear event to manually
cut off the current audio response.
Learn more.
WebRTC/SIP Only: Emitted when the output audio buffer is cleared. This happens either in VAD
mode when the user has interrupted (input_audio_buffer.speech_started),
or when the client has emitted the output_audio_buffer.clear event to manually
cut off the current audio response.
Learn more.
The unique ID of the server event.
The unique ID of the response that produced the audio.
The event type, must be output_audio_buffer.cleared.
ConversationItemAdded = object { event_id, item, type, previous_item_id } Sent by the server when an Item is added to the default Conversation. This can happen in several cases:
- When the client sends a
conversation.item.create event.
- When the input audio buffer is committed. In this case the item will be a user message containing the audio from the buffer.
- When the model is generating a Response. In this case the
conversation.item.added event will be sent when the model starts generating a specific Item, and thus it will not yet have any content (and status will be in_progress).
The event will include the full content of the Item (except when the model is generating a Response, in which case the item will not yet have content), excluding audio data, which can be retrieved separately with a conversation.item.retrieve event if necessary.
The unique ID of the server event.
A single item within a Realtime conversation.
The event type, must be conversation.item.added.
The ID of the item that precedes this one, if any. This is used to maintain ordering when items are inserted.
ConversationItemDone = object { event_id, item, type, previous_item_id } Returned when a conversation item is finalized.
The event will include the full content of the Item except for audio data, which can be retrieved separately with a conversation.item.retrieve event if needed.
Returned when a conversation item is finalized.
The event will include the full content of the Item except for audio data, which can be retrieved separately with a conversation.item.retrieve event if needed.
The unique ID of the server event.
A single item within a Realtime conversation.
The event type, must be conversation.item.done.
The ID of the item that precedes this one, if any. This is used to maintain ordering when items are inserted.
InputAudioBufferTimeoutTriggered = object { audio_end_ms, audio_start_ms, event_id, 2 more } Returned when the Server VAD timeout is triggered for the input audio buffer. This is configured
with idle_timeout_ms in the turn_detection settings of the session, and it indicates that
there hasn't been any speech detected for the configured duration.
The audio_start_ms and audio_end_ms fields indicate the segment of audio after the last
model response up to the triggering time, as an offset from the beginning of audio written
to the input audio buffer. This means it demarcates the segment of audio that was silent and
the difference between the start and end values will roughly match the configured timeout.
The empty audio will be committed to the conversation as an input_audio item (there will be an
input_audio_buffer.committed event) and a model response will be generated. There may be speech
that didn't trigger VAD but is still detected by the model, so the model may respond with
something relevant to the conversation or a prompt to continue speaking.
Returned when the Server VAD timeout is triggered for the input audio buffer. This is configured
with idle_timeout_ms in the turn_detection settings of the session, and it indicates that
there hasn't been any speech detected for the configured duration.
The audio_start_ms and audio_end_ms fields indicate the segment of audio after the last
model response up to the triggering time, as an offset from the beginning of audio written
to the input audio buffer. This means it demarcates the segment of audio that was silent and
the difference between the start and end values will roughly match the configured timeout.
The empty audio will be committed to the conversation as an input_audio item (there will be an
input_audio_buffer.committed event) and a model response will be generated. There may be speech
that didn't trigger VAD but is still detected by the model, so the model may respond with
something relevant to the conversation or a prompt to continue speaking.
Millisecond offset of audio written to the input audio buffer at the time the timeout was triggered.
Millisecond offset of audio written to the input audio buffer that was after the playback time of the last model response.
The unique ID of the server event.
The ID of the item associated with this segment.
The event type, must be input_audio_buffer.timeout_triggered.
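A hedged sketch of enabling this timeout via the session's turn detection settings; idle_timeout_ms is named in the description above, while the other server VAD fields and the exact nesting of turn_detection inside the session are assumptions based on the turn_detection description later in this reference.

```ts
// Sketch: server VAD with an idle timeout so the server can prompt a silent user.
const turnDetection = {
  type: "server_vad",
  idle_timeout_ms: 6000,   // trigger input_audio_buffer.timeout_triggered after ~6s without speech
  create_response: true,   // assumed field: let the server start a response when a turn ends
};
// This object would be supplied under the session's turn detection settings
// (the exact nesting depends on which session shape you are configuring).
```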
ConversationItemInputAudioTranscriptionSegment = object { id, content_index, end, 6 more } Returned when an input audio transcription segment is identified for an item.
Returned when an input audio transcription segment is identified for an item.
The segment identifier.
The index of the input audio content part within the item.
End time of the segment in seconds.
The unique ID of the server event.
The ID of the item containing the input audio content.
The detected speaker label for this segment.
Start time of the segment in seconds.
The text for this segment.
The event type, must be conversation.item.input_audio_transcription.segment.
McpListToolsInProgress = object { event_id, item_id, type } Returned when listing MCP tools is in progress for an item.
Returned when listing MCP tools is in progress for an item.
The unique ID of the server event.
The ID of the MCP list tools item.
The event type, must be mcp_list_tools.in_progress.
McpListToolsCompleted = object { event_id, item_id, type } Returned when listing MCP tools has completed for an item.
Returned when listing MCP tools has completed for an item.
The unique ID of the server event.
The ID of the MCP list tools item.
The event type, must be mcp_list_tools.completed.
McpListToolsFailed = object { event_id, item_id, type } Returned when listing MCP tools has failed for an item.
Returned when listing MCP tools has failed for an item.
The unique ID of the server event.
The ID of the MCP list tools item.
The event type, must be mcp_list_tools.failed.
ResponseMcpCallArgumentsDelta = object { delta, event_id, item_id, 4 more } Returned when MCP tool call arguments are updated during response generation.
Returned when MCP tool call arguments are updated during response generation.
The JSON-encoded arguments delta.
The unique ID of the server event.
The ID of the MCP tool call item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.mcp_call_arguments.delta.
If present, indicates the delta text was obfuscated.
ResponseMcpCallArgumentsDone = object { arguments, event_id, item_id, 3 more } Returned when MCP tool call arguments are finalized during response generation.
Returned when MCP tool call arguments are finalized during response generation.
The final JSON-encoded arguments string.
The unique ID of the server event.
The ID of the MCP tool call item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.mcp_call_arguments.done.
ResponseMcpCallInProgress = object { event_id, item_id, output_index, type } Returned when an MCP tool call has started and is in progress.
Returned when an MCP tool call has started and is in progress.
The unique ID of the server event.
The ID of the MCP tool call item.
The index of the output item in the response.
The event type, must be response.mcp_call.in_progress.
ResponseMcpCallCompleted = object { event_id, item_id, output_index, type } Returned when an MCP tool call has completed successfully.
Returned when an MCP tool call has completed successfully.
The unique ID of the server event.
The ID of the MCP tool call item.
The index of the output item in the response.
The event type, must be response.mcp_call.completed.
ResponseMcpCallFailed = object { event_id, item_id, output_index, type } Returned when an MCP tool call has failed.
Returned when an MCP tool call has failed.
The unique ID of the server event.
The ID of the MCP tool call item.
The index of the output item in the response.
The event type, must be response.mcp_call.failed.
RealtimeSession = object { id, expires_at, include, 17 more } Realtime session object for the beta interface.
Realtime session object for the beta interface.
Unique identifier for the session that looks like sess_1234567890abcdef.
Expiration timestamp for the session, in seconds since epoch.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
input_audio_format: optional "pcm16" or "g711_ulaw" or "g711_alaw"The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw.
For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate,
single channel (mono), and little-endian byte order.
The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw.
For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate,
single channel (mono), and little-endian byte order.
input_audio_noise_reduction: optional object { type } Configuration for input audio noise reduction. This can be set to null to turn off.
Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model.
Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
Configuration for input audio noise reduction. This can be set to null to turn off.
Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model.
Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
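For example, a minimal sketch of this setting (the field and values follow the description above; wrapping it into a session payload is up to the client):

```ts
// Sketch: enable near-field noise reduction for a close-talking microphone; set the whole field to null to disable.
const inputAudioNoiseReduction = { type: "near_field" as const };  // or "far_field" for laptop/room microphones
```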
Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.
The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.
Note that the server sets default instructions which will be used if this
field is not set and are visible in the session.created event at the
start of the session.
max_response_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
modalities: optional array of "text" or "audio"The set of modalities the model can respond with. To disable audio,
set this to ["text"].
The set of modalities the model can respond with. To disable audio, set this to ["text"].
model: optional string or "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
The Realtime model used for this session.
UnionMember1 = "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
The Realtime model used for this session.
The object type. Always realtime.session.
output_audio_format: optional "pcm16" or "g711_ulaw" or "g711_alaw"The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw.
For pcm16, output audio is sampled at a rate of 24kHz.
The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw.
For pcm16, output audio is sampled at a rate of 24kHz.
Reference to a prompt template and its variables. Learn more.
The speed of the model's spoken response. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.
Sampling temperature for the model, limited to [0.6, 1.2]. For audio models a temperature of 0.8 is highly recommended for best performance.
How the model chooses tools. Options are auto, none, required, or
specify a function.
Tools (functions) available to the model.
Tools (functions) available to the model.
The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).
The name of the function.
Parameters of the function in JSON Schema.
The type of the tool, i.e. function.
tracing: optional "auto" or object { group_id, metadata, workflow_name } Configuration options for tracing. Set to null to disable tracing. Once
tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
Configuration options for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
Default tracing mode for the session.
TracingConfiguration = object { group_id, metadata, workflow_name } Granular configuration for tracing.
Granular configuration for tracing.
The group id to attach to this trace to enable filtering and grouping in the traces dashboard.
The arbitrary metadata to attach to this trace to enable filtering in the traces dashboard.
The name of the workflow to attach to this trace. This is used to name the trace in the traces dashboard.
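As an example, a granular tracing configuration might look like the following sketch (the surrounding session.update shape is an assumption; the workflow name, group id, and metadata values are illustrative):

# Sketch: granular tracing configuration; cannot be modified once enabled.
tracing_config = {
    "workflow_name": "customer-support-voice",  # names the trace in the traces dashboard
    "group_id": "store-1234",                   # enables filtering and grouping
    "metadata": {"environment": "staging"},     # arbitrary metadata for filtering
}
session_update = {"type": "session.update", "session": {"tracing": tracing_config}}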
turn_detection: optional object { type, create_response, idle_timeout_ms, 4 more } or object { type, create_response, eagerness, interrupt_response } Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.
Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.
Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
ServerVad = object { type, create_response, idle_timeout_ms, 4 more } Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.
Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.
Type of turn detection, server_vad to turn on simple Server VAD.
Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.
If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.
Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.
The timeout value will be applied after the last model response's audio has finished playing,
i.e. it's set to the response.done time plus audio playback duration.
An input_audio_buffer.timeout_triggered event (plus events
associated with the Response) will be emitted when the timeout is reached.
Idle timeout is currently only supported for server_vad mode.
Whether or not to automatically interrupt (cancel) any ongoing response with output to the default
conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.
If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.
Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in
milliseconds). Defaults to 300ms.
Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults
to 500ms. With shorter values the model will respond more quickly,
but may jump in on short pauses from the user.
Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A
higher threshold will require louder audio to activate the model, and
thus might perform better in noisy environments.
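A sketch of a Server VAD configuration tuned for a noisy room follows. The names of the padding, silence, and threshold fields (prefix_padding_ms, silence_duration_ms, threshold) are assumptions matching the descriptions above; verify them against your SDK.

# Sketch: server_vad turn detection for a noisy environment (values illustrative).
turn_detection = {
    "type": "server_vad",
    "threshold": 0.7,            # require louder audio to activate the model
    "prefix_padding_ms": 300,    # audio included before detected speech
    "silence_duration_ms": 700,  # longer silence before the turn is considered over
    "create_response": True,     # respond automatically when speech stops
    "interrupt_response": True,  # cancel an in-progress response when speech starts
    "idle_timeout_ms": 10000,    # prompt the user after 10 s of post-response silence
}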
SemanticVad = object { type, create_response, eagerness, interrupt_response } Server-side semantic turn detection which uses a model to determine when the user has finished speaking.
Server-side semantic turn detection which uses a model to determine when the user has finished speaking.
Type of turn detection, semantic_vad to turn on Semantic VAD.
Whether or not to automatically generate a response when a VAD stop event occurs.
eagerness: optional "low" or "medium" or "high" or "auto"Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.
Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.
Whether or not to automatically interrupt any ongoing response with output to the default
conversation (i.e. conversation of auto) when a VAD start event occurs.
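For comparison, a Semantic VAD configuration with low eagerness (a sketch; values illustrative) waits longer for the user to continue speaking:

# Sketch: semantic_vad turn detection for a patient, less interruptive assistant.
turn_detection = {
    "type": "semantic_vad",
    "eagerness": "low",          # wait up to ~8 s for the user to continue
    "create_response": True,
    "interrupt_response": True,
}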
voice: optional string or "alloy" or "ash" or "ballad" or 7 moreThe voice the model uses to respond. Voice cannot be changed during the
session once the model has responded with audio at least once. Current
voice options are alloy, ash, ballad, coral, echo, sage,
shimmer, and verse.
The voice the model uses to respond. Voice cannot be changed during the
session once the model has responded with audio at least once. Current
voice options are alloy, ash, ballad, coral, echo, sage,
shimmer, and verse.
UnionMember1 = "alloy" or "ash" or "ballad" or 7 moreThe voice the model uses to respond. Voice cannot be changed during the
session once the model has responded with audio at least once. Current
voice options are alloy, ash, ballad, coral, echo, sage,
shimmer, and verse.
The voice the model uses to respond. Voice cannot be changed during the
session once the model has responded with audio at least once. Current
voice options are alloy, ash, ballad, coral, echo, sage,
shimmer, and verse.
RealtimeSessionCreateRequest = object { type, audio, include, 9 more } Realtime session object configuration.
Realtime session object configuration.
The type of session to create. Always realtime for the Realtime API.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.
Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.
max_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
model: optional string or "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
The Realtime model used for this session.
UnionMember1 = "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
The Realtime model used for this session.
output_modalities: optional array of "text" or "audio"The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
Reference to a prompt template and its variables. Learn more.
How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.
Tools available to the model.
The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.
Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.
Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
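Putting the fields above together, a sketch of a realtime session configuration might look like the following (field names follow RealtimeSessionCreateRequest; the instruction text and token limit are illustrative):

# Sketch: a RealtimeSessionCreateRequest-shaped configuration.
session = {
    "type": "realtime",
    "model": "gpt-realtime",
    "instructions": "Be extremely succinct and act friendly.",
    "output_modalities": ["audio"],   # audio plus a transcript
    "max_output_tokens": 1024,        # or "inf" for the model maximum
    "tool_choice": "auto",
    "truncation": "auto",
    "tracing": "auto",
}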
RealtimeToolChoiceConfig = ToolChoiceOptions or ToolChoiceFunction { name, type } or ToolChoiceMcp { server_label, type, name } How the model chooses tools. Provide one of the string modes or force a specific
function/MCP tool.
How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.
ToolChoiceOptions = "none" or "auto" or "required"Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
ToolChoiceFunction = object { name, type } Use this option to force the model to call a specific function.
Use this option to force the model to call a specific function.
The name of the function to call.
For function calling, the type is always function.
ToolChoiceMcp = object { server_label, type, name } Use this option to force the model to call a specific tool on a remote MCP server.
Use this option to force the model to call a specific tool on a remote MCP server.
The label of the MCP server to use.
For MCP tools, the type is always mcp.
The name of the tool to call on the server.
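The three tool_choice shapes described above can be expressed as follows (a sketch; the function and MCP tool names are illustrative, not part of the API):

# Sketch: the string mode, function, and MCP forms of tool_choice.
tool_choice_mode = "required"                                                      # string mode
tool_choice_function = {"type": "function", "name": "get_weather"}                 # force a specific function
tool_choice_mcp = {"type": "mcp", "server_label": "deepwiki", "name": "search"}    # force a specific MCP tool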
Tools available to the model.
Tools available to the model.
RealtimeFunctionTool = object { description, name, parameters, type }
The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).
The name of the function.
Parameters of the function in JSON Schema.
The type of the tool, i.e. function.
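A sketch of a RealtimeFunctionTool entry follows; the get_weather name and its JSON Schema parameters are illustrative:

# Sketch: a function tool definition with JSON Schema parameters.
get_weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Look up current weather. Tell the user you are checking before calling.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}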
McpTool = object { server_label, type, allowed_tools, 6 more } Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
A label for this MCP server, used to identify it in tool calls.
The type of the MCP tool. Always mcp.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
List of allowed tool names or a filter object.
A string array of allowed tool names.
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
never: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
McpToolApprovalSetting = "always" or "never"Specify a single approval policy for all tools. One of always or
never. When set to always, all tools will require approval. When
set to never, all tools will not require approval.
Specify a single approval policy for all tools. One of always or
never. When set to always, all tools will require approval. When
set to never, all tools will not require approval.
Optional description of the MCP server, used to provide more context.
The URL for the MCP server. One of server_url or connector_id must be
provided.
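A sketch of an MCP tool entry using a service connector, an allowed-tool filter, and a per-tool approval policy follows. The server label, OAuth token placeholder, and connector choice are illustrative:

# Sketch: an MCP tool configuration (values illustrative).
mcp_tool = {
    "type": "mcp",
    "server_label": "calendar",
    "connector_id": "connector_googlecalendar",  # alternatively, provide server_url
    "authorization": "<oauth-access-token>",     # your application handles the OAuth flow
    "allowed_tools": {"read_only": True},        # only tools annotated with readOnlyHint
    "require_approval": "never",                 # or an { always, never } filter object
}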
RealtimeToolsConfigUnion = RealtimeFunctionTool { description, name, parameters, type } or object { server_label, type, allowed_tools, 6 more } Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
RealtimeFunctionTool = object { description, name, parameters, type }
The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).
The name of the function.
Parameters of the function in JSON Schema.
The type of the tool, i.e. function.
McpTool = object { server_label, type, allowed_tools, 6 more } Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
A label for this MCP server, used to identify it in tool calls.
The type of the MCP tool. Always mcp.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
List of allowed tool names or a filter object.
A string array of allowed tool names.
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
never: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
McpToolApprovalSetting = "always" or "never"Specify a single approval policy for all tools. One of always or
never. When set to always, all tools will require approval. When
set to never, all tools will not require approval.
Specify a single approval policy for all tools. One of always or
never. When set to always, all tools will require approval. When
set to never, all tools will not require approval.
Optional description of the MCP server, used to provide more context.
The URL for the MCP server. One of server_url or connector_id must be
provided.
RealtimeTracingConfig = "auto" or object { group_id, metadata, workflow_name } The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once
tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
Enables tracing and sets default values for tracing configuration options. Always auto.
TracingConfiguration = object { group_id, metadata, workflow_name } Granular configuration for tracing.
Granular configuration for tracing.
The group id to attach to this trace to enable filtering and grouping in the Traces Dashboard.
The arbitrary metadata to attach to this trace to enable filtering in the Traces Dashboard.
The name of the workflow to attach to this trace. This is used to name the trace in the Traces Dashboard.
RealtimeTranscriptionSessionAudio = object { input } Configuration for input and output audio.
Configuration for input and output audio.
RealtimeTranscriptionSessionAudioInput = object { format, noise_reduction, transcription, turn_detection }
The PCM audio format. Only a 24kHz sample rate is supported.
noise_reduction: optional object { type } Configuration for input audio noise reduction. This can be set to null to turn off.
Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model.
Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
Configuration for input audio noise reduction. This can be set to null to turn off.
Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model.
Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.
Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.
Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
RealtimeTranscriptionSessionAudioInputTurnDetection = object { type, create_response, idle_timeout_ms, 4 more } or object { type, create_response, eagerness, interrupt_response } Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.
Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.
Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
ServerVad = object { type, create_response, idle_timeout_ms, 4 more } Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.
Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.
Type of turn detection, server_vad to turn on simple Server VAD.
Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.
If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.
Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.
The timeout value will be applied after the last model response's audio has finished playing,
i.e. it's set to the response.done time plus audio playback duration.
An input_audio_buffer.timeout_triggered event (plus events
associated with the Response) will be emitted when the timeout is reached.
Idle timeout is currently only supported for server_vad mode.
Whether or not to automatically interrupt (cancel) any ongoing response with output to the default
conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.
If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.
Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in
milliseconds). Defaults to 300ms.
Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults
to 500ms. With shorter values the model will respond more quickly,
but may jump in on short pauses from the user.
Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A
higher threshold will require louder audio to activate the model, and
thus might perform better in noisy environments.
SemanticVad = object { type, create_response, eagerness, interrupt_response } Server-side semantic turn detection which uses a model to determine when the user has finished speaking.
Server-side semantic turn detection which uses a model to determine when the user has finished speaking.
Type of turn detection, semantic_vad to turn on Semantic VAD.
Whether or not to automatically generate a response when a VAD stop event occurs.
eagerness: optional "low" or "medium" or "high" or "auto"Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.
Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.
Whether or not to automatically interrupt any ongoing response with output to the default
conversation (i.e. conversation of auto) when a VAD start event occurs.
RealtimeTranscriptionSessionCreateRequest = object { type, audio, include } Realtime transcription session object configuration.
Realtime transcription session object configuration.
The type of session to create. Always transcription for transcription sessions.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
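A sketch of a transcription-only session configuration follows. The nesting of audio.input follows RealtimeTranscriptionSessionAudio above, but the exact shape of the format field is an assumption; verify it against your SDK.

# Sketch: a RealtimeTranscriptionSessionCreateRequest-shaped configuration.
transcription_session = {
    "type": "transcription",
    "audio": {
        "input": {
            "format": {"type": "audio/pcm", "rate": 24000},   # format shape is an assumption
            "noise_reduction": {"type": "near_field"},
            "transcription": {"model": "gpt-4o-transcribe", "language": "en"},
            "turn_detection": {"type": "server_vad"},
        }
    },
    "include": ["item.input_audio_transcription.logprobs"],
}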
RealtimeTruncation = "auto" or "disabled" or object { retention_ratio, type, token_limits } When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.
Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.
Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.
Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.
Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
UnionMember0 = "auto" or "disabled"The truncation strategy to use for the session. auto is the default truncation strategy. disabled will disable truncation and emit errors when the conversation exceeds the input token limit.
The truncation strategy to use for the session. auto is the default truncation strategy. disabled will disable truncation and emit errors when the conversation exceeds the input token limit.
RetentionRatioTruncation = object { retention_ratio, type, token_limits } Retain a fraction of the conversation tokens when the conversation exceeds the input token limit. This allows you to amortize truncations across multiple turns, which can help improve cached token usage.
Retain a fraction of the conversation tokens when the conversation exceeds the input token limit. This allows you to amortize truncations across multiple turns, which can help improve cached token usage.
Fraction of post-instruction conversation tokens to retain (0.0 - 1.0) when the conversation exceeds the input token limit. Setting this to 0.8 means that messages will be dropped until 80% of the maximum allowed tokens are used. This helps reduce the frequency of truncations and improve cache rates.
Use retention ratio truncation.
token_limits: optional object { post_instructions } Optional custom token limits for this truncation strategy. If not provided, the model's default token limits will be used.
Optional custom token limits for this truncation strategy. If not provided, the model's default token limits will be used.
Maximum tokens allowed in the conversation after instructions (which include tool definitions). For example, setting this to 5,000 would mean that truncation would occur when the conversation exceeds 5,000 tokens after instructions. This cannot be higher than the model's context window size minus the maximum output tokens.
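A sketch of a retention-ratio truncation configuration follows; the type value "retention_ratio" is inferred from the object name above and the numeric limits are illustrative:

# Sketch: keep 80% of post-instruction tokens, cap the context at 5,000 of them.
truncation = {
    "type": "retention_ratio",
    "retention_ratio": 0.8,
    "token_limits": {"post_instructions": 5000},
}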
ResponseAudioDeltaEvent = object { content_index, delta, event_id, 4 more } Returned when the model-generated audio is updated.
Returned when the model-generated audio is updated.
The index of the content part in the item's content array.
Base64-encoded audio data delta.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.output_audio.delta.
ResponseAudioDoneEvent = object { content_index, event_id, item_id, 3 more } Returned when the model-generated audio is done. Also emitted when a Response
is interrupted, incomplete, or cancelled.
Returned when the model-generated audio is done. Also emitted when a Response is interrupted, incomplete, or cancelled.
The index of the content part in the item's content array.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.output_audio.done.
ResponseAudioTranscriptDeltaEvent = object { content_index, delta, event_id, 4 more } Returned when the model-generated transcription of audio output is updated.
Returned when the model-generated transcription of audio output is updated.
The index of the content part in the item's content array.
The transcript delta.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.output_audio_transcript.delta.
ResponseAudioTranscriptDoneEvent = object { content_index, event_id, item_id, 4 more } Returned when the model-generated transcription of audio output is done
streaming. Also emitted when a Response is interrupted, incomplete, or
cancelled.
Returned when the model-generated transcription of audio output is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
The index of the content part in the item's content array.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The final transcript of the audio.
The event type, must be response.output_audio_transcript.done.
ResponseCancelEvent = object { type, event_id, response_id } Send this event to cancel an in-progress response. The server will respond
with a response.done event with a status of response.status=cancelled. If
there is no response to cancel, the server will respond with an error. It's safe
to call response.cancel even if no response is in progress; an error will be
returned and the session will remain unaffected.
Send this event to cancel an in-progress response. The server will respond
with a response.done event with a status of response.status=cancelled. If
there is no response to cancel, the server will respond with an error. It's safe
to call response.cancel even if no response is in progress; an error will be
returned and the session will remain unaffected.
The event type, must be response.cancel.
Optional client-generated ID used to identify this event.
A specific response ID to cancel - if not provided, will cancel an in-progress response in the default conversation.
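A sketch of a response.cancel client event follows; the event_id and response_id values are illustrative and both fields are optional:

import json

# Sketch: cancel an in-progress response over an already-connected websocket (assumed).
cancel_event = {
    "type": "response.cancel",
    "event_id": "evt_client_001",   # optional client-generated ID
    # "response_id": "resp_123",    # optional: target a specific response
}
payload = json.dumps(cancel_event)  # send this string over the websocket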
ResponseContentPartAddedEvent = object { content_index, event_id, item_id, 4 more } Returned when a new content part is added to an assistant message item during
response generation.
Returned when a new content part is added to an assistant message item during response generation.
The index of the content part in the item's content array.
The unique ID of the server event.
The ID of the item to which the content part was added.
The index of the output item in the response.
part: object { audio, text, transcript, type } The content part that was added.
The content part that was added.
Base64-encoded audio data (if type is "audio").
The text content (if type is "text").
The transcript of the audio (if type is "audio").
type: optional "audio" or "text"The content type ("text", "audio").
The content type ("text", "audio").
The ID of the response.
The event type, must be response.content_part.added.
ResponseContentPartDoneEvent = object { content_index, event_id, item_id, 4 more } Returned when a content part is done streaming in an assistant message item.
Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when a content part is done streaming in an assistant message item. Also emitted when a Response is interrupted, incomplete, or cancelled.
The index of the content part in the item's content array.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
part: object { audio, text, transcript, type } The content part that is done.
The content part that is done.
Base64-encoded audio data (if type is "audio").
The text content (if type is "text").
The transcript of the audio (if type is "audio").
type: optional "audio" or "text"The content type ("text", "audio").
The content type ("text", "audio").
The ID of the response.
The event type, must be response.content_part.done.
ResponseCreateEvent = object { type, event_id, response } This event instructs the server to create a Response, which means triggering
model inference. When in Server VAD mode, the server will create Responses
automatically.
A Response will include at least one Item, and may have two, in which case
the second will be a function call. These Items will be appended to the
conversation history by default.
The server will respond with a response.created event, events for Items
and content created, and finally a response.done event to indicate the
Response is complete.
The response.create event includes inference configuration like
instructions and tools. If these are set, they will override the Session's
configuration for this Response only.
Responses can be created out-of-band of the default Conversation, meaning that they can
have arbitrary input, and it's possible to disable writing the output to the Conversation.
Only one Response can write to the default Conversation at a time, but otherwise multiple
Responses can be created in parallel. The metadata field is a good way to disambiguate
multiple simultaneous Responses.
Clients can set conversation to none to create a Response that does not write to the default
Conversation. Arbitrary input can be provided with the input field, which is an array accepting
raw Items and references to existing Items.
This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.
A Response will include at least one Item, and may have two, in which case the second will be a function call. These Items will be appended to the conversation history by default.
The server will respond with a response.created event, events for Items
and content created, and finally a response.done event to indicate the
Response is complete.
The response.create event includes inference configuration like
instructions and tools. If these are set, they will override the Session's
configuration for this Response only.
Responses can be created out-of-band of the default Conversation, meaning that they can
have arbitrary input, and it's possible to disable writing the output to the Conversation.
Only one Response can write to the default Conversation at a time, but otherwise multiple
Responses can be created in parallel. The metadata field is a good way to disambiguate
multiple simultaneous Responses.
Clients can set conversation to none to create a Response that does not write to the default
Conversation. Arbitrary input can be provided with the input field, which is an array accepting
raw Items and references to existing Items.
The event type, must be response.create.
Optional client-generated ID used to identify this event.
Create a new Realtime response with these parameters
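A sketch of an out-of-band response.create that does not write to the default conversation follows; the metadata keys and instruction text are illustrative, and the response object accepts per-response overrides as described above:

# Sketch: an out-of-band Response that stays out of the default conversation.
response_create = {
    "type": "response.create",
    "response": {
        "conversation": "none",                      # do not write output to the default conversation
        "metadata": {"purpose": "classification"},   # disambiguate simultaneous responses
        "instructions": "Classify the user's sentiment as positive, neutral, or negative.",
        "input": [],                                  # arbitrary items or references to existing items
    },
}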
ResponseCreatedEvent = object { event_id, response, type } Returned when a new Response is created. The first event of response creation,
where the response is in an initial state of in_progress.
Returned when a new Response is created. The first event of response creation,
where the response is in an initial state of in_progress.
The unique ID of the server event.
The response resource.
The event type, must be response.created.
ResponseDoneEvent = object { event_id, response, type } Returned when a Response is done streaming. Always emitted, no matter the
final state. The Response object included in the response.done event will
include all output Items in the Response but will omit the raw audio data.
Clients should check the status field of the Response to determine if it was successful
(completed) or if there was another outcome: cancelled, failed, or incomplete.
A response will contain all output items that were generated during the response, excluding
any audio content.
Returned when a Response is done streaming. Always emitted, no matter the
final state. The Response object included in the response.done event will
include all output Items in the Response but will omit the raw audio data.
Clients should check the status field of the Response to determine if it was successful
(completed) or if there was another outcome: cancelled, failed, or incomplete.
A response will contain all output items that were generated during the response, excluding any audio content.
The unique ID of the server event.
The response resource.
The event type, must be response.done.
ResponseFunctionCallArgumentsDeltaEvent = object { call_id, delta, event_id, 4 more } Returned when the model-generated function call arguments are updated.
Returned when the model-generated function call arguments are updated.
The ID of the function call.
The arguments delta as a JSON string.
The unique ID of the server event.
The ID of the function call item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.function_call_arguments.delta.
ResponseFunctionCallArgumentsDoneEvent = object { arguments, call_id, event_id, 5 more } Returned when the model-generated function call arguments are done streaming.
Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when the model-generated function call arguments are done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
The final arguments as a JSON string.
The ID of the function call.
The unique ID of the server event.
The ID of the function call item.
The name of the function that was called.
The index of the output item in the response.
The ID of the response.
The event type, must be response.function_call_arguments.done.
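For example, a handler for response.function_call_arguments.done might read the finalized arguments like the sketch below; the event dict is assumed to come from your websocket message loop:

import json

# Sketch: parse the finalized function-call arguments from the event payload.
def on_function_call_done(event: dict) -> None:
    name = event["name"]                   # the function that was called
    args = json.loads(event["arguments"])  # final arguments, delivered as a JSON string
    call_id = event["call_id"]             # identifies the call when returning output
    print(f"model requested {name}({args}) for call {call_id}")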
ResponseMcpCallArgumentsDelta = object { delta, event_id, item_id, 4 more } Returned when MCP tool call arguments are updated during response generation.
Returned when MCP tool call arguments are updated during response generation.
The JSON-encoded arguments delta.
The unique ID of the server event.
The ID of the MCP tool call item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.mcp_call_arguments.delta.
If present, indicates the delta text was obfuscated.
ResponseMcpCallArgumentsDone = object { arguments, event_id, item_id, 3 more } Returned when MCP tool call arguments are finalized during response generation.
Returned when MCP tool call arguments are finalized during response generation.
The final JSON-encoded arguments string.
The unique ID of the server event.
The ID of the MCP tool call item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.mcp_call_arguments.done.
ResponseMcpCallCompleted = object { event_id, item_id, output_index, type } Returned when an MCP tool call has completed successfully.
Returned when an MCP tool call has completed successfully.
The unique ID of the server event.
The ID of the MCP tool call item.
The index of the output item in the response.
The event type, must be response.mcp_call.completed.
ResponseMcpCallFailed = object { event_id, item_id, output_index, type } Returned when an MCP tool call has failed.
Returned when an MCP tool call has failed.
The unique ID of the server event.
The ID of the MCP tool call item.
The index of the output item in the response.
The event type, must be response.mcp_call.failed.
ResponseMcpCallInProgress = object { event_id, item_id, output_index, type } Returned when an MCP tool call has started and is in progress.
Returned when an MCP tool call has started and is in progress.
The unique ID of the server event.
The ID of the MCP tool call item.
The index of the output item in the response.
The event type, must be response.mcp_call.in_progress.
ResponseOutputItemAddedEvent = object { event_id, item, output_index, 2 more } Returned when a new Item is created during Response generation.
Returned when a new Item is created during Response generation.
The unique ID of the server event.
A single item within a Realtime conversation.
The index of the output item in the Response.
The ID of the Response to which the item belongs.
The event type, must be response.output_item.added.
ResponseOutputItemDoneEvent = object { event_id, item, output_index, 2 more } Returned when an Item is done streaming. Also emitted when a Response is
interrupted, incomplete, or cancelled.
Returned when an Item is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
The unique ID of the server event.
A single item within a Realtime conversation.
The index of the output item in the Response.
The ID of the Response to which the item belongs.
The event type, must be response.output_item.done.
ResponseTextDeltaEvent = object { content_index, delta, event_id, 4 more } Returned when the text value of an "output_text" content part is updated.
Returned when the text value of an "output_text" content part is updated.
The index of the content part in the item's content array.
The text delta.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The event type, must be response.output_text.delta.
ResponseTextDoneEvent = object { content_index, event_id, item_id, 4 more } Returned when the text value of an "output_text" content part is done streaming. Also
emitted when a Response is interrupted, incomplete, or cancelled.
Returned when the text value of an "output_text" content part is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
The index of the content part in the item's content array.
The unique ID of the server event.
The ID of the item.
The index of the output item in the response.
The ID of the response.
The final text content.
The event type, must be response.output_text.done.
SessionCreatedEvent = object { event_id, session, type } Returned when a Session is created. Emitted automatically when a new
connection is established as the first server event. This event will contain
the default Session configuration.
Returned when a Session is created. Emitted automatically when a new connection is established as the first server event. This event will contain the default Session configuration.
The unique ID of the server event.
session: RealtimeSessionCreateRequest { type, audio, include, 9 more } or RealtimeTranscriptionSessionCreateRequest { type, audio, include } The session configuration.
The session configuration.
RealtimeSessionCreateRequest = object { type, audio, include, 9 more } Realtime session object configuration.
Realtime session object configuration.
The type of session to create. Always realtime for the Realtime API.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.
Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.
max_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
model: optional string or "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
The Realtime model used for this session.
UnionMember1 = "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
The Realtime model used for this session.
output_modalities: optional array of "text" or "audio"The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
Reference to a prompt template and its variables. Learn more.
How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.
Tools available to the model.
The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.
Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.
Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
RealtimeTranscriptionSessionCreateRequest = object { type, audio, include } Realtime transcription session object configuration.
Realtime transcription session object configuration.
The type of session to create. Always transcription for transcription sessions.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The event type, must be session.created.
SessionUpdateEvent = object { session, type, event_id } Send this event to update the session’s configuration.
The client may send this event at any time to update any field
except for voice and model. voice can be updated only if there have been no other audio outputs yet.
When the server receives a session.update, it will respond
with a session.updated event showing the full, effective configuration.
Only the fields that are present in the session.update are updated. To clear a field like
instructions, pass an empty string. To clear a field like tools, pass an empty array.
To clear a field like turn_detection, pass null.
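A minimal sketch of such an update, assuming ws is an already-open Realtime WebSocket and the instruction text is illustrative:

    // Sketch: update instructions mid-session, clear tools with an empty array,
    // and disable turn detection by passing null, per the clearing rules above.
    function updateSession(ws: WebSocket): void {
      ws.send(
        JSON.stringify({
          type: "session.update",
          event_id: "evt_client_001", // optional client-generated ID (illustrative)
          session: {
            type: "realtime",
            instructions: "Be extremely succinct.",
            tools: [], // clears any previously configured tools
            audio: { input: { turn_detection: null } }, // turns off turn detection
          },
        }),
      );
    }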
session: RealtimeSessionCreateRequest { type, audio, include, 9 more } or RealtimeTranscriptionSessionCreateRequest { type, audio, include } Update the Realtime session. Choose either a realtime
session or a transcription session.
RealtimeSessionCreateRequest = object { type, audio, include, 9 more } Realtime session object configuration.
The type of session to create. Always realtime for the Realtime API.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format, (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.
Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.
max_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
model: optional string or "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
UnionMember1 = "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
output_modalities: optional array of "text" or "audio"The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
Reference to a prompt template and its variables. Learn more.
How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.
Tools available to the model.
Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.
Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.
Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
RealtimeTranscriptionSessionCreateRequest = object { type, audio, include } Realtime transcription session object configuration.
The type of session to create. Always transcription for transcription sessions.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The event type, must be session.update.
Optional client-generated ID used to identify this event. This is an arbitrary string that a client may assign. It will be passed back if there is an error with the event, but the corresponding session.updated event will not include it.
SessionUpdatedEvent = object { event_id, session, type } Returned when a session is updated with a session.update event, unless
there is an error.
The unique ID of the server event.
session: RealtimeSessionCreateRequest { type, audio, include, 9 more } or RealtimeTranscriptionSessionCreateRequest { type, audio, include } The session configuration.
RealtimeSessionCreateRequest = object { type, audio, include, 9 more } Realtime session object configuration.
The type of session to create. Always realtime for the Realtime API.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format, (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.
Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.
max_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
model: optional string or "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
UnionMember1 = "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
output_modalities: optional array of "text" or "audio"The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
Reference to a prompt template and its variables. Learn more.
How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.
Tools available to the model.
Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.
Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.
Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
RealtimeTranscriptionSessionCreateRequest = object { type, audio, include } Realtime transcription session object configuration.
The type of session to create. Always transcription for transcription sessions.
Configuration for input and output audio.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The event type, must be session.updated.
TranscriptionSessionUpdate = object { session, type, event_id } Send this event to update a transcription session.
session: object { include, input_audio_format, input_audio_noise_reduction, 2 more } Realtime transcription session object configuration.
The set of items to include in the transcription. Currently available items are:
item.input_audio_transcription.logprobs
input_audio_format: optional "pcm16" or "g711_ulaw" or "g711_alaw"The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw.
For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate,
single channel (mono), and little-endian byte order.
input_audio_noise_reduction: optional object { type } Configuration for input audio noise reduction. This can be set to null to turn off.
Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model.
Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
Configuration for input audio transcription. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.
turn_detection: optional object { prefix_padding_ms, silence_duration_ms, threshold, type } Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.
Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.
Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.
Type of turn detection. Only server_vad is currently supported for transcription sessions.
The event type, must be transcription_session.update.
Optional client-generated ID used to identify this event.
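As a sketch (values are illustrative, and the input_audio_transcription field name is assumed from the schema summary above), a transcription_session.update payload might look like:

    // Sketch: tune a transcription session: noise reduction, transcription model,
    // server VAD thresholds, and logprobs in server outputs.
    function updateTranscriptionSession(ws: WebSocket): void {
      ws.send(
        JSON.stringify({
          type: "transcription_session.update",
          session: {
            include: ["item.input_audio_transcription.logprobs"],
            input_audio_format: "pcm16",
            input_audio_noise_reduction: { type: "near_field" },
            input_audio_transcription: {
              model: "gpt-4o-mini-transcribe",
              language: "en",
              prompt: "expect words related to technology",
            },
            turn_detection: {
              type: "server_vad",
              threshold: 0.6,
              prefix_padding_ms: 300,
              silence_duration_ms: 400,
            },
          },
        }),
      );
    }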
TranscriptionSessionUpdatedEvent = object { event_id, session, type } Returned when a transcription session is updated with a transcription_session.update event, unless
there is an error.
The unique ID of the server event.
session: object { client_secret, input_audio_format, input_audio_transcription, 2 more } A new Realtime transcription session configuration.
When a session is created on the server via REST API, the session object
also contains an ephemeral key. Default TTL for keys is 10 minutes. This
property is not present when a session is updated via the WebSocket API.
client_secret: object { expires_at, value } Ephemeral key returned by the API. Only present when the session is
created on the server via REST API.
Timestamp for when the token expires. Currently, all tokens expire after one minute.
Ephemeral key usable in client environments to authenticate connections to the Realtime API. Use this in client-side environments rather than a standard API token, which should only be used server-side.
The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw.
Configuration of the transcription model.
modalities: optional array of "text" or "audio"The set of modalities the model can respond with. To disable audio,
set this to ["text"].
turn_detection: optional object { prefix_padding_ms, silence_duration_ms, threshold, type } Configuration for turn detection. Can be set to null to turn off. Server
VAD means that the model will detect the start and end of speech based on
audio volume and respond at the end of user speech.
Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.
Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.
Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.
Type of turn detection, only server_vad is currently supported.
The event type, must be transcription_session.updated.
Realtime Client Secrets
Create client secret
Models
RealtimeSessionClientSecret = object { expires_at, value } Ephemeral key returned by the API.
Timestamp for when the token expires. Currently, all tokens expire after one minute.
Ephemeral key usable in client environments to authenticate connections to the Realtime API. Use this in client-side environments rather than a standard API token, which should only be used server-side.
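A server-side sketch of minting a client secret and handing its value to a browser client; the /v1/realtime/client_secrets path is inferred from the Create client secret endpoint above and should be treated as an assumption.

    // Sketch: create an ephemeral client secret server-side with a standard API key,
    // then return the value to client-side code. Never ship the API key itself.
    async function createClientSecret(apiKey: string): Promise<string> {
      const res = await fetch("https://api.openai.com/v1/realtime/client_secrets", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${apiKey}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ session: { type: "realtime", model: "gpt-realtime" } }),
      });
      const data = await res.json();
      return data.value; // ephemeral key; expires_at indicates when it stops working
    }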
RealtimeSessionCreateResponse = object { client_secret, type, audio, 10 more } A new Realtime session configuration, with an ephemeral key. Default TTL
for keys is one minute.
Ephemeral key returned by the API.
The type of session to create. Always realtime for the Realtime API.
audio: optional object { input, output } Configuration for input and output audio.
input: optional object { format, noise_reduction, transcription, turn_detection }
The format of the input audio.
noise_reduction: optional object { type } Configuration for input audio noise reduction. This can be set to null to turn off.
Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model.
Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
Configuration for input audio transcription. This defaults to off and, once enabled, can be set to null to turn it off again. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance about the input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.
turn_detection: optional object { type, create_response, idle_timeout_ms, 4 more } or object { type, create_response, eagerness, interrupt_response } Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.
Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.
Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.
ServerVad = object { type, create_response, idle_timeout_ms, 4 more } Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.
Type of turn detection, server_vad to turn on simple Server VAD.
Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.
If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.
Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.
The timeout value will be applied after the last model response's audio has finished playing,
i.e. it's set to the response.done time plus audio playback duration.
An input_audio_buffer.timeout_triggered event (plus events
associated with the Response) will be emitted when the timeout is reached.
Idle timeout is currently only supported for server_vad mode.
Whether or not to automatically interrupt (cancel) any ongoing response with output to the default
conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.
If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.
Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in
milliseconds). Defaults to 300ms.
Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults
to 500ms. With shorter values the model will respond more quickly,
but may jump in on short pauses from the user.
Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.
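Taken together, a Server VAD configuration sent in a session update might look like the following sketch (values are illustrative; ws is an open Realtime WebSocket):

    // Sketch: simple Server VAD with a higher threshold for a noisy room, a longer
    // silence window, and an idle timeout that prompts the user after 15 seconds.
    function configureServerVad(ws: WebSocket): void {
      ws.send(
        JSON.stringify({
          type: "session.update",
          session: {
            type: "realtime",
            audio: {
              input: {
                turn_detection: {
                  type: "server_vad",
                  threshold: 0.6,
                  prefix_padding_ms: 300,
                  silence_duration_ms: 700,
                  create_response: true,
                  interrupt_response: true,
                  idle_timeout_ms: 15000,
                },
              },
            },
          },
        }),
      );
    }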
SemanticVad = object { type, create_response, eagerness, interrupt_response } Server-side semantic turn detection which uses a model to determine when the user has finished speaking.
Type of turn detection, semantic_vad to turn on Semantic VAD.
Whether or not to automatically generate a response when a VAD stop event occurs.
eagerness: optional "low" or "medium" or "high" or "auto"Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.
Whether or not to automatically interrupt any ongoing response with output to the default
conversation (i.e. conversation of auto) when a VAD start event occurs.
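A comparable sketch for Semantic VAD, using low eagerness so the model waits longer before treating the turn as finished (values are illustrative):

    // Sketch: Semantic VAD with low eagerness; the model may wait up to ~8s
    // for the user to continue before responding.
    function configureSemanticVad(ws: WebSocket): void {
      ws.send(
        JSON.stringify({
          type: "session.update",
          session: {
            type: "realtime",
            audio: {
              input: {
                turn_detection: {
                  type: "semantic_vad",
                  eagerness: "low",
                  create_response: true,
                  interrupt_response: true,
                },
              },
            },
          },
        }),
      );
    }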
output: optional object { format, speed, voice }
The format of the output audio.
The speed of the model's spoken response as a multiple of the original speed. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.
This parameter is a post-processing adjustment to the audio after it is generated; it's also possible to prompt the model to speak faster or slower.
voice: optional string or "alloy" or "ash" or "ballad" or 7 moreThe voice the model uses to respond. Voice cannot be changed during the
session once the model has responded with audio at least once. Current
voice options are alloy, ash, ballad, coral, echo, sage,
shimmer, verse, marin, and cedar. We recommend marin and cedar for
best quality.
UnionMember1 = "alloy" or "ash" or "ballad" or 7 moreThe voice the model uses to respond. Voice cannot be changed during the
session once the model has responded with audio at least once. Current
voice options are alloy, ash, ballad, coral, echo, sage,
shimmer, verse, marin, and cedar. We recommend marin and cedar for
best quality.
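For example, a session update could select a recommended voice and slow playback slightly; a sketch, assuming the model has not yet produced any audio in the session:

    // Sketch: pick a recommended voice and apply a post-processing speed adjustment.
    // speed must stay within 0.25-1.5 and only changes between model turns.
    function configureOutputAudio(ws: WebSocket): void {
      ws.send(
        JSON.stringify({
          type: "session.update",
          session: {
            type: "realtime",
            audio: { output: { voice: "marin", speed: 0.9 } },
          },
        }),
      );
    }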
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format, (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.
Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.
max_output_tokens: optional number or "inf"Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.
model: optional string or "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
UnionMember1 = "gpt-realtime" or "gpt-realtime-2025-08-28" or "gpt-4o-realtime-preview" or 11 moreThe Realtime model used for this session.
output_modalities: optional array of "text" or "audio"The set of modalities the model can respond with. It defaults to ["audio"], indicating
that the model will respond with audio plus a transcript. ["text"] can be used to make
the model respond with text only. It is not possible to request both text and audio at the same time.
Reference to a prompt template and its variables. Learn more.
tool_choice: optional ToolChoiceOptions or ToolChoiceFunction { name, type } or ToolChoiceMcp { server_label, type, name } How the model chooses tools. Provide one of the string modes or force a specific
function/MCP tool.
ToolChoiceOptions = "none" or "auto" or "required"Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
ToolChoiceFunction = object { name, type } Use this option to force the model to call a specific function.
The name of the function to call.
For function calling, the type is always function.
ToolChoiceMcp = object { server_label, type, name } Use this option to force the model to call a specific tool on a remote MCP server.
The label of the MCP server to use.
For MCP tools, the type is always mcp.
The name of the tool to call on the server.
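A sketch of forcing a specific function tool via tool_choice (the get_weather name is hypothetical); the string modes none, auto, and required can be passed in the same position:

    // Sketch: force the model to call a specific function tool.
    function forceFunctionCall(ws: WebSocket): void {
      ws.send(
        JSON.stringify({
          type: "session.update",
          session: {
            type: "realtime",
            // or a string mode instead: "auto", "none", or "required"
            tool_choice: { type: "function", name: "get_weather" },
          },
        }),
      );
    }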
tools: optional array of RealtimeFunctionTool { description, name, parameters, type } or object { server_label, type, allowed_tools, 6 more } Tools available to the model.
RealtimeFunctionTool = object { description, name, parameters, type }
The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).
The name of the function.
Parameters of the function in JSON Schema.
The type of the tool, i.e. function.
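A sketch of registering one function tool through session.update; the get_weather tool and its JSON Schema parameters are hypothetical:

    // Sketch: a single function tool with JSON Schema parameters.
    const weatherTool = {
      type: "function",
      name: "get_weather",
      description: "Look up current weather for a city; tell the user you are checking first.",
      parameters: {
        type: "object",
        properties: { city: { type: "string", description: "City name, e.g. Berlin" } },
        required: ["city"],
      },
    };

    // ws is an open Realtime WebSocket.
    function registerTools(ws: WebSocket): void {
      ws.send(
        JSON.stringify({
          type: "session.update",
          session: { type: "realtime", tools: [weatherTool] },
        }),
      );
    }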
McpTool = object { server_label, type, allowed_tools, 6 more } Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
A label for this MCP server, used to identify it in tool calls.
The type of the MCP tool. Always mcp.
allowed_tools: optional array of string or object { read_only, tool_names } List of allowed tool names or a filter object.
A string array of allowed tool names
McpToolFilter = object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: optional "connector_dropbox" or "connector_gmail" or "connector_googlecalendar" or 5 moreIdentifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox:
connector_dropbox
- Gmail:
connector_gmail
- Google Calendar:
connector_googlecalendar
- Google Drive:
connector_googledrive
- Microsoft Teams:
connector_microsoftteams
- Outlook Calendar:
connector_outlookcalendar
- Outlook Email:
connector_outlookemail
- SharePoint:
connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: optional object { always, never } or "always" or "never"Specify which of the MCP server's tools require approval.
McpToolApprovalFilter = object { always, never } Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
never: optional object { read_only, tool_names } A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
List of allowed tool names.
McpToolApprovalSetting = "always" or "never"Specify a single approval policy for all tools. One of always or
never. When set to always, all tools will require approval. When
set to never, all tools will not require approval.
Optional description of the MCP server, used to provide more context.
The URL for the MCP server. One of server_url or connector_id must be
provided.
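A sketch of an MCP tool entry combining an allowed-tool list with a mixed approval policy; the server URL, label, and tool names are hypothetical:

    // Sketch: remote MCP server exposed to the model. Tools annotated read-only
    // never need approval; the hypothetical delete_page tool always does.
    const mcpTool = {
      type: "mcp",
      server_label: "docs",
      server_url: "https://example.com/mcp",
      allowed_tools: ["search_docs", "read_page", "delete_page"],
      require_approval: {
        never: { read_only: true },
        always: { tool_names: ["delete_page"] },
      },
    };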
tracing: optional "auto" or object { group_id, metadata, workflow_name } Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once
tracing is enabled for a session, the configuration cannot be modified.
auto will create a trace for the session with default values for the
workflow name, group id, and metadata.
Enables tracing and sets default values for tracing configuration options. Always auto.
TracingConfiguration = object { group_id, metadata, workflow_name } Granular configuration for tracing.
The group id to attach to this trace to enable filtering and grouping in the Traces Dashboard.
The arbitrary metadata to attach to this trace to enable filtering in the Traces Dashboard.
The name of the workflow to attach to this trace. This is used to name the trace in the Traces Dashboard.
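A sketch of enabling granular tracing when the session is configured (workflow name, group id, and metadata values are illustrative):

    // Sketch: enable tracing with a named workflow so traces can be grouped and
    // filtered in the Traces Dashboard. Tracing cannot be modified once enabled.
    function enableTracing(ws: WebSocket): void {
      ws.send(
        JSON.stringify({
          type: "session.update",
          session: {
            type: "realtime",
            tracing: {
              workflow_name: "support-voice-agent",
              group_id: "customer-1234",
              metadata: { environment: "staging" },
            },
          },
        }),
      );
    }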
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.
Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.
Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
RealtimeTranscriptionSessionCreateResponse = object { id, object, type, 3 more } A Realtime transcription session configuration object.
Unique identifier for the session that looks like sess_1234567890abcdef.
The object type. Always realtime.transcription_session.
The type of session. Always transcription for transcription sessions.
audio: optional object { input } Configuration for input audio for the session.
Configuration for input audio for the session.
input: optional object { format, noise_reduction, transcription, turn_detection }
The PCM audio format. Only a 24kHz sample rate is supported.
noise_reduction: optional object { type } Configuration for input audio noise reduction.
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
Configuration of the transcription model.
Configuration for turn detection. Can be set to null to turn off. Server
VAD means that the model will detect the start and end of speech based on
audio volume and respond at the end of user speech.
Expiration timestamp for the session, in seconds since epoch.
Additional fields to include in server outputs.
item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
RealtimeTranscriptionSessionTurnDetection = object { prefix_padding_ms, silence_duration_ms, threshold, type } Configuration for turn detection. Can be set to null to turn off. Server
VAD means that the model will detect the start and end of speech based on
audio volume and respond at the end of user speech.
Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.
Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.
Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.
Type of turn detection, only server_vad is currently supported.