
Realtime

Models
AudioTranscription { language, model, prompt }
language?: string

The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

model?: (string & {}) | "whisper-1" | "gpt-4o-mini-transcribe" | "gpt-4o-mini-transcribe-2025-12-15" | 2 more

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

One of the following:
(string & {})
"whisper-1" | "gpt-4o-mini-transcribe" | "gpt-4o-mini-transcribe-2025-12-15" | 2 more
"whisper-1"
"gpt-4o-mini-transcribe"
"gpt-4o-mini-transcribe-2025-12-15"
"gpt-4o-transcribe"
"gpt-4o-transcribe-diarize"
prompt?: string

An optional text to guide the model's style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example "expect words related to technology".
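
As a sketch, a transcription configuration using these fields might look like the following (all values are illustrative):

```typescript
// Illustrative AudioTranscription configuration, e.g. for
// session.audio.input.transcription. All values are examples.
const transcription = {
  language: "en", // ISO-639-1; improves accuracy and latency
  model: "gpt-4o-mini-transcribe",
  prompt: "expect words related to technology", // free text for gpt-4o-transcribe models
};
```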

ConversationCreatedEvent { conversation, event_id, type }

Returned when a conversation is created. Emitted right after session creation.

conversation: Conversation { id, object }

The conversation resource.

id?: string

The unique ID of the conversation.

object?: "realtime.conversation"

The object type, must be realtime.conversation.

event_id: string

The unique ID of the server event.

type: "conversation.created"

The event type, must be conversation.created.

ConversationItem = RealtimeConversationItemSystemMessage { content, role, type, 3 more } | RealtimeConversationItemUserMessage { content, role, type, 3 more } | RealtimeConversationItemAssistantMessage { content, role, type, 3 more } | 6 more

A single item within a Realtime conversation.

One of the following:
RealtimeConversationItemSystemMessage { content, role, type, 3 more }

A system message in a Realtime conversation can be used to provide additional context or instructions to the model. This is similar but distinct from the instruction prompt provided at the start of a conversation, as system messages can be added at any point in the conversation. For major changes to the conversation's behavior, use instructions, but for smaller updates (e.g. "the user is now asking about a different topic"), use system messages.

content: Array<Content>

The content of the message.

text?: string

The text content.

type?: "input_text"

The content type. Always input_text for system messages.

role: "system"

The role of the message sender. Always system.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemUserMessage { content, role, type, 3 more }

A user message item in a Realtime conversation.

content: Array<Content>

The content of the message.

audio?: string

Base64-encoded audio bytes (for input_audio), these will be parsed as the format specified in the session input audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.

detail?: "auto" | "low" | "high"

The detail level of the image (for input_image). auto will default to high.

One of the following:
"auto"
"low"
"high"
image_url?: string

Base64-encoded image bytes (for input_image) as a data URI. For example data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA.... Supported formats are PNG and JPEG.

text?: string

The text content (for input_text).

transcript?: string

Transcript of the audio (for input_audio). This is not sent to the model, but will be attached to the message item for reference.

type?: "input_text" | "input_audio" | "input_image"

The content type (input_text, input_audio, or input_image).

One of the following:
"input_text"
"input_audio"
"input_image"
role: "user"

The role of the message sender. Always user.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemAssistantMessage { content, role, type, 3 more }

An assistant message item in a Realtime conversation.

content: Array<Content>

The content of the message.

audio?: string

Base64-encoded audio bytes, these will be parsed as the format specified in the session output audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.

text?: string

The text content.

transcript?: string

The transcript of the audio content, this will always be present if the output type is audio.

type?: "output_text" | "output_audio"

The content type, output_text or output_audio depending on the session output_modalities configuration.

One of the following:
"output_text"
"output_audio"
role: "assistant"

The role of the message sender. Always assistant.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemFunctionCall { arguments, name, type, 4 more }

A function call item in a Realtime conversation.

arguments: string

The arguments of the function call. This is a JSON-encoded string representing the arguments passed to the function, for example {"arg1": "value1", "arg2": 42}.

name: string

The name of the function being called.

type: "function_call"

The type of the item. Always function_call.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

call_id?: string

The ID of the function call.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemFunctionCallOutput { call_id, output, type, 3 more }

A function call output item in a Realtime conversation.

call_id: string

The ID of the function call this output is for.

output: string

The output of the function call, this is free text and can contain any information or simply be empty.

type: "function_call_output"

The type of the item. Always function_call_output.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeMcpApprovalResponse { id, approval_request_id, approve, 2 more }

A Realtime item responding to an MCP approval request.

id: string

The unique ID of the approval response.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

reason?: string | null

Optional reason for the decision.

RealtimeMcpListTools { server_label, tools, type, id }

A Realtime item listing tools available on an MCP server.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

id?: string

The unique ID of the list.

RealtimeMcpToolCall { id, arguments, name, 5 more }

A Realtime item representing an invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

The ID of an associated approval request, if any.

error?: RealtimeMcpProtocolError { code, message, type } | RealtimeMcpToolExecutionError { message, type } | RealtimeMcphttpError { code, message, type } | null

The error from the tool call, if any.

One of the following:
RealtimeMcpProtocolError { code, message, type }
code: number
message: string
type: "protocol_error"
RealtimeMcpToolExecutionError { message, type }
message: string
type: "tool_execution_error"
RealtimeMcphttpError { code, message, type }
code: number
message: string
type: "http_error"
output?: string | null

The output from the tool call.

RealtimeMcpApprovalRequest { id, arguments, name, 2 more }

A Realtime item requesting human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

ConversationItemAdded { event_id, item, type, previous_item_id }

Sent by the server when an Item is added to the default Conversation. This can happen in several cases:

  • When the client sends a conversation.item.create event.
  • When the input audio buffer is committed. In this case the item will be a user message containing the audio from the buffer.
  • When the model is generating a Response. In this case the conversation.item.added event will be sent when the model starts generating a specific Item, and thus it will not yet have any content (and status will be in_progress).

Except for audio data, the event will include the full content of the Item (unless the model is still generating a Response); audio data can be retrieved separately with a conversation.item.retrieve event if necessary.

event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

type: "conversation.item.added"

The event type, must be conversation.item.added.

previous_item_id?: string | null

The ID of the item that precedes this one, if any. This is used to maintain ordering when items are inserted.

ConversationItemCreateEvent { item, type, event_id, previous_item_id }

Add a new Item to the Conversation's context, including messages, function calls, and function call responses. This event can be used both to populate a "history" of the conversation and to add new items mid-stream, but has the current limitation that it cannot populate assistant audio messages.

If successful, the server will respond with a conversation.item.created event, otherwise an error event will be sent.

item: ConversationItem

A single item within a Realtime conversation.

type: "conversation.item.create"

The event type, must be conversation.item.create.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
previous_item_id?: string

The ID of the preceding item after which the new item will be inserted. If not set, the new item will be appended to the end of the conversation.

If set to root, the new item will be added to the beginning of the conversation.

If set to an existing ID, it allows an item to be inserted mid-conversation. If the ID cannot be found, an error will be returned and the item will not be added.
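
For example (a sketch assuming an open WebSocket `ws`), seeding history with a user message at the start of the conversation:

```typescript
declare const ws: WebSocket;

// Insert a user message at the very beginning of the conversation.
ws.send(JSON.stringify({
  type: "conversation.item.create",
  event_id: "evt_001",      // optional client-generated ID (max length 512)
  previous_item_id: "root", // "root" prepends; omit to append at the end
  item: {
    type: "message",
    role: "user",
    content: [{ type: "input_text", text: "My name is Ada." }],
  },
}));
```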

ConversationItemCreatedEvent { event_id, item, type, previous_item_id }

Returned when a conversation item is created. There are several scenarios that produce this event:

  • The server is generating a Response, which if successful will produce either one or two Items, which will be of type message (role assistant) or type function_call.
  • The input audio buffer has been committed, either by the client or the server (in server_vad mode). The server will take the content of the input audio buffer and add it to a new user message Item.
  • The client has sent a conversation.item.create event to add a new Item to the Conversation.
event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

type: "conversation.item.created"

The event type, must be conversation.item.created.

previous_item_id?: string | null

The ID of the preceding item in the Conversation context, allows the client to understand the order of the conversation. Can be null if the item has no predecessor.

ConversationItemDeleteEvent { item_id, type, event_id }

Send this event when you want to remove any item from the conversation history. The server will respond with a conversation.item.deleted event, unless the item does not exist in the conversation history, in which case the server will respond with an error.

item_id: string

The ID of the item to delete.

type: "conversation.item.delete"

The event type, must be conversation.item.delete.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
ConversationItemDeletedEvent { event_id, item_id, type }

Returned when an item in the conversation is deleted by the client with a conversation.item.delete event. This event is used to synchronize the server's understanding of the conversation history with the client's view.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item that was deleted.

type: "conversation.item.deleted"

The event type, must be conversation.item.deleted.

ConversationItemDone { event_id, item, type, previous_item_id }

Returned when a conversation item is finalized.

The event will include the full content of the Item except for audio data, which can be retrieved separately with a conversation.item.retrieve event if needed.

event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

type: "conversation.item.done"

The event type, must be conversation.item.done.

previous_item_id?: string | null

The ID of the item that precedes this one, if any. This is used to maintain ordering when items are inserted.

ConversationItemInputAudioTranscriptionCompletedEvent { content_index, event_id, item_id, 4 more }

This event is the output of audio transcription for user audio written to the user audio buffer. Transcription begins when the input audio buffer is committed by the client or server (when VAD is enabled). Transcription runs asynchronously with Response creation, so this event may come before or after the Response events.

Realtime API models accept audio natively, and thus input transcription is a separate process run on a separate ASR (Automatic Speech Recognition) model. The transcript may diverge somewhat from the model's interpretation, and should be treated as a rough guide.

content_index: number

The index of the content part containing the audio.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item containing the audio that is being transcribed.

transcript: string

The transcribed text.

type: "conversation.item.input_audio_transcription.completed"

The event type, must be conversation.item.input_audio_transcription.completed.

usage: TranscriptTextUsageTokens { input_tokens, output_tokens, total_tokens, 2 more } | TranscriptTextUsageDuration { seconds, type }

Usage statistics for the transcription, this is billed according to the ASR model's pricing rather than the realtime model's pricing.

One of the following:
TranscriptTextUsageTokens { input_tokens, output_tokens, total_tokens, 2 more }

Usage statistics for models billed by token usage.

input_tokens: number

Number of input tokens billed for this request.

output_tokens: number

Number of output tokens generated.

total_tokens: number

Total number of tokens used (input + output).

type: "tokens"

The type of the usage object. Always tokens for this variant.

input_token_details?: InputTokenDetails { audio_tokens, text_tokens }

Details about the input tokens billed for this request.

audio_tokens?: number

Number of audio tokens billed for this request.

text_tokens?: number

Number of text tokens billed for this request.

TranscriptTextUsageDuration { seconds, type }

Usage statistics for models billed by audio input duration.

seconds: number

Duration of the input audio in seconds.

type: "duration"

The type of the usage object. Always duration for this variant.

logprobs?: Array<LogProbProperties { token, bytes, logprob } > | null

The log probabilities of the transcription.

token: string

The token that was used to generate the log probability.

bytes: Array<number>

The bytes that were used to generate the log probability.

logprob: number

The log probability of the token.

ConversationItemInputAudioTranscriptionDeltaEvent { event_id, item_id, type, 3 more }

Returned when the text value of an input audio transcription content part is updated with incremental transcription results.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item containing the audio that is being transcribed.

type: "conversation.item.input_audio_transcription.delta"

The event type, must be conversation.item.input_audio_transcription.delta.

content_index?: number

The index of the content part in the item's content array.

delta?: string

The text delta.

logprobs?: Array<LogProbProperties { token, bytes, logprob } > | null

The log probabilities of the transcription. These can be enabled by configuring the session with "include": ["item.input_audio_transcription.logprobs"]. Each entry in the array corresponds to a log probability of which token would be selected for this chunk of transcription. This can help identify whether multiple valid options existed for a given chunk of transcription.

token: string

The token that was used to generate the log probability.

bytes: Array<number>

The bytes that were used to generate the log probability.

logprob: number

The log probability of the token.

ConversationItemInputAudioTranscriptionFailedEvent { content_index, error, event_id, 2 more }

Returned when input audio transcription is configured, and a transcription request for a user message failed. These events are separate from other error events so that the client can identify the related Item.

content_index: number

The index of the content part containing the audio.

error: Error { code, message, param, type }

Details of the transcription error.

code?: string

Error code, if any.

message?: string

A human-readable error message.

param?: string

Parameter related to the error, if any.

type?: string

The type of error.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the user message item.

type: "conversation.item.input_audio_transcription.failed"

The event type, must be conversation.item.input_audio_transcription.failed.

ConversationItemInputAudioTranscriptionSegment { id, content_index, end, 6 more }

Returned when an input audio transcription segment is identified for an item.

id: string

The segment identifier.

content_index: number

The index of the input audio content part within the item.

end: number

End time of the segment in seconds.

format: float
event_id: string

The unique ID of the server event.

item_id: string

The ID of the item containing the input audio content.

speaker: string

The detected speaker label for this segment.

start: number

Start time of the segment in seconds.

format: float
text: string

The text for this segment.

type: "conversation.item.input_audio_transcription.segment"

The event type, must be conversation.item.input_audio_transcription.segment.

ConversationItemRetrieveEvent { item_id, type, event_id }

Send this event when you want to retrieve the server's representation of a specific item in the conversation history. This is useful, for example, to inspect user audio after noise cancellation and VAD. The server will respond with a conversation.item.retrieved event, unless the item does not exist in the conversation history, in which case the server will respond with an error.

item_id: string

The ID of the item to retrieve.

type: "conversation.item.retrieve"

The event type, must be conversation.item.retrieve.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
ConversationItemTruncateEvent { audio_end_ms, content_index, item_id, 2 more }

Send this event to truncate a previous assistant message’s audio. The server will produce audio faster than realtime, so this event is useful when the user interrupts to truncate audio that has already been sent to the client but not yet played. This will synchronize the server's understanding of the audio with the client's playback.

Truncating audio will delete the server-side text transcript to ensure there is not text in the context that hasn't been heard by the user.

If successful, the server will respond with a conversation.item.truncated event.

audio_end_ms: number

Inclusive duration up to which audio is truncated, in milliseconds. If the audio_end_ms is greater than the actual audio duration, the server will respond with an error.

content_index: number

The index of the content part to truncate. Set this to 0.

item_id: string

The ID of the assistant message item to truncate. Only assistant message items can be truncated.

type: "conversation.item.truncate"

The event type, must be conversation.item.truncate.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
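
A minimal sketch of using this event, assuming the client's audio player tracks how many milliseconds of the assistant item have actually been played:

```typescript
declare const ws: WebSocket;

// On user interruption, align the server's context with what the
// listener actually heard.
function truncateToPlayback(itemId: string, playedMs: number) {
  ws.send(JSON.stringify({
    type: "conversation.item.truncate",
    item_id: itemId,        // assistant message currently being played
    content_index: 0,       // always 0, per the field description above
    audio_end_ms: playedMs, // must not exceed the actual audio duration
  }));
}
```
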
ConversationItemTruncatedEvent { audio_end_ms, content_index, event_id, 2 more }

Returned when an earlier assistant audio message item is truncated by the client with a conversation.item.truncate event. This event is used to synchronize the server's understanding of the audio with the client's playback.

This action will truncate the audio and remove the server-side text transcript to ensure there is no text in the context that hasn't been heard by the user.

audio_end_ms: number

The duration up to which the audio was truncated, in milliseconds.

content_index: number

The index of the content part that was truncated.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the assistant message item that was truncated.

type: "conversation.item.truncated"

The event type, must be conversation.item.truncated.

ConversationItemWithReference { id, arguments, call_id, 7 more }

The item to add to the conversation.

id?: string

For an item of type (message | function_call | function_call_output) this field allows the client to assign the unique ID of the item. It is not required because the server will generate one if not provided.

For an item of type item_reference, this field is required and is a reference to any item that has previously existed in the conversation.

arguments?: string

The arguments of the function call (for function_call items).

call_id?: string

The ID of the function call (for function_call and function_call_output items). If passed on a function_call_output item, the server will check that a function_call item with the same ID exists in the conversation history.

content?: Array<Content>

The content of the message, applicable for message items.

  • Message items of role system support only input_text content.
  • Message items of role user support input_text and input_audio content.
  • Message items of role assistant support text content.
id?: string

ID of a previous conversation item to reference (for item_reference content types in response.create events). These can reference both client and server created items.

audio?: string

Base64-encoded audio bytes, used for input_audio content type.

text?: string

The text content, used for input_text and text content types.

transcript?: string

The transcript of the audio, used for input_audio content type.

type?: "input_text" | "input_audio" | "item_reference" | "text"

The content type (input_text, input_audio, item_reference, text).

One of the following:
"input_text"
"input_audio"
"item_reference"
"text"
name?: string

The name of the function being called (for function_call items).

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item.

output?: string

The output of the function call (for function_call_output items).

role?: "user" | "assistant" | "system"

The role of the message sender (user, assistant, system), only applicable for message items.

One of the following:
"user"
"assistant"
"system"
status?: "completed" | "incomplete" | "in_progress"

The status of the item (completed, incomplete, in_progress). These have no effect on the conversation, but are accepted for consistency with the conversation.item.created event.

One of the following:
"completed"
"incomplete"
"in_progress"
type?: "message" | "function_call" | "function_call_output" | "item_reference"

The type of the item (message, function_call, function_call_output, item_reference).

One of the following:
"message"
"function_call"
"function_call_output"
"item_reference"
InputAudioBufferAppendEvent { audio, type, event_id }

Send this event to append audio bytes to the input audio buffer. The audio buffer is temporary storage you can write to and later commit. A "commit" will create a new user message item in the conversation history from the buffer content and clear the buffer. Input audio transcription (if enabled) will be generated when the buffer is committed.

If VAD is enabled the audio buffer is used to detect speech and the server will decide when to commit. When Server VAD is disabled, you must commit the audio buffer manually. Input audio noise reduction operates on writes to the audio buffer.

The client may choose how much audio to place in each event up to a maximum of 15 MiB, for example streaming smaller chunks from the client may allow the VAD to be more responsive. Unlike most other client events, the server will not send a confirmation response to this event.

audio: string

Base64-encoded audio bytes. This must be in the format specified by the input_audio_format field in the session configuration.

type: "input_audio_buffer.append"

The event type, must be input_audio_buffer.append.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
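
A minimal sketch of streaming audio chunks (Node-style base64 encoding; browsers would need a different base64 helper):

```typescript
declare const ws: WebSocket;

// Send small chunks so server VAD can react promptly. The bytes must
// already be in the session's configured input format (e.g. PCM16 24kHz mono).
function appendAudio(chunk: Uint8Array) {
  ws.send(JSON.stringify({
    type: "input_audio_buffer.append",
    audio: Buffer.from(chunk).toString("base64"), // Node; use a base64 helper in browsers
  }));
}
```
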
InputAudioBufferClearEvent { type, event_id }

Send this event to clear the audio bytes in the buffer. The server will respond with an input_audio_buffer.cleared event.

type: "input_audio_buffer.clear"

The event type, must be input_audio_buffer.clear.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
InputAudioBufferClearedEvent { event_id, type }

Returned when the input audio buffer is cleared by the client with an input_audio_buffer.clear event.

event_id: string

The unique ID of the server event.

type: "input_audio_buffer.cleared"

The event type, must be input_audio_buffer.cleared.

InputAudioBufferCommitEvent { type, event_id }

Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event, the server will commit the audio buffer automatically.

Committing the input audio buffer will trigger input audio transcription (if enabled in session configuration), but it will not create a response from the model. The server will respond with an input_audio_buffer.committed event.

type: "input_audio_buffer.commit"

The event type, must be input_audio_buffer.commit.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
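
With turn detection disabled (turn_detection: null), a client ends the user's turn explicitly; note that committing alone never triggers a model response. A sketch:

```typescript
declare const ws: WebSocket;

// Manual turn-taking: commit the buffered audio, then request inference.
function endUserTurn() {
  ws.send(JSON.stringify({ type: "input_audio_buffer.commit" }));
  ws.send(JSON.stringify({ type: "response.create" })); // commit alone creates no response
}
```
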
InputAudioBufferCommittedEvent { event_id, item_id, type, previous_item_id }

Returned when an input audio buffer is committed, either by the client or automatically in server VAD mode. The item_id property is the ID of the user message item that will be created, thus a conversation.item.created event will also be sent to the client.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the user message item that will be created.

type: "input_audio_buffer.committed"

The event type, must be input_audio_buffer.committed.

previous_item_id?: string | null

The ID of the preceding item after which the new item will be inserted. Can be null if the item has no predecessor.

InputAudioBufferDtmfEventReceivedEvent { event, received_at, type }

SIP Only: Returned when a DTMF event is received. A DTMF event is a message that represents a telephone keypad press (0–9, *, #, A–D). The event property is the key that the user pressed. The received_at property is the UTC Unix timestamp at which the server received the event.

event: string

The telephone keypad key that was pressed by the user.

received_at: number

The UTC Unix timestamp when the DTMF event was received by the server.

type: "input_audio_buffer.dtmf_event_received"

The event type, must be input_audio_buffer.dtmf_event_received.

InputAudioBufferSpeechStartedEvent { audio_start_ms, event_id, item_id, type }

Sent by the server when in server_vad mode to indicate that speech has been detected in the audio buffer. This can happen any time audio is added to the buffer (unless speech is already detected). The client may want to use this event to interrupt audio playback or provide visual feedback to the user.

The client should expect to receive an input_audio_buffer.speech_stopped event when speech stops. The item_id property is the ID of the user message item that will be created when speech stops and will also be included in the input_audio_buffer.speech_stopped event (unless the client manually commits the audio buffer during VAD activation).

audio_start_ms: number

Milliseconds from the start of all audio written to the buffer during the session when speech was first detected. This will correspond to the beginning of audio sent to the model, and thus includes the prefix_padding_ms configured in the Session.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the user message item that will be created when speech stops.

type: "input_audio_buffer.speech_started"

The event type, must be input_audio_buffer.speech_started.
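
A common client reaction, sketched below with a hypothetical stopLocalPlayback hook, is to halt audio output as soon as barge-in is detected:

```typescript
declare const ws: WebSocket;
declare function stopLocalPlayback(): void; // hypothetical audio-player hook

ws.addEventListener("message", (e: MessageEvent) => {
  const event = JSON.parse(e.data as string);
  if (event.type === "input_audio_buffer.speech_started") {
    stopLocalPlayback(); // stop speaking over the user
  }
});
```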

InputAudioBufferSpeechStoppedEvent { audio_end_ms, event_id, item_id, type }

Returned in server_vad mode when the server detects the end of speech in the audio buffer. The server will also send a conversation.item.created event with the user message item that is created from the audio buffer.

audio_end_ms: number

Milliseconds since the session started when speech stopped. This will correspond to the end of audio sent to the model, and thus includes the min_silence_duration_ms configured in the Session.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the user message item that will be created.

type: "input_audio_buffer.speech_stopped"

The event type, must be input_audio_buffer.speech_stopped.

InputAudioBufferTimeoutTriggered { audio_end_ms, audio_start_ms, event_id, 2 more }

Returned when the Server VAD timeout is triggered for the input audio buffer. This is configured with idle_timeout_ms in the turn_detection settings of the session, and it indicates that there hasn't been any speech detected for the configured duration.

The audio_start_ms and audio_end_ms fields indicate the segment of audio after the last model response up to the triggering time, as an offset from the beginning of audio written to the input audio buffer. This means it demarcates the segment of audio that was silent and the difference between the start and end values will roughly match the configured timeout.

The empty audio will be committed to the conversation as an input_audio item (there will be an input_audio_buffer.committed event) and a model response will be generated. There may be speech that didn't trigger VAD but is still detected by the model, so the model may respond with something relevant to the conversation or a prompt to continue speaking.

audio_end_ms: number

Millisecond offset of audio written to the input audio buffer at the time the timeout was triggered.

audio_start_ms: number

Millisecond offset of audio written to the input audio buffer that was after the playback time of the last model response.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item associated with this segment.

type: "input_audio_buffer.timeout_triggered"

The event type, must be input_audio_buffer.timeout_triggered.

LogProbProperties { token, bytes, logprob }

A log probability object.

token: string

The token that was used to generate the log probability.

bytes: Array<number>

The bytes that were used to generate the log probability.

logprob: number

The log probability of the token.
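
Since logprob is a natural-log probability, recovering a plain probability is a single call:

```typescript
// Convert a log probability back into a 0..1 probability.
function toProbability(entry: { token: string; logprob: number }): number {
  return Math.exp(entry.logprob); // e.g. logprob ≈ -0.105 → ≈ 0.90
}
```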

McpListToolsCompleted { event_id, item_id, type }

Returned when listing MCP tools has completed for an item.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP list tools item.

type: "mcp_list_tools.completed"

The event type, must be mcp_list_tools.completed.

McpListToolsFailed { event_id, item_id, type }

Returned when listing MCP tools has failed for an item.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP list tools item.

type: "mcp_list_tools.failed"

The event type, must be mcp_list_tools.failed.

McpListToolsInProgress { event_id, item_id, type }

Returned when listing MCP tools is in progress for an item.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP list tools item.

type: "mcp_list_tools.in_progress"

The event type, must be mcp_list_tools.in_progress.

NoiseReductionType = "near_field" | "far_field"

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

One of the following:
"near_field"
"far_field"
OutputAudioBufferClearEvent { type, event_id }

WebRTC/SIP Only: Emit to cut off the current audio response. This will trigger the server to stop generating audio and emit an output_audio_buffer.cleared event. This event should be preceded by a response.cancel client event to stop the generation of the current response.

type: "output_audio_buffer.clear"

The event type, must be output_audio_buffer.clear.

event_id?: string

The unique ID of the client event used for error handling.

RateLimitsUpdatedEvent { event_id, rate_limits, type }

Emitted at the beginning of a Response to indicate the updated rate limits. When a Response is created, some tokens will be "reserved" for the output tokens; the rate limits shown here reflect that reservation, which is then adjusted accordingly once the Response is completed.

event_id: string

The unique ID of the server event.

rate_limits: Array<RateLimit>

List of rate limit information.

limit?: number

The maximum allowed value for the rate limit.

name?: "requests" | "tokens"

The name of the rate limit (requests, tokens).

One of the following:
"requests"
"tokens"
remaining?: number

The remaining value before the limit is reached.

reset_seconds?: number

Seconds until the rate limit resets.

type: "rate_limits.updated"

The event type, must be rate_limits.updated.

RealtimeAudioConfig { input, output }

Configuration for input and output audio.

input?: RealtimeAudioConfigInput { format, noise_reduction, transcription, turn_detection }
output?: RealtimeAudioConfigOutput { format, speed, voice }
RealtimeAudioConfigInput { format, noise_reduction, transcription, turn_detection }

format?: RealtimeAudioFormats

The format of the input audio.

noise_reduction?: NoiseReduction { type }

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

transcription?: AudioTranscription { language, model, prompt }

Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription, these offer additional guidance to the transcription service.

turn_detection?: RealtimeAudioInputTurnDetection | null

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model responses.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

RealtimeAudioConfigOutput { format, speed, voice }

format?: RealtimeAudioFormats

The format of the output audio.

speed?: number

The speed of the model's spoken response as a multiple of the original speed. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.

This parameter is a post-processing adjustment to the audio after it is generated, it's also possible to prompt the model to speak faster or slower.

maximum: 1.5
minimum: 0.25
voice?: string | "alloy" | "ash" | "ballad" | 7 more | ID { id }

The voice the model uses to respond. Supported built-in voices are alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, and cedar. You may also provide a custom voice object with an id, for example { "id": "voice_1234" }. Voice cannot be changed during the session once the model has responded with audio at least once. We recommend marin and cedar for best quality.

One of the following:
string
"alloy" | "ash" | "ballad" | 7 more
"alloy"
"ash"
"ballad"
"coral"
"echo"
"sage"
"shimmer"
"verse"
"marin"
"cedar"
ID { id }

Custom voice reference.

id: string

The custom voice ID, e.g. voice_1234.
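
Putting the output fields together, an illustrative configuration (values are examples):

```typescript
// Illustrative output-audio configuration; speed must be within 0.25–1.5.
const output = {
  format: { type: "audio/pcm", rate: 24000 },
  speed: 1.1,
  voice: "marin", // or a custom voice: { id: "voice_1234" }
};
```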

RealtimeAudioFormats = AudioPCM { rate, type } | AudioPCMU { type } | AudioPCMA { type }

The audio format: PCM at a 24kHz sample rate, G.711 μ-law, or G.711 A-law.

One of the following:
AudioPCM { rate, type }

The PCM audio format. Only a 24kHz sample rate is supported.

rate?: 24000

The sample rate of the audio. Always 24000.

type?: "audio/pcm"

The audio format. Always audio/pcm.

AudioPCMU { type }

The G.711 μ-law format.

type?: "audio/pcmu"

The audio format. Always audio/pcmu.

AudioPCMA { type }

The G.711 A-law format.

type?: "audio/pcma"

The audio format. Always audio/pcma.

RealtimeAudioInputTurnDetection = ServerVad { type, create_response, idle_timeout_ms, 4 more } | SemanticVad { type, create_response, eagerness, interrupt_response } | null

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model responses.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

One of the following:
ServerVad { type, create_response, idle_timeout_ms, 4 more }

Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.

type: "server_vad"

Type of turn detection, server_vad to turn on simple Server VAD.

create_response?: boolean

Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

idle_timeout_ms?: number | null

Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.

The timeout value will be applied after the last model response's audio has finished playing, i.e. it's set to the response.done time plus audio playback duration.

An input_audio_buffer.timeout_triggered event (plus events associated with the Response) will be emitted when the timeout is reached. Idle timeout is currently only supported for server_vad mode.

minimum: 5000
maximum: 30000
interrupt_response?: boolean

Whether or not to automatically interrupt (cancel) any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

prefix_padding_ms?: number

Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: number

Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: number

Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

SemanticVad { type, create_response, eagerness, interrupt_response }

Server-side semantic turn detection which uses a model to determine when the user has finished speaking.

type: "semantic_vad"

Type of turn detection, semantic_vad to turn on Semantic VAD.

create_response?: boolean

Whether or not to automatically generate a response when a VAD stop event occurs.

eagerness?: "low" | "medium" | "high" | "auto"

Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.

One of the following:
"low"
"medium"
"high"
"auto"
interrupt_response?: boolean

Whether or not to automatically interrupt any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs.
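
Two illustrative turn_detection values, one per mode (thresholds and timeouts are example choices, not defaults):

```typescript
// Simple Server VAD with an idle timeout (allowed range 5000–30000 ms).
const serverVad = {
  type: "server_vad",
  threshold: 0.6,          // higher = needs louder audio; suits noisy rooms
  prefix_padding_ms: 300,
  silence_duration_ms: 500,
  idle_timeout_ms: 10000,  // nudge the user after 10s of silence
};

// Semantic VAD that waits longer before concluding the turn.
const semanticVad = {
  type: "semantic_vad",
  eagerness: "low", // max timeout of 8s, per the description above
  interrupt_response: true,
};
```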

RealtimeClientEvent = ConversationItemCreateEvent { item, type, event_id, previous_item_id } | ConversationItemDeleteEvent { item_id, type, event_id } | ConversationItemRetrieveEvent { item_id, type, event_id } | 8 more

A realtime client event.

One of the following:
ConversationItemCreateEvent { item, type, event_id, previous_item_id }

Add a new Item to the Conversation's context, including messages, function calls, and function call responses. This event can be used both to populate a "history" of the conversation and to add new items mid-stream, but has the current limitation that it cannot populate assistant audio messages.

If successful, the server will respond with a conversation.item.created event, otherwise an error event will be sent.

item: ConversationItem

A single item within a Realtime conversation.

type: "conversation.item.create"

The event type, must be conversation.item.create.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
previous_item_id?: string

The ID of the preceding item after which the new item will be inserted. If not set, the new item will be appended to the end of the conversation.

If set to root, the new item will be added to the beginning of the conversation.

If set to an existing ID, it allows an item to be inserted mid-conversation. If the ID cannot be found, an error will be returned and the item will not be added.

ConversationItemDeleteEvent { item_id, type, event_id }

Send this event when you want to remove any item from the conversation history. The server will respond with a conversation.item.deleted event, unless the item does not exist in the conversation history, in which case the server will respond with an error.

item_id: string

The ID of the item to delete.

type: "conversation.item.delete"

The event type, must be conversation.item.delete.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
ConversationItemRetrieveEvent { item_id, type, event_id }

Send this event when you want to retrieve the server's representation of a specific item in the conversation history. This is useful, for example, to inspect user audio after noise cancellation and VAD. The server will respond with a conversation.item.retrieved event, unless the item does not exist in the conversation history, in which case the server will respond with an error.

item_id: string

The ID of the item to retrieve.

type: "conversation.item.retrieve"

The event type, must be conversation.item.retrieve.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
ConversationItemTruncateEvent { audio_end_ms, content_index, item_id, 2 more }

Send this event to truncate a previous assistant message’s audio. The server will produce audio faster than realtime, so this event is useful when the user interrupts to truncate audio that has already been sent to the client but not yet played. This will synchronize the server's understanding of the audio with the client's playback.

Truncating audio will delete the server-side text transcript to ensure there is not text in the context that hasn't been heard by the user.

If successful, the server will respond with a conversation.item.truncated event.

audio_end_ms: number

Inclusive duration up to which audio is truncated, in milliseconds. If the audio_end_ms is greater than the actual audio duration, the server will respond with an error.

content_index: number

The index of the content part to truncate. Set this to 0.

item_id: string

The ID of the assistant message item to truncate. Only assistant message items can be truncated.

type: "conversation.item.truncate"

The event type, must be conversation.item.truncate.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
InputAudioBufferAppendEvent { audio, type, event_id }

Send this event to append audio bytes to the input audio buffer. The audio buffer is temporary storage you can write to and later commit. A "commit" will create a new user message item in the conversation history from the buffer content and clear the buffer. Input audio transcription (if enabled) will be generated when the buffer is committed.

If VAD is enabled the audio buffer is used to detect speech and the server will decide when to commit. When Server VAD is disabled, you must commit the audio buffer manually. Input audio noise reduction operates on writes to the audio buffer.

The client may choose how much audio to place in each event up to a maximum of 15 MiB, for example streaming smaller chunks from the client may allow the VAD to be more responsive. Unlike most other client events, the server will not send a confirmation response to this event.

audio: string

Base64-encoded audio bytes. This must be in the format specified by the input_audio_format field in the session configuration.

type: "input_audio_buffer.append"

The event type, must be input_audio_buffer.append.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
InputAudioBufferClearEvent { type, event_id }

Send this event to clear the audio bytes in the buffer. The server will respond with an input_audio_buffer.cleared event.

type: "input_audio_buffer.clear"

The event type, must be input_audio_buffer.clear.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
OutputAudioBufferClearEvent { type, event_id }

WebRTC/SIP Only: Emit to cut off the current audio response. This will trigger the server to stop generating audio and emit an output_audio_buffer.cleared event. This event should be preceded by a response.cancel client event to stop the generation of the current response.

type: "output_audio_buffer.clear"

The event type, must be output_audio_buffer.clear.

event_id?: string

The unique ID of the client event used for error handling.

InputAudioBufferCommitEvent { type, event_id }

Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event, the server will commit the audio buffer automatically.

Committing the input audio buffer will trigger input audio transcription (if enabled in session configuration), but it will not create a response from the model. The server will respond with an input_audio_buffer.committed event.

type: "input_audio_buffer.commit"

The event type, must be input_audio_buffer.commit.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
ResponseCancelEvent { type, event_id, response_id }

Send this event to cancel an in-progress response. The server will respond with a response.done event with a status of response.status=cancelled. If there is no response to cancel, the server will respond with an error. It's safe to call response.cancel even if no response is in progress; an error will be returned and the session will remain unaffected.

type: "response.cancel"

The event type, must be response.cancel.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
response_id?: string

A specific response ID to cancel - if not provided, will cancel an in-progress response in the default conversation.

ResponseCreateEvent { type, event_id, response }

This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.

A Response will include at least one Item, and may have two, in which case the second will be a function call. These Items will be appended to the conversation history by default.

The server will respond with a response.created event, events for Items and content created, and finally a response.done event to indicate the Response is complete.

The response.create event includes inference configuration like instructions and tools. If these are set, they will override the Session's configuration for this Response only.

Responses can be created out-of-band of the default Conversation, meaning that they can have arbitrary input, and it's possible to disable writing the output to the Conversation. Only one Response can write to the default Conversation at a time, but otherwise multiple Responses can be created in parallel. The metadata field is a good way to disambiguate multiple simultaneous Responses.

Clients can set conversation to none to create a Response that does not write to the default Conversation. Arbitrary input can be provided with the input field, which is an array accepting raw Items and references to existing Items.
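
An out-of-band request might look like this sketch (the metadata tag and referenced item ID are illustrative):

```typescript
declare const ws: WebSocket;

// Run a side task without writing to the default conversation.
ws.send(JSON.stringify({
  type: "response.create",
  response: {
    conversation: "none",                  // do not write output to the default conversation
    metadata: { purpose: "call_summary" }, // helps disambiguate simultaneous responses
    input: [
      { type: "item_reference", id: "item_abc123" }, // reference an existing item
      {
        type: "message",
        role: "user",
        content: [{ type: "input_text", text: "Summarize the call so far." }],
      },
    ],
  },
}));
```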

type: "response.create"

The event type, must be response.create.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength: 512
response?: RealtimeResponseCreateParams { audio, conversation, input, 7 more }

Create a new Realtime response with these parameters.

SessionUpdateEvent { session, type, event_id }

Send this event to update the session’s configuration. The client may send this event at any time to update any field except for voice and model. voice can be updated only if there have been no other audio outputs yet.

When the server receives a session.update, it will respond with a session.updated event showing the full, effective configuration. Only the fields that are present in the session.update are updated. To clear a field like instructions, pass an empty string. To clear a field like tools, pass an empty array. To clear a field like turn_detection, pass null.
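
For instance, a sketch that updates instructions and turns off turn detection in a single event:

```typescript
declare const ws: WebSocket;

// Only the fields present are updated; the server replies with session.updated.
ws.send(JSON.stringify({
  type: "session.update",
  session: {
    type: "realtime",
    instructions: "Be extremely succinct.",
    audio: {
      input: {
        turn_detection: null, // disable VAD; the client must now trigger responses
      },
    },
  },
}));
```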

session: RealtimeSessionCreateRequest { type, audio, include, 9 more } | RealtimeTranscriptionSessionCreateRequest { type, audio, include }

Update the Realtime session. Choose either a realtime session or a transcription session.

One of the following:
RealtimeSessionCreateRequest { type, audio, include, 9 more }

Realtime session object configuration.

type: "realtime"

The type of session to create. Always realtime for the Realtime API.

audio?: RealtimeAudioConfig { input, output }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

instructions?: string

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: number | "inf"

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

One of the following:
number
"inf"
"inf"
model?: (string & {}) | "gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more

The Realtime model used for this session.

One of the following:
(string & {})
"gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more
"gpt-realtime"
"gpt-realtime-1.5"
"gpt-realtime-2025-08-28"
"gpt-4o-realtime-preview"
"gpt-4o-realtime-preview-2024-10-01"
"gpt-4o-realtime-preview-2024-12-17"
"gpt-4o-realtime-preview-2025-06-03"
"gpt-4o-mini-realtime-preview"
"gpt-4o-mini-realtime-preview-2024-12-17"
"gpt-realtime-mini"
"gpt-realtime-mini-2025-10-06"
"gpt-realtime-mini-2025-12-15"
"gpt-audio-1.5"
"gpt-audio-mini"
"gpt-audio-mini-2025-10-06"
"gpt-audio-mini-2025-12-15"
output_modalities?: Array<"text" | "audio">

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

One of the following:
"text"
"audio"
prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

tool_choice?: ToolChoiceOptions | ToolChoiceFunction { name, type } | ToolChoiceMcp { server_label, type, name }

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

tools?: RealtimeToolsConfig

Tools available to the model.

tracing?: RealtimeTracingConfig | null

The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

truncation?: RealtimeTruncation

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
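
As a sketch, a session could opt into the retention-based behavior described above (the retention_ratio field names are an assumption about the RealtimeTruncation shape, not confirmed on this page):

```ts
// Sketch: retain only the most recent 75% of the context window when
// truncation triggers, trading a one-time cache bust for fewer future ones.
// The retention_ratio variant is an assumed shape of RealtimeTruncation.
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "session.update",
  session: {
    type: "realtime",
    truncation: { type: "retention_ratio", retention_ratio: 0.75 },
  },
}));
```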

RealtimeTranscriptionSessionCreateRequest { type, audio, include }

Realtime transcription session object configuration.

type: "transcription"

The type of session to create. Always transcription for transcription sessions.

Configuration for input audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

type: "session.update"

The event type, must be session.update.

event_id?: string

Optional client-generated ID used to identify this event. This is an arbitrary string that a client may assign. It will be passed back if there is an error with the event, but the corresponding session.updated event will not include it.

maxLength: 512
RealtimeConversationItemAssistantMessage { content, role, type, 3 more }

An assistant message item in a Realtime conversation.

content: Array<Content>

The content of the message.

audio?: string

Base64-encoded audio bytes; these will be parsed as the format specified in the session output audio type configuration, which defaults to PCM 16-bit 24kHz mono if not specified.

text?: string

The text content.

transcript?: string

The transcript of the audio content; this will always be present if the output type is audio.

type?: "output_text" | "output_audio"

The content type, output_text or output_audio depending on the session output_modalities configuration.

One of the following:
"output_text"
"output_audio"
role: "assistant"

The role of the message sender. Always assistant.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemFunctionCall { arguments, name, type, 4 more }

A function call item in a Realtime conversation.

arguments: string

The arguments of the function call. This is a JSON-encoded string representing the arguments passed to the function, for example {"arg1": "value1", "arg2": 42}.

name: string

The name of the function being called.

type: "function_call"

The type of the item. Always function_call.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

call_id?: string

The ID of the function call.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemFunctionCallOutput { call_id, output, type, 3 more }

A function call output item in a Realtime conversation.

call_id: string

The ID of the function call this output is for.

output: string

The output of the function call; this is free text and can contain any information or simply be empty.

type: "function_call_output"

The type of the item. Always function_call_output.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemSystemMessage { content, role, type, 3 more }

A system message in a Realtime conversation can be used to provide additional context or instructions to the model. This is similar but distinct from the instruction prompt provided at the start of a conversation, as system messages can be added at any point in the conversation. For major changes to the conversation's behavior, use instructions, but for smaller updates (e.g. "the user is now asking about a different topic"), use system messages.

content: Array<Content>

The content of the message.

text?: string

The text content.

type?: "input_text"

The content type. Always input_text for system messages.

role: "system"

The role of the message sender. Always system.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemUserMessage { content, role, type, 3 more }

A user message item in a Realtime conversation.

content: Array<Content>

The content of the message.

audio?: string

Base64-encoded audio bytes (for input_audio); these will be parsed as the format specified in the session input audio type configuration, which defaults to PCM 16-bit 24kHz mono if not specified.

detail?: "auto" | "low" | "high"

The detail level of the image (for input_image). auto will default to high.

One of the following:
"auto"
"low"
"high"
image_url?: string

Base64-encoded image bytes (for input_image) as a data URI. For example data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA.... Supported formats are PNG and JPEG.

text?: string

The text content (for input_text).

transcript?: string

Transcript of the audio (for input_audio). This is not sent to the model, but will be attached to the message item for reference.

type?: "input_text" | "input_audio" | "input_image"

The content type (input_text, input_audio, or input_image).

One of the following:
"input_text"
"input_audio"
"input_image"
role: "user"

The role of the message sender. Always user.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeError { message, type, code, 2 more }

Details of the error.

message: string

A human-readable error message.

type: string

The type of error (e.g., "invalid_request_error", "server_error").

code?: string | null

Error code, if any.

event_id?: string | null

The event_id of the client event that caused the error, if applicable.

param?: string | null

Parameter related to the error, if any.

RealtimeErrorEvent { error, event_id, type }

Returned when an error occurs, which could be a client problem or a server problem. Most errors are recoverable and the session will stay open; we recommend that implementers monitor and log error messages by default.

error: RealtimeError { message, type, code, 2 more }

Details of the error.

event_id: string

The unique ID of the server event.

type: "error"

The event type, must be error.
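
A sketch of the recommended default logging, assuming ws is the session socket:

```ts
// Sketch: log every error event; the session stays open for most errors.
declare const ws: WebSocket;

ws.addEventListener("message", (e) => {
  const event = JSON.parse(e.data as string);
  if (event.type === "error") {
    const { type, code, message, param, event_id } = event.error;
    console.error(`realtime error [${type}/${code ?? "n/a"}]: ${message}`, {
      param,
      caused_by_client_event: event_id, // present if a client event triggered it
    });
  }
});
```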

RealtimeFunctionTool { description, name, parameters, type }
description?: string

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

name?: string

The name of the function.

parameters?: unknown

Parameters of the function in JSON Schema.

type?: "function"

The type of the tool, i.e. function.

RealtimeMcpApprovalRequest { id, arguments, name, 2 more }

A Realtime item requesting human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

RealtimeMcpApprovalResponse { id, approval_request_id, approve, 2 more }

A Realtime item responding to an MCP approval request.

id: string

The unique ID of the approval response.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

reason?: string | null

Optional reason for the decision.
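
A sketch of answering an approval request by creating this item (the request ID is hypothetical, and we assume the server assigns the response item's own id when omitted):

```ts
// Sketch: approve a pending mcp_approval_request with id "mcpr_123".
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "conversation.item.create",
  item: {
    type: "mcp_approval_response",
    approval_request_id: "mcpr_123",
    approve: true,
    reason: "Read-only lookup approved by operator.",
  },
}));
```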

RealtimeMcpListTools { server_label, tools, type, id }

A Realtime item listing tools available on an MCP server.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

id?: string

The unique ID of the list.

RealtimeMcpProtocolError { code, message, type }
code: number
message: string
type: "protocol_error"
RealtimeMcpToolCall { id, arguments, name, 5 more }

A Realtime item representing an invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

The ID of an associated approval request, if any.

error?: RealtimeMcpProtocolError { code, message, type } | RealtimeMcpToolExecutionError { message, type } | RealtimeMcphttpError { code, message, type } | null

The error from the tool call, if any.

One of the following:
RealtimeMcpProtocolError { code, message, type }
code: number
message: string
type: "protocol_error"
RealtimeMcpToolExecutionError { message, type }
message: string
type: "tool_execution_error"
RealtimeMcphttpError { code, message, type }
code: number
message: string
type: "http_error"
output?: string | null

The output from the tool call.

RealtimeMcpToolExecutionError { message, type }
message: string
type: "tool_execution_error"
RealtimeMcphttpError { code, message, type }
code: number
message: string
type: "http_error"
RealtimeResponse { id, audio, conversation_id, 8 more }

The response resource.

id?: string

The unique ID of the response; it will look like resp_1234.

audio?: Audio { output }

Configuration for audio output.

output?: Output { format, voice }

Configuration for the output audio, including format and voice.

voice?: (string & {}) | "alloy" | "ash" | "ballad" | 7 more

The voice the model uses to respond. Voice cannot be changed during the session once the model has responded with audio at least once. Current voice options are alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, and cedar. We recommend marin and cedar for best quality.

One of the following:
(string & {})
"alloy" | "ash" | "ballad" | 7 more
"alloy"
"ash"
"ballad"
"coral"
"echo"
"sage"
"shimmer"
"verse"
"marin"
"cedar"
conversation_id?: string

Which conversation the response is added to, determined by the conversation field in the response.create event. If auto, the response will be added to the default conversation and the value of conversation_id will be an id like conv_1234. If none, the response will not be added to any conversation and the value of conversation_id will be null. If responses are being triggered automatically by VAD, the response will be added to the default conversation.

max_output_tokens?: number | "inf"

The maximum number of output tokens for a single assistant response, inclusive of tool calls, that was applied to this response.

One of the following:
number
"inf"
"inf"
metadata?: Metadata | null

Set of up to 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

object?: "realtime.response"

The object type, must be realtime.response.

output?: Array<ConversationItem>

The list of output items generated by the response.

One of the following:
RealtimeConversationItemSystemMessage { content, role, type, 3 more }

A system message in a Realtime conversation can be used to provide additional context or instructions to the model. This is similar but distinct from the instruction prompt provided at the start of a conversation, as system messages can be added at any point in the conversation. For major changes to the conversation's behavior, use instructions, but for smaller updates (e.g. "the user is now asking about a different topic"), use system messages.

content: Array<Content>

The content of the message.

text?: string

The text content.

type?: "input_text"

The content type. Always input_text for system messages.

role: "system"

The role of the message sender. Always system.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemUserMessage { content, role, type, 3 more }

A user message item in a Realtime conversation.

content: Array<Content>

The content of the message.

audio?: string

Base64-encoded audio bytes (for input_audio); these will be parsed as the format specified in the session input audio type configuration, which defaults to PCM 16-bit 24kHz mono if not specified.

detail?: "auto" | "low" | "high"

The detail level of the image (for input_image). auto will default to high.

One of the following:
"auto"
"low"
"high"
image_url?: string

Base64-encoded image bytes (for input_image) as a data URI. For example data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA.... Supported formats are PNG and JPEG.

text?: string

The text content (for input_text).

transcript?: string

Transcript of the audio (for input_audio). This is not sent to the model, but will be attached to the message item for reference.

type?: "input_text" | "input_audio" | "input_image"

The content type (input_text, input_audio, or input_image).

One of the following:
"input_text"
"input_audio"
"input_image"
role: "user"

The role of the message sender. Always user.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemAssistantMessage { content, role, type, 3 more }

An assistant message item in a Realtime conversation.

content: Array<Content>

The content of the message.

audio?: string

Base64-encoded audio bytes; these will be parsed as the format specified in the session output audio type configuration, which defaults to PCM 16-bit 24kHz mono if not specified.

text?: string

The text content.

transcript?: string

The transcript of the audio content; this will always be present if the output type is audio.

type?: "output_text" | "output_audio"

The content type, output_text or output_audio depending on the session output_modalities configuration.

One of the following:
"output_text"
"output_audio"
role: "assistant"

The role of the message sender. Always assistant.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemFunctionCall { arguments, name, type, 4 more }

A function call item in a Realtime conversation.

arguments: string

The arguments of the function call. This is a JSON-encoded string representing the arguments passed to the function, for example {"arg1": "value1", "arg2": 42}.

name: string

The name of the function being called.

type: "function_call"

The type of the item. Always function_call.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

call_id?: string

The ID of the function call.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemFunctionCallOutput { call_id, output, type, 3 more }

A function call output item in a Realtime conversation.

call_id: string

The ID of the function call this output is for.

output: string

The output of the function call; this is free text and can contain any information or simply be empty.

type: "function_call_output"

The type of the item. Always function_call_output.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeMcpApprovalResponse { id, approval_request_id, approve, 2 more }

A Realtime item responding to an MCP approval request.

id: string

The unique ID of the approval response.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

reason?: string | null

Optional reason for the decision.

RealtimeMcpListTools { server_label, tools, type, id }

A Realtime item listing tools available on an MCP server.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

id?: string

The unique ID of the list.

RealtimeMcpToolCall { id, arguments, name, 5 more }

A Realtime item representing an invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

The ID of an associated approval request, if any.

error?: RealtimeMcpProtocolError { code, message, type } | RealtimeMcpToolExecutionError { message, type } | RealtimeMcphttpError { code, message, type } | null

The error from the tool call, if any.

One of the following:
RealtimeMcpProtocolError { code, message, type }
code: number
message: string
type: "protocol_error"
RealtimeMcpToolExecutionError { message, type }
message: string
type: "tool_execution_error"
RealtimeMcphttpError { code, message, type }
code: number
message: string
type: "http_error"
output?: string | null

The output from the tool call.

RealtimeMcpApprovalRequest { id, arguments, name, 2 more }

A Realtime item requesting human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

output_modalities?: Array<"text" | "audio">

The set of modalities the model used to respond; currently the only possible values are ["audio"] and ["text"]. Audio output always includes a text transcript. Setting the output mode to text will disable audio output from the model.

One of the following:
"text"
"audio"
status?: "completed" | "cancelled" | "failed" | 2 more

The final status of the response (completed, cancelled, failed, incomplete, or in_progress).

One of the following:
"completed"
"cancelled"
"failed"
"incomplete"
"in_progress"
status_details?: RealtimeResponseStatus { error, reason, type }

Additional details about the status.

usage?: RealtimeResponseUsage { input_token_details, input_tokens, output_token_details, 2 more }

Usage statistics for the Response; this will correspond to billing. A Realtime API session will maintain a conversation context and append new Items to the Conversation, thus output from previous turns (text and audio tokens) will become the input for later turns.

RealtimeResponseCreateAudioOutput { output }

Configuration for audio output.

output?: Output { format, voice }

Configuration for the output audio, including format and voice.

voice?: string | "alloy" | "ash" | "ballad" | 7 more | ID { id }

The voice the model uses to respond. Supported built-in voices are alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, and cedar. You may also provide a custom voice object with an id, for example { "id": "voice_1234" }. Voice cannot be changed during the session once the model has responded with audio at least once. We recommend marin and cedar for best quality.

One of the following:
string
"alloy" | "ash" | "ballad" | 7 more
"alloy"
"ash"
"ballad"
"coral"
"echo"
"sage"
"shimmer"
"verse"
"marin"
"cedar"
ID { id }

Custom voice reference.

id: string

The custom voice ID, e.g. voice_1234.
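
A sketch of supplying this configuration on a response, using the custom-voice variant (only valid while the voice can still be changed, i.e. before the model has produced audio; the voice id is the placeholder from the schema above):

```ts
// Sketch: per-response audio output with a custom voice reference.
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "response.create",
  response: {
    audio: { output: { voice: { id: "voice_1234" } } },
  },
}));
```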

RealtimeResponseCreateMcpTool { server_label, type, allowed_tools, 7 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

One of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP tool is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
One of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
defer_loading?: boolean

Whether this MCP tool is deferred and discovered via tool search.

headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

One of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP tool is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP tool is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.
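
A sketch combining these fields into a response-level MCP tool (the server URL, label, and tool names are placeholders):

```ts
// Sketch: expose a remote MCP server to the model for one response,
// restricted to two named tools with no approval step.
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "response.create",
  response: {
    tools: [{
      type: "mcp",
      server_label: "wiki",
      server_url: "https://example.com/mcp",
      allowed_tools: ["search", "fetch_page"], // or a { read_only, tool_names } filter
      require_approval: "never",
    }],
  },
}));
```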

RealtimeResponseCreateParams { audio, conversation, input, 7 more }

Create a new Realtime response with these parameters.

audio?: RealtimeResponseCreateAudioOutput { output }

Configuration for audio output.

conversation?: (string & {}) | "auto" | "none"

Controls which conversation the response is added to. Currently supports auto and none, with auto as the default value. The auto value means that the contents of the response will be added to the default conversation. Set this to none to create an out-of-band response that will not add items to the default conversation.

One of the following:
(string & {})
"auto" | "none"
"auto"
"none"
input?: Array<ConversationItem>

Input items to include in the prompt for the model. Using this field creates a new context for this Response instead of using the default conversation. An empty array [] will clear the context for this Response. Note that this can include references to items that previously appeared in the session using their id.

One of the following:
RealtimeConversationItemSystemMessage { content, role, type, 3 more }

A system message in a Realtime conversation can be used to provide additional context or instructions to the model. This is similar but distinct from the instruction prompt provided at the start of a conversation, as system messages can be added at any point in the conversation. For major changes to the conversation's behavior, use instructions, but for smaller updates (e.g. "the user is now asking about a different topic"), use system messages.

content: Array<Content>

The content of the message.

text?: string

The text content.

type?: "input_text"

The content type. Always input_text for system messages.

role: "system"

The role of the message sender. Always system.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemUserMessage { content, role, type, 3 more }

A user message item in a Realtime conversation.

content: Array<Content>

The content of the message.

audio?: string

Base64-encoded audio bytes (for input_audio); these will be parsed as the format specified in the session input audio type configuration, which defaults to PCM 16-bit 24kHz mono if not specified.

detail?: "auto" | "low" | "high"

The detail level of the image (for input_image). auto will default to high.

One of the following:
"auto"
"low"
"high"
image_url?: string

Base64-encoded image bytes (for input_image) as a data URI. For example data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA.... Supported formats are PNG and JPEG.

text?: string

The text content (for input_text).

transcript?: string

Transcript of the audio (for input_audio). This is not sent to the model, but will be attached to the message item for reference.

type?: "input_text" | "input_audio" | "input_image"

The content type (input_text, input_audio, or input_image).

One of the following:
"input_text"
"input_audio"
"input_image"
role: "user"

The role of the message sender. Always user.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemAssistantMessage { content, role, type, 3 more }

An assistant message item in a Realtime conversation.

content: Array<Content>

The content of the message.

audio?: string

Base64-encoded audio bytes; these will be parsed as the format specified in the session output audio type configuration, which defaults to PCM 16-bit 24kHz mono if not specified.

text?: string

The text content.

transcript?: string

The transcript of the audio content; this will always be present if the output type is audio.

type?: "output_text" | "output_audio"

The content type, output_text or output_audio depending on the session output_modalities configuration.

One of the following:
"output_text"
"output_audio"
role: "assistant"

The role of the message sender. Always assistant.

type: "message"

The type of the item. Always message.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemFunctionCall { arguments, name, type, 4 more }

A function call item in a Realtime conversation.

arguments: string

The arguments of the function call. This is a JSON-encoded string representing the arguments passed to the function, for example {"arg1": "value1", "arg2": 42}.

name: string

The name of the function being called.

type: "function_call"

The type of the item. Always function_call.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

call_id?: string

The ID of the function call.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeConversationItemFunctionCallOutput { call_id, output, type, 3 more }

A function call output item in a Realtime conversation.

call_id: string

The ID of the function call this output is for.

output: string

The output of the function call; this is free text and can contain any information or simply be empty.

type: "function_call_output"

The type of the item. Always function_call_output.

id?: string

The unique ID of the item. This may be provided by the client or generated by the server.

object?: "realtime.item"

Identifier for the API object being returned - always realtime.item. Optional when creating a new item.

status?: "completed" | "incomplete" | "in_progress"

The status of the item. Has no effect on the conversation.

One of the following:
"completed"
"incomplete"
"in_progress"
RealtimeMcpApprovalResponse { id, approval_request_id, approve, 2 more }

A Realtime item responding to an MCP approval request.

id: string

The unique ID of the approval response.

approval_request_id: string

The ID of the approval request being answered.

approve: boolean

Whether the request was approved.

type: "mcp_approval_response"

The type of the item. Always mcp_approval_response.

reason?: string | null

Optional reason for the decision.

RealtimeMcpListTools { server_label, tools, type, id }

A Realtime item listing tools available on an MCP server.

server_label: string

The label of the MCP server.

tools: Array<Tool>

The tools available on the server.

input_schema: unknown

The JSON schema describing the tool's input.

name: string

The name of the tool.

annotations?: unknown

Additional annotations about the tool.

description?: string | null

The description of the tool.

type: "mcp_list_tools"

The type of the item. Always mcp_list_tools.

id?: string

The unique ID of the list.

RealtimeMcpToolCall { id, arguments, name, 5 more }

A Realtime item representing an invocation of a tool on an MCP server.

id: string

The unique ID of the tool call.

arguments: string

A JSON string of the arguments passed to the tool.

name: string

The name of the tool that was run.

server_label: string

The label of the MCP server running the tool.

type: "mcp_call"

The type of the item. Always mcp_call.

approval_request_id?: string | null

The ID of an associated approval request, if any.

error?: RealtimeMcpProtocolError { code, message, type } | RealtimeMcpToolExecutionError { message, type } | RealtimeMcphttpError { code, message, type } | null

The error from the tool call, if any.

One of the following:
RealtimeMcpProtocolError { code, message, type }
code: number
message: string
type: "protocol_error"
RealtimeMcpToolExecutionError { message, type }
message: string
type: "tool_execution_error"
RealtimeMcphttpError { code, message, type }
code: number
message: string
type: "http_error"
output?: string | null

The output from the tool call.

RealtimeMcpApprovalRequest { id, arguments, name, 2 more }

A Realtime item requesting human approval of a tool invocation.

id: string

The unique ID of the approval request.

arguments: string

A JSON string of arguments for the tool.

name: string

The name of the tool to run.

server_label: string

The label of the MCP server making the request.

type: "mcp_approval_request"

The type of the item. Always mcp_approval_request.

instructions?: string

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance on the desired behavior. Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: number | "inf"

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

One of the following:
number
"inf"
"inf"
metadata?: Metadata | null

Set of up to 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

output_modalities?: Array<"text" | "audio">

The set of modalities the model used to respond; currently the only possible values are ["audio"] and ["text"]. Audio output always includes a text transcript. Setting the output mode to text will disable audio output from the model.

One of the following:
"text"
"audio"
prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

tool_choice?: ToolChoiceOptions | ToolChoiceFunction { name, type } | ToolChoiceMcp { server_label, type, name }

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

One of the following:
ToolChoiceOptions = "none" | "auto" | "required"

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

One of the following:
"none"
"auto"
"required"
ToolChoiceFunction { name, type }

Use this option to force the model to call a specific function.

name: string

The name of the function to call.

type: "function"

For function calling, the type is always function.

ToolChoiceMcp { server_label, type, name }

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: string

The label of the MCP server to use.

type: "mcp"

For MCP tools, the type is always mcp.

name?: string | null

The name of the tool to call on the server.
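
A sketch that forces a specific function for one response (the get_weather tool is hypothetical and must also appear in tools):

```ts
// Sketch: force the model to call get_weather rather than reply directly.
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "response.create",
  response: {
    tool_choice: { type: "function", name: "get_weather" },
    tools: [{
      type: "function",
      name: "get_weather",
      description: "Look up current weather for a city.",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    }],
  },
}));
```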

tools?: Array<RealtimeFunctionTool { description, name, parameters, type } | RealtimeResponseCreateMcpTool { server_label, type, allowed_tools, 7 more } >

Tools available to the model.

One of the following:
RealtimeFunctionTool { description, name, parameters, type }
description?: string

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

name?: string

The name of the function.

parameters?: unknown

Parameters of the function in JSON Schema.

type?: "function"

The type of the tool, i.e. function.

RealtimeResponseCreateMcpTool { server_label, type, allowed_tools, 7 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

One of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP tool is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
One of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
defer_loading?: boolean

Whether this MCP tool is deferred and discovered via tool search.

headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

One of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP tool is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP tool is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.

RealtimeResponseStatus { error, reason, type }

Additional details about the status.

error?: Error { code, type }

A description of the error that caused the response to fail, populated when the status is failed.

code?: string

Error code, if any.

type?: string

The type of error.

reason?: "turn_detected" | "client_cancelled" | "max_output_tokens" | "content_filter"

The reason the Response did not complete. For a cancelled Response, one of turn_detected (the server VAD detected a new start of speech) or client_cancelled (the client sent a cancel event). For an incomplete Response, one of max_output_tokens or content_filter (the server-side safety filter activated and cut off the response).

One of the following:
"turn_detected"
"client_cancelled"
"max_output_tokens"
"content_filter"
type?: "completed" | "cancelled" | "incomplete" | "failed"

The status type that these details describe, corresponding with the status field (completed, cancelled, incomplete, failed).

One of the following:
"completed"
"cancelled"
"incomplete"
"failed"
RealtimeResponseUsage { input_token_details, input_tokens, output_token_details, 2 more }

Usage statistics for the Response; this will correspond to billing. A Realtime API session will maintain a conversation context and append new Items to the Conversation, thus output from previous turns (text and audio tokens) will become the input for later turns.

input_token_details?: RealtimeResponseUsageInputTokenDetails { audio_tokens, cached_tokens, cached_tokens_details, 2 more }

Details about the input tokens used in the Response. Cached tokens are tokens from previous turns in the conversation that are included as context for the current response. Cached tokens here are counted as a subset of input tokens, meaning input tokens will include cached and uncached tokens.

input_tokens?: number

The number of input tokens used in the Response, including text and audio tokens.

output_token_details?: RealtimeResponseUsageOutputTokenDetails { audio_tokens, text_tokens }

Details about the output tokens used in the Response.

output_tokens?: number

The number of output tokens sent in the Response, including text and audio tokens.

total_tokens?: number

The total number of tokens in the Response including input and output text and audio tokens.
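
A sketch of reading these fields as responses complete, assuming the usage object arrives on the standard response.done server event:

```ts
// Sketch: log per-response token usage, separating cached input tokens
// (a subset of input_tokens) to monitor cache effectiveness.
declare const ws: WebSocket;

ws.addEventListener("message", (e) => {
  const event = JSON.parse(e.data as string);
  if (event.type === "response.done") {
    const u = event.response?.usage ?? {};
    const cached = u.input_token_details?.cached_tokens ?? 0;
    console.log(
      `in=${u.input_tokens} (cached=${cached}) out=${u.output_tokens} total=${u.total_tokens}`,
    );
  }
});
```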

RealtimeResponseUsageInputTokenDetails { audio_tokens, cached_tokens, cached_tokens_details, 2 more }

Details about the input tokens used in the Response. Cached tokens are tokens from previous turns in the conversation that are included as context for the current response. Cached tokens here are counted as a subset of input tokens, meaning input tokens will include cached and uncached tokens.

audio_tokens?: number

The number of audio tokens used as input for the Response.

cached_tokens?: number

The number of cached tokens used as input for the Response.

cached_tokens_details?: CachedTokensDetails { audio_tokens, image_tokens, text_tokens }

Details about the cached tokens used as input for the Response.

audio_tokens?: number

The number of cached audio tokens used as input for the Response.

image_tokens?: number

The number of cached image tokens used as input for the Response.

text_tokens?: number

The number of cached text tokens used as input for the Response.

image_tokens?: number

The number of image tokens used as input for the Response.

text_tokens?: number

The number of text tokens used as input for the Response.

RealtimeResponseUsageOutputTokenDetails { audio_tokens, text_tokens }

Details about the output tokens used in the Response.

audio_tokens?: number

The number of audio tokens used in the Response.

text_tokens?: number

The number of text tokens used in the Response.

RealtimeServerEvent = ConversationCreatedEvent { conversation, event_id, type } | ConversationItemCreatedEvent { event_id, item, type, previous_item_id } | ConversationItemDeletedEvent { event_id, item_id, type } | 43 more

A realtime server event.

One of the following:
ConversationCreatedEvent { conversation, event_id, type }

Returned when a conversation is created. Emitted right after session creation.

conversation: Conversation { id, object }

The conversation resource.

id?: string

The unique ID of the conversation.

object?: "realtime.conversation"

The object type, must be realtime.conversation.

event_id: string

The unique ID of the server event.

type: "conversation.created"

The event type, must be conversation.created.

ConversationItemCreatedEvent { event_id, item, type, previous_item_id }

Returned when a conversation item is created. There are several scenarios that produce this event:

  • The server is generating a Response, which if successful will produce either one or two Items, which will be of type message (role assistant) or type function_call.
  • The input audio buffer has been committed, either by the client or the server (in server_vad mode). The server will take the content of the input audio buffer and add it to a new user message Item.
  • The client has sent a conversation.item.create event to add a new Item to the Conversation.
event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

type: "conversation.item.created"

The event type, must be conversation.item.created.

previous_item_id?: string | null

The ID of the preceding item in the Conversation context; this allows the client to understand the order of the conversation. Can be null if the item has no predecessor.

ConversationItemDeletedEvent { event_id, item_id, type }

Returned when an item in the conversation is deleted by the client with a conversation.item.delete event. This event is used to synchronize the server's understanding of the conversation history with the client's view.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item that was deleted.

type: "conversation.item.deleted"

The event type, must be conversation.item.deleted.

ConversationItemInputAudioTranscriptionCompletedEvent { content_index, event_id, item_id, 4 more }

This event is the output of audio transcription for user audio written to the input audio buffer. Transcription begins when the input audio buffer is committed by the client or server (when VAD is enabled). Transcription runs asynchronously with Response creation, so this event may come before or after the Response events.

Realtime API models accept audio natively, and thus input transcription is a separate process run on a separate ASR (Automatic Speech Recognition) model. The transcript may diverge somewhat from the model's interpretation, and should be treated as a rough guide.

content_index: number

The index of the content part containing the audio.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item containing the audio that is being transcribed.

transcript: string

The transcribed text.

type: "conversation.item.input_audio_transcription.completed"

The event type, must be conversation.item.input_audio_transcription.completed.

usage: TranscriptTextUsageTokens { input_tokens, output_tokens, total_tokens, 2 more } | TranscriptTextUsageDuration { seconds, type }

Usage statistics for the transcription; this is billed according to the ASR model's pricing rather than the realtime model's pricing.

One of the following:
TranscriptTextUsageTokens { input_tokens, output_tokens, total_tokens, 2 more }

Usage statistics for models billed by token usage.

input_tokens: number

Number of input tokens billed for this request.

output_tokens: number

Number of output tokens generated.

total_tokens: number

Total number of tokens used (input + output).

type: "tokens"

The type of the usage object. Always tokens for this variant.

input_token_details?: InputTokenDetails { audio_tokens, text_tokens }

Details about the input tokens billed for this request.

audio_tokens?: number

Number of audio tokens billed for this request.

text_tokens?: number

Number of text tokens billed for this request.

TranscriptTextUsageDuration { seconds, type }

Usage statistics for models billed by audio input duration.

seconds: number

Duration of the input audio in seconds.

type: "duration"

The type of the usage object. Always duration for this variant.

logprobs?: Array<LogProbProperties { token, bytes, logprob } > | null

The log probabilities of the transcription.

token: string

The token that was used to generate the log probability.

bytes: Array<number>

The bytes that were used to generate the log probability.

logprob: number

The log probability of the token.
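
Because transcription runs asynchronously, a client typically keys final transcripts by item_id rather than assuming ordering relative to Response events. A sketch:

```ts
// Sketch: store final input transcripts by item id as they arrive.
declare const ws: WebSocket;
const transcripts = new Map<string, string>();

ws.addEventListener("message", (e) => {
  const event = JSON.parse(e.data as string);
  if (event.type === "conversation.item.input_audio_transcription.completed") {
    transcripts.set(event.item_id, event.transcript);
  }
});
```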

ConversationItemInputAudioTranscriptionDeltaEvent { event_id, item_id, type, 3 more }

Returned when the text value of an input audio transcription content part is updated with incremental transcription results.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item containing the audio that is being transcribed.

type: "conversation.item.input_audio_transcription.delta"

The event type, must be conversation.item.input_audio_transcription.delta.

content_index?: number

The index of the content part in the item's content array.

delta?: string

The text delta.

logprobs?: Array<LogProbProperties { token, bytes, logprob } > | null

The log probabilities of the transcription. These can be enabled by configuring the session with "include": ["item.input_audio_transcription.logprobs"]. Each entry in the array corresponds to the log probability of the token selected for this chunk of transcription. This can help identify whether there were multiple valid options for a given chunk of transcription.

token: string

The token that was used to generate the log probability.

bytes: Array<number>

The bytes that were used to generate the log probability.

logprob: number

The log probability of the token.

ConversationItemInputAudioTranscriptionFailedEvent { content_index, error, event_id, 2 more }

Returned when input audio transcription is configured and a transcription request for a user message failed. These events are separate from other error events so that the client can identify the related Item.

content_index: number

The index of the content part containing the audio.

error: Error { code, message, param, type }

Details of the transcription error.

code?: string

Error code, if any.

message?: string

A human-readable error message.

param?: string

Parameter related to the error, if any.

type?: string

The type of error.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the user message item.

type: "conversation.item.input_audio_transcription.failed"

The event type, must be conversation.item.input_audio_transcription.failed.

ConversationItemRetrieved { event_id, item, type }

Returned when a conversation item is retrieved with conversation.item.retrieve. This is provided as a way to fetch the server's representation of an item, for example to get access to the post-processed audio data after noise cancellation and VAD. It includes the full content of the Item, including audio data.

event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

type: "conversation.item.retrieved"

The event type, must be conversation.item.retrieved.

ConversationItemTruncatedEvent { audio_end_ms, content_index, event_id, 2 more }

Returned when an earlier assistant audio message item is truncated by the client with a conversation.item.truncate event. This event is used to synchronize the server's understanding of the audio with the client's playback.

This action will truncate the audio and remove the server-side text transcript to ensure there is no text in the context that hasn't been heard by the user.

audio_end_ms: number

The duration up to which the audio was truncated, in milliseconds.

content_index: number

The index of the content part that was truncated.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the assistant message item that was truncated.

type: "conversation.item.truncated"

The event type, must be conversation.item.truncated.
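
A sketch of the client-side conversation.item.truncate event that produces this server event (the item ID and playback position are hypothetical):

```ts
// Sketch: tell the server the user only heard the first 1500 ms of the
// assistant's audio, so the unheard transcript is dropped from context.
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "conversation.item.truncate",
  item_id: "item_assistant_01",
  content_index: 0,
  audio_end_ms: 1500,
}));
```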

RealtimeErrorEvent { error, event_id, type }

Returned when an error occurs, which could be a client problem or a server problem. Most errors are recoverable and the session will stay open; we recommend that implementers monitor and log error messages by default.

error: RealtimeError { message, type, code, 2 more }

Details of the error.

event_id: string

The unique ID of the server event.

type: "error"

The event type, must be error.
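A minimal sketch of the recommended default behavior, logging every error event while leaving the session open:

```ts
// Sketch only: log all error events by default. Assumes `ws` is a connected
// Realtime API WebSocket; only fields documented above are read.
declare const ws: WebSocket;

ws.addEventListener("message", (msg) => {
  const event = JSON.parse(msg.data);
  if (event.type === "error") {
    const { type, code, message } = event.error;
    console.error(`realtime error (${type}${code ? `, ${code}` : ""}): ${message}`);
  }
});
```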

InputAudioBufferClearedEvent { event_id, type }

Returned when the input audio buffer is cleared by the client with an input_audio_buffer.clear event.

event_id: string

The unique ID of the server event.

type: "input_audio_buffer.cleared"

The event type, must be input_audio_buffer.cleared.

InputAudioBufferCommittedEvent { event_id, item_id, type, previous_item_id }

Returned when an input audio buffer is committed, either by the client or automatically in server VAD mode. The item_id property is the ID of the user message item that will be created, thus a conversation.item.created event will also be sent to the client.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the user message item that will be created.

type: "input_audio_buffer.committed"

The event type, must be input_audio_buffer.committed.

previous_item_id?: string | null

The ID of the preceding item after which the new item will be inserted. Can be null if the item has no predecessor.

InputAudioBufferDtmfEventReceivedEvent { event, received_at, type }

SIP Only: Returned when a DTMF event is received. A DTMF event is a message that represents a telephone keypad press (0–9, *, #, A–D). The event property is the key that the user pressed, and received_at is the UTC Unix timestamp at which the server received the event.

event: string

The telephone keypad key that was pressed by the user.

received_at: number

The UTC Unix timestamp at which the DTMF event was received by the server.

type: "input_audio_buffer.dtmf_event_received"

The event type, must be input_audio_buffer.dtmf_event_received.
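For example, a simple IVR-style digit collector might look like the following hedged sketch (the `#` "submit" convention is this example's choice, not part of the API):

```ts
// Sketch only: collect keypad digits from a SIP caller until `#` is pressed.
declare const ws: WebSocket;

let digits = "";
ws.addEventListener("message", (msg) => {
  const event = JSON.parse(msg.data);
  if (event.type === "input_audio_buffer.dtmf_event_received") {
    if (event.event === "#") {
      // received_at is a Unix timestamp; seconds are assumed here.
      console.log(`caller entered "${digits}" (last key at ${event.received_at})`);
      digits = "";
    } else {
      digits += event.event;
    }
  }
});
```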

InputAudioBufferSpeechStartedEvent { audio_start_ms, event_id, item_id, type }

Sent by the server when in server_vad mode to indicate that speech has been detected in the audio buffer. This can happen any time audio is added to the buffer (unless speech is already detected). The client may want to use this event to interrupt audio playback or provide visual feedback to the user.

The client should expect to receive a input_audio_buffer.speech_stopped event when speech stops. The item_id property is the ID of the user message item that will be created when speech stops and will also be included in the input_audio_buffer.speech_stopped event (unless the client manually commits the audio buffer during VAD activation).

audio_start_ms: number

Milliseconds from the start of all audio written to the buffer during the session when speech was first detected. This will correspond to the beginning of audio sent to the model, and thus includes the prefix_padding_ms configured in the Session.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the user message item that will be created when speech stops.

type: "input_audio_buffer.speech_started"

The event type, must be input_audio_buffer.speech_started.
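A hedged sketch of the interruption pattern mentioned above: stop local playback the moment speech is detected. The `player` handle is hypothetical.

```ts
// Sketch only: cut assistant audio as soon as the user starts speaking.
declare const ws: WebSocket;
declare const player: { stop(): void }; // hypothetical audio-output handle

ws.addEventListener("message", (msg) => {
  const event = JSON.parse(msg.data);
  if (event.type === "input_audio_buffer.speech_started") {
    player.stop();
    console.log(`speech at ${event.audio_start_ms}ms; upcoming item ${event.item_id}`);
  }
});
```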

InputAudioBufferSpeechStoppedEvent { audio_end_ms, event_id, item_id, type }

Returned in server_vad mode when the server detects the end of speech in the audio buffer. The server will also send a conversation.item.created event with the user message item that is created from the audio buffer.

audio_end_ms: number

Milliseconds since the session started when speech stopped. This will correspond to the end of audio sent to the model, and thus includes the min_silence_duration_ms configured in the Session.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the user message item that will be created.

type: "input_audio_buffer.speech_stopped"

The event type, must be input_audio_buffer.speech_stopped.

RateLimitsUpdatedEvent { event_id, rate_limits, type }

Emitted at the beginning of a Response to indicate the updated rate limits. When a Response is created, some tokens are "reserved" for the output; the rate limits shown here reflect that reservation, which is then adjusted once the Response is completed.

event_id: string

The unique ID of the server event.

rate_limits: Array<RateLimit>

List of rate limit information.

limit?: number

The maximum allowed value for the rate limit.

name?: "requests" | "tokens"

The name of the rate limit (requests, tokens).

One of the following:
"requests"
"tokens"
remaining?: number

The remaining value before the limit is reached.

reset_seconds?: number

Seconds until the rate limit resets.

type: "rate_limits.updated"

The event type, must be rate_limits.updated.
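As an illustration, a client could watch these events to warn before a limit is hit; the 10% threshold below is arbitrary, not API guidance.

```ts
// Sketch only: warn when any rate limit drops below 10% remaining.
declare const ws: WebSocket;

ws.addEventListener("message", (msg) => {
  const event = JSON.parse(msg.data);
  if (event.type === "rate_limits.updated") {
    for (const rl of event.rate_limits) {
      if (rl.limit && rl.remaining !== undefined && rl.remaining / rl.limit < 0.1) {
        console.warn(`${rl.name} nearly exhausted; resets in ${rl.reset_seconds}s`);
      }
    }
  }
});
```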

ResponseAudioDeltaEvent { content_index, delta, event_id, 4 more }

Returned when the model-generated audio is updated.

content_index: number

The index of the content part in the item's content array.

delta: string

Base64-encoded audio data delta.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.output_audio.delta"

The event type, must be response.output_audio.delta.
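A hedged sketch of consuming these deltas: decode the base64 payload and hand the raw PCM to a playback queue. `enqueuePcm` is hypothetical; the 24kHz 16-bit mono default comes from the session audio format documentation below.

```ts
// Sketch only: decode base64 audio deltas into PCM bytes for playback.
declare const ws: WebSocket;
declare function enqueuePcm(bytes: Uint8Array): void; // hypothetical player queue

ws.addEventListener("message", (msg) => {
  const event = JSON.parse(msg.data);
  if (event.type === "response.output_audio.delta") {
    const bytes = Uint8Array.from(atob(event.delta), (c) => c.charCodeAt(0));
    enqueuePcm(bytes); // pcm16 defaults to 24kHz, mono, little-endian
  }
});
```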

ResponseAudioDoneEvent { content_index, event_id, item_id, 3 more }

Returned when the model-generated audio is done. Also emitted when a Response is interrupted, incomplete, or cancelled.

content_index: number

The index of the content part in the item's content array.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.output_audio.done"

The event type, must be response.output_audio.done.

ResponseAudioTranscriptDeltaEvent { content_index, delta, event_id, 4 more }

Returned when the model-generated transcription of audio output is updated.

content_index: number

The index of the content part in the item's content array.

delta: string

The transcript delta.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.output_audio_transcript.delta"

The event type, must be response.output_audio_transcript.delta.

ResponseAudioTranscriptDoneEvent { content_index, event_id, item_id, 4 more }

Returned when the model-generated transcription of audio output is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.

content_index: number

The index of the content part in the item's content array.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

transcript: string

The final transcript of the audio.

type: "response.output_audio_transcript.done"

The event type, must be response.output_audio_transcript.done.

ResponseContentPartAddedEvent { content_index, event_id, item_id, 4 more }

Returned when a new content part is added to an assistant message item during response generation.

content_index: number

The index of the content part in the item's content array.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item to which the content part was added.

output_index: number

The index of the output item in the response.

part: Part { audio, text, transcript, type }

The content part that was added.

audio?: string

Base64-encoded audio data (if type is "audio").

text?: string

The text content (if type is "text").

transcript?: string

The transcript of the audio (if type is "audio").

type?: "text" | "audio"

The content type ("text", "audio").

One of the following:
"text"
"audio"
response_id: string

The ID of the response.

type: "response.content_part.added"

The event type, must be response.content_part.added.

ResponseContentPartDoneEvent { content_index, event_id, item_id, 4 more }

Returned when a content part is done streaming in an assistant message item. Also emitted when a Response is interrupted, incomplete, or cancelled.

content_index: number

The index of the content part in the item's content array.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

part: Part { audio, text, transcript, type }

The content part that is done.

audio?: string

Base64-encoded audio data (if type is "audio").

text?: string

The text content (if type is "text").

transcript?: string

The transcript of the audio (if type is "audio").

type?: "text" | "audio"

The content type ("text", "audio").

One of the following:
"text"
"audio"
response_id: string

The ID of the response.

type: "response.content_part.done"

The event type, must be response.content_part.done.

ResponseCreatedEvent { event_id, response, type }

Returned when a new Response is created. The first event of response creation, where the response is in an initial state of in_progress.

event_id: string

The unique ID of the server event.

response: RealtimeResponse { id, audio, conversation_id, 8 more }

The response resource.

type: "response.created"

The event type, must be response.created.

ResponseDoneEvent { event_id, response, type }

Returned when a Response is done streaming. Always emitted, no matter the final state. The Response object included in the response.done event will include all output Items in the Response but will omit the raw audio data.

Clients should check the status field of the Response to determine if it was successful (completed) or if there was another outcome: cancelled, failed, or incomplete.

A response will contain all output items that were generated during the response, excluding any audio content.

event_id: string

The unique ID of the server event.

response: RealtimeResponse { id, audio, conversation_id, 8 more }

The response resource.

type: "response.done"

The event type, must be response.done.
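A minimal sketch of the status check described above:

```ts
// Sketch only: branch on the final Response status instead of assuming success.
declare const ws: WebSocket;

ws.addEventListener("message", (msg) => {
  const event = JSON.parse(msg.data);
  if (event.type === "response.done") {
    switch (event.response.status) {
      case "completed":
        console.log("response finished normally");
        break;
      case "cancelled":
      case "failed":
      case "incomplete":
        console.warn(`response ended early: ${event.response.status}`);
        break;
    }
  }
});
```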

ResponseFunctionCallArgumentsDeltaEvent { call_id, delta, event_id, 4 more }

Returned when the model-generated function call arguments are updated.

call_id: string

The ID of the function call.

delta: string

The arguments delta as a JSON string.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the function call item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.function_call_arguments.delta"

The event type, must be response.function_call_arguments.delta.

ResponseFunctionCallArgumentsDoneEvent { arguments, call_id, event_id, 5 more }

Returned when the model-generated function call arguments are done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.

arguments: string

The final arguments as a JSON string.

call_id: string

The ID of the function call.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the function call item.

name: string

The name of the function that was called.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.function_call_arguments.done"

The event type, must be response.function_call_arguments.done.
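A hedged sketch of a full tool-call round trip: parse the finished arguments, run the matching local function, return the result, and request a follow-up response. The function_call_output item shape and the response.create client event are assumed from the wider Realtime API rather than defined in this section.

```ts
// Sketch only: complete a function call and ask the model to continue.
declare const ws: WebSocket;
// Hypothetical dispatcher from tool name to local implementation.
declare function callLocalTool(name: string, args: unknown): Promise<unknown>;

ws.addEventListener("message", async (msg) => {
  const event = JSON.parse(msg.data);
  if (event.type === "response.function_call_arguments.done") {
    const args = JSON.parse(event.arguments); // final JSON-encoded arguments
    const result = await callLocalTool(event.name, args);
    ws.send(JSON.stringify({
      type: "conversation.item.create",
      item: {
        type: "function_call_output", // assumed item type for tool results
        call_id: event.call_id,
        output: JSON.stringify(result),
      },
    }));
    ws.send(JSON.stringify({ type: "response.create" })); // request a follow-up turn
  }
});
```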

ResponseOutputItemAddedEvent { event_id, item, output_index, 2 more }

Returned when a new Item is created during Response generation.

event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

output_index: number

The index of the output item in the Response.

response_id: string

The ID of the Response to which the item belongs.

type: "response.output_item.added"

The event type, must be response.output_item.added.

ResponseOutputItemDoneEvent { event_id, item, output_index, 2 more }

Returned when an Item is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.

event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

output_index: number

The index of the output item in the Response.

response_id: string

The ID of the Response to which the item belongs.

type: "response.output_item.done"

The event type, must be response.output_item.done.

ResponseTextDeltaEvent { content_index, delta, event_id, 4 more }

Returned when the text value of an "output_text" content part is updated.

content_index: number

The index of the content part in the item's content array.

delta: string

The text delta.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.output_text.delta"

The event type, must be response.output_text.delta.

ResponseTextDoneEvent { content_index, event_id, item_id, 4 more }

Returned when the text value of an "output_text" content part is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.

content_index: number

The index of the content part in the item's content array.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

text: string

The final text content.

type: "response.output_text.done"

The event type, must be response.output_text.done.

SessionCreatedEvent { event_id, session, type }

Returned when a Session is created. Emitted automatically when a new connection is established as the first server event. This event will contain the default Session configuration.

event_id: string

The unique ID of the server event.

session: RealtimeSessionCreateRequest { type, audio, include, 9 more } | RealtimeTranscriptionSessionCreateRequest { type, audio, include }

The session configuration.

One of the following:
RealtimeSessionCreateRequest { type, audio, include, 9 more }

Realtime session object configuration.

type: "realtime"

The type of session to create. Always realtime for the Realtime API.

audio?: RealtimeAudioConfig { input, output }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

instructions?: string

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: number | "inf"

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

One of the following:
number
"inf"
"inf"
model?: (string & {}) | "gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more

The Realtime model used for this session.

One of the following:
(string & {})
"gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more
"gpt-realtime"
"gpt-realtime-1.5"
"gpt-realtime-2025-08-28"
"gpt-4o-realtime-preview"
"gpt-4o-realtime-preview-2024-10-01"
"gpt-4o-realtime-preview-2024-12-17"
"gpt-4o-realtime-preview-2025-06-03"
"gpt-4o-mini-realtime-preview"
"gpt-4o-mini-realtime-preview-2024-12-17"
"gpt-realtime-mini"
"gpt-realtime-mini-2025-10-06"
"gpt-realtime-mini-2025-12-15"
"gpt-audio-1.5"
"gpt-audio-mini"
"gpt-audio-mini-2025-10-06"
"gpt-audio-mini-2025-12-15"
output_modalities?: Array<"text" | "audio">

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

One of the following:
"text"
"audio"
prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

tool_choice?: RealtimeToolChoiceConfig

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

tools?: RealtimeToolsConfig

Tools available to the model.

tracing?: RealtimeTracingConfig | null

Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

truncation?: RealtimeTruncation

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.

RealtimeTranscriptionSessionCreateRequest { type, audio, include }

Realtime transcription session object configuration.

type: "transcription"

The type of session to create. Always transcription for transcription sessions.

audio?: RealtimeTranscriptionSessionAudio { input }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

type: "session.created"

The event type, must be session.created.

SessionUpdatedEvent { event_id, session, type }

Returned when a session is updated with a session.update event, unless there is an error.

event_id: string

The unique ID of the server event.

session: RealtimeSessionCreateRequest { type, audio, include, 9 more } | RealtimeTranscriptionSessionCreateRequest { type, audio, include }

The session configuration.

One of the following:
RealtimeSessionCreateRequest { type, audio, include, 9 more }

Realtime session object configuration.

type: "realtime"

The type of session to create. Always realtime for the Realtime API.

audio?: RealtimeAudioConfig { input, output }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

instructions?: string

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: number | "inf"

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

One of the following:
number
"inf"
"inf"
model?: (string & {}) | "gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more

The Realtime model used for this session.

One of the following:
(string & {})
"gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more
"gpt-realtime"
"gpt-realtime-1.5"
"gpt-realtime-2025-08-28"
"gpt-4o-realtime-preview"
"gpt-4o-realtime-preview-2024-10-01"
"gpt-4o-realtime-preview-2024-12-17"
"gpt-4o-realtime-preview-2025-06-03"
"gpt-4o-mini-realtime-preview"
"gpt-4o-mini-realtime-preview-2024-12-17"
"gpt-realtime-mini"
"gpt-realtime-mini-2025-10-06"
"gpt-realtime-mini-2025-12-15"
"gpt-audio-1.5"
"gpt-audio-mini"
"gpt-audio-mini-2025-10-06"
"gpt-audio-mini-2025-12-15"
output_modalities?: Array<"text" | "audio">

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

One of the following:
"text"
"audio"
prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

tool_choice?: RealtimeToolChoiceConfig

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

tools?: RealtimeToolsConfig

Tools available to the model.

tracing?: RealtimeTracingConfig | null

Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

truncation?: RealtimeTruncation

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.

RealtimeTranscriptionSessionCreateRequest { type, audio, include }

Realtime transcription session object configuration.

type: "transcription"

The type of session to create. Always transcription for transcription sessions.

audio?: RealtimeTranscriptionSessionAudio { input }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

type: "session.updated"

The event type, must be session.updated.
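For reference, here is a minimal session.update payload using only fields documented above; the server answers with the session.updated event (or an error event if the update is rejected):

```ts
// Sketch only: adjust instructions and output limits mid-session.
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "session.update",
  session: {
    type: "realtime",
    instructions: "Be extremely succinct.",
    output_modalities: ["audio"],
    max_output_tokens: 1024,
  },
}));
```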

OutputAudioBufferStarted { event_id, response_id, type }

WebRTC/SIP Only: Emitted when the server begins streaming audio to the client. This event is emitted after an audio content part has been added (response.content_part.added) to the response. Learn more.

event_id: string

The unique ID of the server event.

response_id: string

The unique ID of the response that produced the audio.

type: "output_audio_buffer.started"

The event type, must be output_audio_buffer.started.

OutputAudioBufferStopped { event_id, response_id, type }

WebRTC/SIP Only: Emitted when the output audio buffer has been completely drained on the server, and no more audio is forthcoming. This event is emitted after the full response data has been sent to the client (response.done). Learn more.

event_id: string

The unique ID of the server event.

response_id: string

The unique ID of the response that produced the audio.

type: "output_audio_buffer.stopped"

The event type, must be output_audio_buffer.stopped.

OutputAudioBufferCleared { event_id, response_id, type }

WebRTC/SIP Only: Emitted when the output audio buffer is cleared. This happens either in VAD mode when the user has interrupted (input_audio_buffer.speech_started), or when the client has emitted the output_audio_buffer.clear event to manually cut off the current audio response. Learn more.

event_id: string

The unique ID of the server event.

response_id: string

The unique ID of the response that produced the audio.

type: "output_audio_buffer.cleared"

The event type, must be output_audio_buffer.cleared.

ConversationItemAdded { event_id, item, type, previous_item_id }

Sent by the server when an Item is added to the default Conversation. This can happen in several cases:

  • When the client sends a conversation.item.create event.
  • When the input audio buffer is committed. In this case the item will be a user message containing the audio from the buffer.
  • When the model is generating a Response. In this case the conversation.item.added event will be sent when the model starts generating a specific Item, and thus it will not yet have any content (and status will be in_progress).

Unless the model is still generating the Item, the event will include its full content except for audio data, which can be retrieved separately with a conversation.item.retrieve event if necessary.

event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

type: "conversation.item.added"

The event type, must be conversation.item.added.

previous_item_id?: string | null

The ID of the item that precedes this one, if any. This is used to maintain ordering when items are inserted.

ConversationItemDone { event_id, item, type, previous_item_id }

Returned when a conversation item is finalized.

The event will include the full content of the Item except for audio data, which can be retrieved separately with a conversation.item.retrieve event if needed.

event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

type: "conversation.item.done"

The event type, must be conversation.item.done.

previous_item_id?: string | null

The ID of the item that precedes this one, if any. This is used to maintain ordering when items are inserted.

InputAudioBufferTimeoutTriggered { audio_end_ms, audio_start_ms, event_id, 2 more }

Returned when the Server VAD timeout is triggered for the input audio buffer. This is configured with idle_timeout_ms in the turn_detection settings of the session, and it indicates that there hasn't been any speech detected for the configured duration.

The audio_start_ms and audio_end_ms fields indicate the segment of audio after the last model response up to the triggering time, as offsets from the beginning of audio written to the input audio buffer. In other words, they demarcate the segment of audio that was silent, and the difference between the start and end values will roughly match the configured timeout.

The empty audio will be committed to the conversation as an input_audio item (there will be an input_audio_buffer.committed event) and a model response will be generated. There may be speech that didn't trigger VAD but is still detected by the model, so the model may respond with something relevant to the conversation or a prompt to continue speaking.

audio_end_ms: number

Millisecond offset of audio written to the input audio buffer at the time the timeout was triggered.

audio_start_ms: number

Millisecond offset of audio written to the input audio buffer that was after the playback time of the last model response.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item associated with this segment.

type: "input_audio_buffer.timeout_triggered"

The event type, must be input_audio_buffer.timeout_triggered.
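A hedged sketch of enabling this timeout. The nesting of turn_detection under audio.input mirrors the transcription-session audio input config later in this reference and should be treated as an assumption here.

```ts
// Sketch only: nudge a silent caller after 10 seconds of server-VAD silence.
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "session.update",
  session: {
    type: "realtime",
    audio: {
      input: {
        turn_detection: {
          type: "server_vad",
          idle_timeout_ms: 10_000, // allowed range is 5000-30000
        },
      },
    },
  },
}));
```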

ConversationItemInputAudioTranscriptionSegment { id, content_index, end, 6 more }

Returned when an input audio transcription segment is identified for an item.

id: string

The segment identifier.

content_index: number

The index of the input audio content part within the item.

end: number

End time of the segment in seconds.

format: float
event_id: string

The unique ID of the server event.

item_id: string

The ID of the item containing the input audio content.

speaker: string

The detected speaker label for this segment.

start: number

Start time of the segment in seconds.

format: float
text: string

The text for this segment.

type: "conversation.item.input_audio_transcription.segment"

The event type, must be conversation.item.input_audio_transcription.segment.
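As a sketch, these segments can be folded into a speaker-attributed transcript (this requires a diarization-capable transcription model):

```ts
// Sketch only: accumulate diarized segments into a simple transcript.
declare const ws: WebSocket;

interface Segment { speaker: string; start: number; end: number; text: string }
const transcript: Segment[] = [];

ws.addEventListener("message", (msg) => {
  const event = JSON.parse(msg.data);
  if (event.type === "conversation.item.input_audio_transcription.segment") {
    transcript.push({
      speaker: event.speaker,
      start: event.start, // seconds
      end: event.end,     // seconds
      text: event.text,
    });
  }
});
```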

McpListToolsInProgress { event_id, item_id, type }

Returned when listing MCP tools is in progress for an item.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP list tools item.

type: "mcp_list_tools.in_progress"

The event type, must be mcp_list_tools.in_progress.

McpListToolsCompleted { event_id, item_id, type }

Returned when listing MCP tools has completed for an item.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP list tools item.

type: "mcp_list_tools.completed"

The event type, must be mcp_list_tools.completed.

McpListToolsFailed { event_id, item_id, type }

Returned when listing MCP tools has failed for an item.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP list tools item.

type: "mcp_list_tools.failed"

The event type, must be mcp_list_tools.failed.

ResponseMcpCallArgumentsDelta { delta, event_id, item_id, 4 more }

Returned when MCP tool call arguments are updated during response generation.

delta: string

The JSON-encoded arguments delta.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP tool call item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.mcp_call_arguments.delta"

The event type, must be response.mcp_call_arguments.delta.

obfuscation?: string | null

If present, indicates the delta text was obfuscated.

ResponseMcpCallArgumentsDone { arguments, event_id, item_id, 3 more }

Returned when MCP tool call arguments are finalized during response generation.

arguments: string

The final JSON-encoded arguments string.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP tool call item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.mcp_call_arguments.done"

The event type, must be response.mcp_call_arguments.done.

ResponseMcpCallInProgress { event_id, item_id, output_index, type }

Returned when an MCP tool call has started and is in progress.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP tool call item.

output_index: number

The index of the output item in the response.

type: "response.mcp_call.in_progress"

The event type, must be response.mcp_call.in_progress.

ResponseMcpCallCompleted { event_id, item_id, output_index, type }

Returned when an MCP tool call has completed successfully.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP tool call item.

output_index: number

The index of the output item in the response.

type: "response.mcp_call.completed"

The event type, must be response.mcp_call.completed.

ResponseMcpCallFailed { event_id, item_id, output_index, type }

Returned when an MCP tool call has failed.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP tool call item.

output_index: number

The index of the output item in the response.

type: "response.mcp_call.failed"

The event type, must be response.mcp_call.failed.

RealtimeSession { id, expires_at, include, 17 more }

Realtime session object for the beta interface.

id?: string

Unique identifier for the session that looks like sess_1234567890abcdef.

expires_at?: number

Expiration timestamp for the session, in seconds since epoch.

include?: Array<"item.input_audio_transcription.logprobs"> | null

Additional fields to include in server outputs.

  • item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
input_audio_format?: "pcm16" | "g711_ulaw" | "g711_alaw"

The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate, single channel (mono), and little-endian byte order.

One of the following:
"pcm16"
"g711_ulaw"
"g711_alaw"
input_audio_noise_reduction?: InputAudioNoiseReduction { type }

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

type?: "near_field" | "far_field"

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

input_audio_transcription?: AudioTranscription { language, model, prompt } | null

Configuration for input audio transcription. Defaults to off; once enabled, it can be set to null to turn it off again. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

instructions?: string

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_response_output_tokens?: number | "inf"

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

One of the following:
number
"inf"
"inf"
modalities?: Array<"text" | "audio">

The set of modalities the model can respond with. To disable audio, set this to ["text"].

One of the following:
"text"
"audio"
model?: (string & {}) | "gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more

The Realtime model used for this session.

One of the following:
(string & {})
"gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more
"gpt-realtime"
"gpt-realtime-1.5"
"gpt-realtime-2025-08-28"
"gpt-4o-realtime-preview"
"gpt-4o-realtime-preview-2024-10-01"
"gpt-4o-realtime-preview-2024-12-17"
"gpt-4o-realtime-preview-2025-06-03"
"gpt-4o-mini-realtime-preview"
"gpt-4o-mini-realtime-preview-2024-12-17"
"gpt-realtime-mini"
"gpt-realtime-mini-2025-10-06"
"gpt-realtime-mini-2025-12-15"
"gpt-audio-1.5"
"gpt-audio-mini"
"gpt-audio-mini-2025-10-06"
"gpt-audio-mini-2025-12-15"
object?: "realtime.session"

The object type. Always realtime.session.

output_audio_format?: "pcm16" | "g711_ulaw" | "g711_alaw"

The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, output audio is sampled at a rate of 24kHz.

One of the following:
"pcm16"
"g711_ulaw"
"g711_alaw"
prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

speed?: number

The speed of the model's spoken response. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.

maximum: 1.5
minimum: 0.25
temperature?: number

Sampling temperature for the model, limited to [0.6, 1.2]. For audio models a temperature of 0.8 is highly recommended for best performance.

tool_choice?: string

How the model chooses tools. Options are auto, none, required, or specify a function.

tools?: Array<RealtimeFunctionTool { description, name, parameters, type } >

Tools (functions) available to the model.

description?: string

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

name?: string

The name of the function.

parameters?: unknown

Parameters of the function in JSON Schema.

type?: "function"

The type of the tool, i.e. function.

tracing?: "auto" | TracingConfiguration { group_id, metadata, workflow_name } | null

Configuration options for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

One of the following:
"auto"
"auto"
TracingConfiguration { group_id, metadata, workflow_name }

Granular configuration for tracing.

group_id?: string

The group id to attach to this trace to enable filtering and grouping in the traces dashboard.

metadata?: unknown

The arbitrary metadata to attach to this trace to enable filtering in the traces dashboard.

workflow_name?: string

The name of the workflow to attach to this trace. This is used to name the trace in the traces dashboard.

turn_detection?: ServerVad { type, create_response, idle_timeout_ms, 4 more } | SemanticVad { type, create_response, eagerness, interrupt_response } | null

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

One of the following:
ServerVad { type, create_response, idle_timeout_ms, 4 more }

Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.

type: "server_vad"

Type of turn detection, server_vad to turn on simple Server VAD.

create_response?: boolean

Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

idle_timeout_ms?: number | null

Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.

The timeout value will be applied after the last model response's audio has finished playing, i.e. it's set to the response.done time plus audio playback duration.

An input_audio_buffer.timeout_triggered event (plus events associated with the Response) will be emitted when the timeout is reached. Idle timeout is currently only supported for server_vad mode.

minimum: 5000
maximum: 30000
interrupt_response?: boolean

Whether or not to automatically interrupt (cancel) any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

prefix_padding_ms?: number

Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: number

Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: number

Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

SemanticVad { type, create_response, eagerness, interrupt_response }

Server-side semantic turn detection which uses a model to determine when the user has finished speaking.

type: "semantic_vad"

Type of turn detection, semantic_vad to turn on Semantic VAD.

create_response?: boolean

Whether or not to automatically generate a response when a VAD stop event occurs.

eagerness?: "low" | "medium" | "high" | "auto"

Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.

One of the following:
"low"
"medium"
"high"
"auto"
interrupt_response?: boolean

Whether or not to automatically interrupt any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs.

voice?: (string & {}) | "alloy" | "ash" | "ballad" | 7 more

The voice the model uses to respond. Voice cannot be changed during the session once the model has responded with audio at least once. Current voice options are alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, and cedar.

One of the following:
(string & {})
"alloy" | "ash" | "ballad" | 7 more
"alloy"
"ash"
"ballad"
"coral"
"echo"
"sage"
"shimmer"
"verse"
"marin"
"cedar"
RealtimeSessionCreateRequest { type, audio, include, 9 more }

Realtime session object configuration.

type: "realtime"

The type of session to create. Always realtime for the Realtime API.

audio?: RealtimeAudioConfig { input, output }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

instructions?: string

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: number | "inf"

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

One of the following:
number
"inf"
"inf"
model?: (string & {}) | "gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more

The Realtime model used for this session.

One of the following:
(string & {})
"gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more
"gpt-realtime"
"gpt-realtime-1.5"
"gpt-realtime-2025-08-28"
"gpt-4o-realtime-preview"
"gpt-4o-realtime-preview-2024-10-01"
"gpt-4o-realtime-preview-2024-12-17"
"gpt-4o-realtime-preview-2025-06-03"
"gpt-4o-mini-realtime-preview"
"gpt-4o-mini-realtime-preview-2024-12-17"
"gpt-realtime-mini"
"gpt-realtime-mini-2025-10-06"
"gpt-realtime-mini-2025-12-15"
"gpt-audio-1.5"
"gpt-audio-mini"
"gpt-audio-mini-2025-10-06"
"gpt-audio-mini-2025-12-15"
output_modalities?: Array<"text" | "audio">

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

One of the following:
"text"
"audio"
prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

tool_choice?: RealtimeToolChoiceConfig

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

tools?: RealtimeToolsConfig

Tools available to the model.

tracing?: RealtimeTracingConfig | null

Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

truncation?: RealtimeTruncation

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.

RealtimeToolChoiceConfig = ToolChoiceOptions | ToolChoiceFunction { name, type } | ToolChoiceMcp { server_label, type, name }

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

One of the following:
ToolChoiceOptions = "none" | "auto" | "required"

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

One of the following:
"none"
"auto"
"required"
ToolChoiceFunction { name, type }

Use this option to force the model to call a specific function.

name: string

The name of the function to call.

type: "function"

For function calling, the type is always function.

ToolChoiceMcp { server_label, type, name }

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: string

The label of the MCP server to use.

type: "mcp"

For MCP tools, the type is always mcp.

name?: string | null

The name of the tool to call on the server.
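The three shapes in this union, side by side, as a hedged sketch (the function and server names are placeholders):

```ts
// Sketch only: the three ways to populate tool_choice.
const stringMode = { tool_choice: "auto" as const };
const forceFunction = {
  tool_choice: { type: "function" as const, name: "get_weather" }, // placeholder name
};
const forceMcpTool = {
  tool_choice: { type: "mcp" as const, server_label: "docs", name: "search" }, // placeholders
};
```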

RealtimeToolsConfig = Array<RealtimeToolsConfigUnion>

Tools available to the model.

One of the following:
RealtimeFunctionTool { description, name, parameters, type }
description?: string

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

name?: string

The name of the function.

parameters?: unknown

Parameters of the function in JSON Schema.

type?: "function"

The type of the tool, i.e. function.

Mcp { server_label, type, allowed_tools, 7 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

One of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
One of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
defer_loading?: boolean

Whether this MCP tool is deferred and discovered via tool search.

headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

One of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.
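A hedged sketch combining both union members into one tools array: a local function plus a remote MCP server restricted to read-only tools. The names, URL, and schema are placeholders.

```ts
// Sketch only: one function tool and one MCP server in a session update.
declare const ws: WebSocket;

const tools = [
  {
    type: "function",
    name: "get_weather", // placeholder
    description: "Look up the current weather for a city.",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
  {
    type: "mcp",
    server_label: "docs",                  // placeholder label
    server_url: "https://example.com/mcp", // placeholder URL
    allowed_tools: { read_only: true },    // McpToolFilter: read-only tools only
    require_approval: "never",
  },
];

ws.send(JSON.stringify({ type: "session.update", session: { type: "realtime", tools } }));
```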

RealtimeToolsConfigUnion = RealtimeFunctionTool { description, name, parameters, type } | Mcp { server_label, type, allowed_tools, 7 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

One of the following:
RealtimeFunctionTool { description, name, parameters, type }
description?: string

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

name?: string

The name of the function.

parameters?: unknown

Parameters of the function in JSON Schema.

type?: "function"

The type of the tool, i.e. function.

Mcp { server_label, type, allowed_tools, 7 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

One of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
One of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
defer_loading?: boolean

Whether this MCP tool is deferred and discovered via tool search.

headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

One of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.

RealtimeTracingConfig = "auto" | TracingConfiguration { group_id, metadata, workflow_name } | null

Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

One of the following:
"auto"
"auto"
TracingConfiguration { group_id, metadata, workflow_name }

Granular configuration for tracing.

group_id?: string

The group id to attach to this trace to enable filtering and grouping in the Traces Dashboard.

metadata?: unknown

The arbitrary metadata to attach to this trace to enable filtering in the Traces Dashboard.

workflow_name?: string

The name of the workflow to attach to this trace. This is used to name the trace in the Traces Dashboard.
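
As a sketch, a granular tracing configuration might look like the following; the workflow name, group id, and metadata are illustrative values.

// Sketch: granular tracing configuration for a session.
const tracing = {
  workflow_name: "support-voice-agent", // display name in the Traces Dashboard
  group_id: "customer-1234",            // enables filtering and grouping of sessions
  metadata: { channel: "phone" },       // arbitrary filterable metadata
};
// Alternatively, pass "auto" for default values, or null to disable tracing.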

RealtimeTranscriptionSessionAudio { input }

Configuration for input and output audio.

input?: RealtimeTranscriptionSessionAudioInput { format, noise_reduction, transcription, turn_detection }
RealtimeTranscriptionSessionAudioInput { format, noise_reduction, transcription, turn_detection }

format?: AudioPCM { rate, type }

The PCM audio format. Only a 24kHz sample rate is supported.

noise_reduction?: NoiseReduction { type }

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

type?: "near_field" | "far_field"

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

transcription?: AudioTranscription { language, model, prompt }

Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

turn_detection?: RealtimeTranscriptionSessionAudioInputTurnDetection

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

RealtimeTranscriptionSessionAudioInputTurnDetection = ServerVad { type, create_response, idle_timeout_ms, 4 more } | SemanticVad { type, create_response, eagerness, interrupt_response } | null

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

One of the following:
ServerVad { type, create_response, idle_timeout_ms, 4 more }

Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.

type: "server_vad"

Type of turn detection, server_vad to turn on simple Server VAD.

create_response?: boolean

Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

idle_timeout_ms?: number | null

Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.

The timeout value will be applied after the last model response's audio has finished playing, i.e. it's set to the response.done time plus audio playback duration.

An input_audio_buffer.timeout_triggered event (plus events associated with the Response) will be emitted when the timeout is reached. Idle timeout is currently only supported for server_vad mode.

minimum5000
maximum30000
interrupt_response?: boolean

Whether or not to automatically interrupt (cancel) any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

prefix_padding_ms?: number

Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: number

Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: number

Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

SemanticVad { type, create_response, eagerness, interrupt_response }

Server-side semantic turn detection which uses a model to determine when the user has finished speaking.

type: "semantic_vad"

Type of turn detection, semantic_vad to turn on Semantic VAD.

create_response?: boolean

Whether or not to automatically generate a response when a VAD stop event occurs.

eagerness?: "low" | "medium" | "high" | "auto"

Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.

One of the following:
"low"
"medium"
"high"
"auto"
interrupt_response?: boolean

Whether or not to automatically interrupt any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs.
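
For comparison, the two turn detection variants might be configured as in this sketch; all values are illustrative.

// Sketch: Server VAD tuned for a noisy room, with an idle-timeout nudge.
const serverVad = {
  type: "server_vad",
  threshold: 0.6,         // require slightly louder audio to trigger
  prefix_padding_ms: 300,
  silence_duration_ms: 500,
  idle_timeout_ms: 10000, // prompt the user after 10s of silence (5000-30000)
};

// Sketch: Semantic VAD that waits longer before assuming the turn ended.
const semanticVad = {
  type: "semantic_vad",
  eagerness: "low",
  interrupt_response: true,
};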

RealtimeTranscriptionSessionCreateRequest { type, audio, include }

Realtime transcription session object configuration.

type: "transcription"

The type of session to create. Always transcription for transcription sessions.

audio?: RealtimeTranscriptionSessionAudio { input }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
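
Putting the pieces together, a minimal transcription session configuration might look like this sketch; the model, language, and VAD choices are illustrative.

// Sketch: a transcription-only session configuration.
const transcriptionSession = {
  type: "transcription",
  audio: {
    input: {
      transcription: { model: "gpt-4o-mini-transcribe", language: "en" },
      turn_detection: { type: "server_vad" },
      noise_reduction: { type: "near_field" },
    },
  },
  include: ["item.input_audio_transcription.logprobs"],
};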

RealtimeTruncation = "auto" | "disabled" | RealtimeTruncationRetentionRatio { retention_ratio, type, token_limits }

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.

One of the following:
"auto" | "disabled"
"auto"
"disabled"
RealtimeTruncationRetentionRatio { retention_ratio, type, token_limits }

Retain a fraction of the conversation tokens when the conversation exceeds the input token limit. This allows you to amortize truncations across multiple turns, which can help improve cached token usage.

retention_ratio: number

Fraction of post-instruction conversation tokens to retain (0.0 - 1.0) when the conversation exceeds the input token limit. Setting this to 0.8 means that messages will be dropped until 80% of the maximum allowed tokens are used. This helps reduce the frequency of truncations and improve cache rates.

minimum0
maximum1
type: "retention_ratio"

Use retention ratio truncation.

token_limits?: TokenLimits { post_instructions }

Optional custom token limits for this truncation strategy. If not provided, the model's default token limits will be used.

post_instructions?: number

Maximum tokens allowed in the conversation after instructions (which include tool definitions). For example, setting this to 5,000 would mean that truncation would occur when the conversation exceeds 5,000 tokens after instructions. This cannot be higher than the model's context window size minus the maximum output tokens.

minimum0
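
As a sketch, a retention-ratio strategy with a custom post-instructions budget might be configured like this; the numbers are illustrative.

// Sketch: when truncating, drop oldest messages down to 80% of the limit,
// and cap the post-instructions conversation at 16,000 tokens.
const truncation = {
  type: "retention_ratio",
  retention_ratio: 0.8,
  token_limits: { post_instructions: 16000 },
};
// Alternatives: "auto" (default behavior) or "disabled" (error instead of truncating).
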
ResponseAudioDeltaEvent { content_index, delta, event_id, 4 more }

Returned when the model-generated audio is updated.

content_index: number

The index of the content part in the item's content array.

delta: string

Base64-encoded audio data delta.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.output_audio.delta"

The event type, must be response.output_audio.delta.
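
A typical client buffers these deltas per item and assembles the audio when the matching done event arrives; the same pattern applies to transcript and text deltas. A minimal sketch, assuming a Node environment and a playback helper you provide:

// Sketch: collect base64 audio deltas per item, concatenate on done.
declare function playPcm(pcm: Buffer): void; // hypothetical 24kHz mono PCM playback

const chunks: Record<string, Buffer[]> = {};

function onServerEvent(event: any): void {
  if (event.type === "response.output_audio.delta") {
    (chunks[event.item_id] ??= []).push(Buffer.from(event.delta, "base64"));
  } else if (event.type === "response.output_audio.done") {
    const pcm = Buffer.concat(chunks[event.item_id] ?? []);
    delete chunks[event.item_id];
    playPcm(pcm);
  }
}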

ResponseAudioDoneEvent { content_index, event_id, item_id, 3 more }

Returned when the model-generated audio is done. Also emitted when a Response is interrupted, incomplete, or cancelled.

content_index: number

The index of the content part in the item's content array.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.output_audio.done"

The event type, must be response.output_audio.done.

ResponseAudioTranscriptDeltaEvent { content_index, delta, event_id, 4 more }

Returned when the model-generated transcription of audio output is updated.

content_index: number

The index of the content part in the item's content array.

delta: string

The transcript delta.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.output_audio_transcript.delta"

The event type, must be response.output_audio_transcript.delta.

ResponseAudioTranscriptDoneEvent { content_index, event_id, item_id, 4 more }

Returned when the model-generated transcription of audio output is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.

content_index: number

The index of the content part in the item's content array.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

transcript: string

The final transcript of the audio.

type: "response.output_audio_transcript.done"

The event type, must be response.output_audio_transcript.done.

ResponseCancelEvent { type, event_id, response_id }

Send this event to cancel an in-progress response. The server will respond with a response.done event with a status of response.status=cancelled. If there is no response to cancel, the server will respond with an error; it is safe to call response.cancel even when no response is in progress, since the error leaves the session unaffected.

type: "response.cancel"

The event type, must be response.cancel.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength512
response_id?: string

A specific response ID to cancel - if not provided, will cancel an in-progress response in the default conversation.
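
For example, assuming ws is an open WebSocket connected to the Realtime API (the response ID below is a placeholder):

// Sketch: cancel the in-progress response in the default conversation.
declare const ws: WebSocket;

ws.send(JSON.stringify({ type: "response.cancel" }));

// Or cancel a specific response by ID.
ws.send(JSON.stringify({ type: "response.cancel", response_id: "resp_123" }));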

ResponseContentPartAddedEvent { content_index, event_id, item_id, 4 more }

Returned when a new content part is added to an assistant message item during response generation.

content_index: number

The index of the content part in the item's content array.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item to which the content part was added.

output_index: number

The index of the output item in the response.

part: Part { audio, text, transcript, type }

The content part that was added.

audio?: string

Base64-encoded audio data (if type is "audio").

text?: string

The text content (if type is "text").

transcript?: string

The transcript of the audio (if type is "audio").

type?: "text" | "audio"

The content type ("text", "audio").

One of the following:
"text"
"audio"
response_id: string

The ID of the response.

type: "response.content_part.added"

The event type, must be response.content_part.added.

ResponseContentPartDoneEvent { content_index, event_id, item_id, 4 more }

Returned when a content part is done streaming in an assistant message item. Also emitted when a Response is interrupted, incomplete, or cancelled.

content_index: number

The index of the content part in the item's content array.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

part: Part { audio, text, transcript, type }

The content part that is done.

audio?: string

Base64-encoded audio data (if type is "audio").

text?: string

The text content (if type is "text").

transcript?: string

The transcript of the audio (if type is "audio").

type?: "text" | "audio"

The content type ("text", "audio").

One of the following:
"text"
"audio"
response_id: string

The ID of the response.

type: "response.content_part.done"

The event type, must be response.content_part.done.

ResponseCreateEvent { type, event_id, response }

This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.

A Response will include at least one Item, and may have two, in which case the second will be a function call. These Items will be appended to the conversation history by default.

The server will respond with a response.created event, events for Items and content created, and finally a response.done event to indicate the Response is complete.

The response.create event includes inference configuration like instructions and tools. If these are set, they will override the Session's configuration for this Response only.

Responses can be created out-of-band of the default Conversation, meaning that they can have arbitrary input, and it's possible to disable writing the output to the Conversation. Only one Response can write to the default Conversation at a time, but otherwise multiple Responses can be created in parallel. The metadata field is a good way to disambiguate multiple simultaneous Responses.

Clients can set conversation to none to create a Response that does not write to the default Conversation. Arbitrary input can be provided with the input field, which is an array accepting raw Items and references to existing Items.

type: "response.create"

The event type, must be response.create.

event_id?: string

Optional client-generated ID used to identify this event.

maxLength512
response?: RealtimeResponseCreateParams { audio, conversation, input, 7 more }

Create a new Realtime response with these parameters
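
As a sketch, an out-of-band response that classifies an existing item without writing to the default conversation might look like this; the item ID, metadata, and instructions are placeholders, and ws is assumed to be an open Realtime WebSocket.

// Sketch: out-of-band response using an item reference as input.
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "response.create",
  response: {
    conversation: "none",                     // do not write to the default conversation
    metadata: { purpose: "sentiment_check" }, // disambiguate in response.done handlers
    output_modalities: ["text"],
    instructions: "Classify the user's sentiment as positive, neutral, or negative.",
    input: [{ type: "item_reference", id: "item_abc" }],
  },
}));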

ResponseCreatedEvent { event_id, response, type }

Returned when a new Response is created. The first event of response creation, where the response is in an initial state of in_progress.

event_id: string

The unique ID of the server event.

response: RealtimeResponse { id, audio, conversation_id, 8 more }

The response resource.

type: "response.created"

The event type, must be response.created.

ResponseDoneEvent { event_id, response, type }

Returned when a Response is done streaming. Always emitted, no matter the final state. The Response object included in the response.done event will include all output Items in the Response but will omit the raw audio data.

Clients should check the status field of the Response to determine if it was successful (completed) or if there was another outcome: cancelled, failed, or incomplete.

A response will contain all output items that were generated during the response, excluding any audio content.

event_id: string

The unique ID of the server event.

response: RealtimeResponse { id, audio, conversation_id, 8 more }

The response resource.

type: "response.done"

The event type, must be response.done.
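
A client handler will usually branch on the final status, as in this sketch:

// Sketch: inspect the outcome of a finished response.
function onResponseDone(event: any): void {
  const { id, status } = event.response;
  if (status === "completed") {
    // Success: output items are in event.response.output (audio omitted).
  } else {
    // "cancelled" | "failed" | "incomplete"
    console.warn(`response ${id} ended with status ${status}`);
  }
}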

ResponseFunctionCallArgumentsDeltaEvent { call_id, delta, event_id, 4 more }

Returned when the model-generated function call arguments are updated.

call_id: string

The ID of the function call.

delta: string

The arguments delta as a JSON string.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the function call item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.function_call_arguments.delta"

The event type, must be response.function_call_arguments.delta.

ResponseFunctionCallArgumentsDoneEvent { arguments, call_id, event_id, 5 more }

Returned when the model-generated function call arguments are done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.

arguments: string

The final arguments as a JSON string.

call_id: string

The ID of the function call.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the function call item.

name: string

The name of the function that was called.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.function_call_arguments.done"

The event type, must be response.function_call_arguments.done.
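
On receiving this event, a client typically executes the function and returns the result as a function_call_output item, then requests another response. A sketch, where callMyTool is a hypothetical dispatcher and ws is an open Realtime WebSocket:

// Sketch: execute the tool and hand the result back to the model.
declare const ws: WebSocket;
declare function callMyTool(name: string, args: unknown): unknown; // hypothetical

function onFunctionCallDone(event: any): void {
  const args = JSON.parse(event.arguments); // final JSON-encoded arguments
  const result = callMyTool(event.name, args);
  ws.send(JSON.stringify({
    type: "conversation.item.create",
    item: {
      type: "function_call_output",
      call_id: event.call_id,
      output: JSON.stringify(result),
    },
  }));
  ws.send(JSON.stringify({ type: "response.create" })); // let the model continue
}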

ResponseMcpCallArgumentsDelta { delta, event_id, item_id, 4 more }

Returned when MCP tool call arguments are updated during response generation.

delta: string

The JSON-encoded arguments delta.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP tool call item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.mcp_call_arguments.delta"

The event type, must be response.mcp_call_arguments.delta.

obfuscation?: string | null

If present, indicates the delta text was obfuscated.

ResponseMcpCallArgumentsDone { arguments, event_id, item_id, 3 more }

Returned when MCP tool call arguments are finalized during response generation.

arguments: string

The final JSON-encoded arguments string.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP tool call item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.mcp_call_arguments.done"

The event type, must be response.mcp_call_arguments.done.

ResponseMcpCallCompleted { event_id, item_id, output_index, type }

Returned when an MCP tool call has completed successfully.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP tool call item.

output_index: number

The index of the output item in the response.

type: "response.mcp_call.completed"

The event type, must be response.mcp_call.completed.

ResponseMcpCallFailed { event_id, item_id, output_index, type }

Returned when an MCP tool call has failed.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP tool call item.

output_index: number

The index of the output item in the response.

type: "response.mcp_call.failed"

The event type, must be response.mcp_call.failed.

ResponseMcpCallInProgress { event_id, item_id, output_index, type }

Returned when an MCP tool call has started and is in progress.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the MCP tool call item.

output_index: number

The index of the output item in the response.

type: "response.mcp_call.in_progress"

The event type, must be response.mcp_call.in_progress.
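
Together, the in_progress, completed, and failed events let a client surface MCP tool call progress, as in this sketch:

// Sketch: track MCP tool call lifecycle events.
function onMcpLifecycleEvent(event: any): void {
  switch (event.type) {
    case "response.mcp_call.in_progress":
      console.log(`MCP call ${event.item_id} started`);
      break;
    case "response.mcp_call.completed":
      console.log(`MCP call ${event.item_id} succeeded`);
      break;
    case "response.mcp_call.failed":
      console.warn(`MCP call ${event.item_id} failed`);
      break;
  }
}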

ResponseOutputItemAddedEvent { event_id, item, output_index, 2 more }

Returned when a new Item is created during Response generation.

event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

output_index: number

The index of the output item in the Response.

response_id: string

The ID of the Response to which the item belongs.

type: "response.output_item.added"

The event type, must be response.output_item.added.

ResponseOutputItemDoneEvent { event_id, item, output_index, 2 more }

Returned when an Item is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.

event_id: string

The unique ID of the server event.

item: ConversationItem

A single item within a Realtime conversation.

output_index: number

The index of the output item in the Response.

response_id: string

The ID of the Response to which the item belongs.

type: "response.output_item.done"

The event type, must be response.output_item.done.

ResponseTextDeltaEvent { content_index, delta, event_id, 4 more }

Returned when the text value of an "output_text" content part is updated.

content_index: number

The index of the content part in the item's content array.

delta: string

The text delta.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

type: "response.output_text.delta"

The event type, must be response.output_text.delta.

ResponseTextDoneEvent { content_index, event_id, item_id, 4 more }

Returned when the text value of an "output_text" content part is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.

content_index: number

The index of the content part in the item's content array.

event_id: string

The unique ID of the server event.

item_id: string

The ID of the item.

output_index: number

The index of the output item in the response.

response_id: string

The ID of the response.

text: string

The final text content.

type: "response.output_text.done"

The event type, must be response.output_text.done.

SessionCreatedEvent { event_id, session, type }

Returned when a Session is created. Emitted automatically when a new connection is established as the first server event. This event will contain the default Session configuration.

event_id: string

The unique ID of the server event.

session: RealtimeSessionCreateRequest { type, audio, include, 9 more } | RealtimeTranscriptionSessionCreateRequest { type, audio, include }

The session configuration.

One of the following:
RealtimeSessionCreateRequest { type, audio, include, 9 more }

Realtime session object configuration.

type: "realtime"

The type of session to create. Always realtime for the Realtime API.

audio?: RealtimeAudioConfig { input, output }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

instructions?: string

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: number | "inf"

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

One of the following:
number
"inf"
"inf"
model?: (string & {}) | "gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more

The Realtime model used for this session.

One of the following:
(string & {})
"gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more
"gpt-realtime"
"gpt-realtime-1.5"
"gpt-realtime-2025-08-28"
"gpt-4o-realtime-preview"
"gpt-4o-realtime-preview-2024-10-01"
"gpt-4o-realtime-preview-2024-12-17"
"gpt-4o-realtime-preview-2025-06-03"
"gpt-4o-mini-realtime-preview"
"gpt-4o-mini-realtime-preview-2024-12-17"
"gpt-realtime-mini"
"gpt-realtime-mini-2025-10-06"
"gpt-realtime-mini-2025-12-15"
"gpt-audio-1.5"
"gpt-audio-mini"
"gpt-audio-mini-2025-10-06"
"gpt-audio-mini-2025-12-15"
output_modalities?: Array<"text" | "audio">

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

One of the following:
"text"
"audio"
prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

tool_choice?: ToolChoiceOptions | ToolChoiceFunction { name, type } | ToolChoiceMcp { server_label, type, name }

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

tools?: RealtimeToolsConfig

Tools available to the model.

tracing?: RealtimeTracingConfig | null

The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

truncation?: RealtimeTruncation

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.

RealtimeTranscriptionSessionCreateRequest { type, audio, include }

Realtime transcription session object configuration.

type: "transcription"

The type of session to create. Always transcription for transcription sessions.

audio?: RealtimeTranscriptionSessionAudio { input }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

type: "session.created"

The event type, must be session.created.
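
A minimal connection sketch, assuming Node with the ws package and an API key in the environment; the model name is one of the options listed above.

// Sketch: connect and wait for session.created, the first server event.
import WebSocket from "ws";

const ws = new WebSocket("wss://api.openai.com/v1/realtime?model=gpt-realtime", {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});

ws.on("message", (data) => {
  const event = JSON.parse(data.toString());
  if (event.type === "session.created") {
    console.log("default session configuration:", event.session);
  }
});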

SessionUpdateEvent { session, type, event_id }

Send this event to update the session's configuration. The client may send this event at any time to update any field except for voice and model. voice can be updated only if there have been no other audio outputs yet.

When the server receives a session.update, it will respond with a session.updated event showing the full, effective configuration. Only the fields that are present in the session.update are updated. To clear a field like instructions, pass an empty string. To clear a field like tools, pass an empty array. To clear a field like turn_detection, pass null.

session: RealtimeSessionCreateRequest { type, audio, include, 9 more } | RealtimeTranscriptionSessionCreateRequest { type, audio, include }

Update the Realtime session. Choose either a realtime session or a transcription session.

One of the following:
RealtimeSessionCreateRequest { type, audio, include, 9 more }

Realtime session object configuration.

type: "realtime"

The type of session to create. Always realtime for the Realtime API.

audio?: RealtimeAudioConfig { input, output }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

instructions?: string

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: number | "inf"

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

One of the following:
number
"inf"
"inf"
model?: (string & {}) | "gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more

The Realtime model used for this session.

One of the following:
(string & {})
"gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more
"gpt-realtime"
"gpt-realtime-1.5"
"gpt-realtime-2025-08-28"
"gpt-4o-realtime-preview"
"gpt-4o-realtime-preview-2024-10-01"
"gpt-4o-realtime-preview-2024-12-17"
"gpt-4o-realtime-preview-2025-06-03"
"gpt-4o-mini-realtime-preview"
"gpt-4o-mini-realtime-preview-2024-12-17"
"gpt-realtime-mini"
"gpt-realtime-mini-2025-10-06"
"gpt-realtime-mini-2025-12-15"
"gpt-audio-1.5"
"gpt-audio-mini"
"gpt-audio-mini-2025-10-06"
"gpt-audio-mini-2025-12-15"
output_modalities?: Array<"text" | "audio">

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

One of the following:
"text"
"audio"
prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

tool_choice?: ToolChoiceOptions | ToolChoiceFunction { name, type } | ToolChoiceMcp { server_label, type, name }

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

tools?: RealtimeToolsConfig

Tools available to the model.

tracing?: RealtimeTracingConfig | null

The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

truncation?: RealtimeTruncation

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.

RealtimeTranscriptionSessionCreateRequest { type, audio, include }

Realtime transcription session object configuration.

type: "transcription"

The type of session to create. Always transcription for transcription sessions.

audio?: RealtimeTranscriptionSessionAudio { input }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

type: "session.update"

The event type, must be session.update.

event_id?: string

Optional client-generated ID used to identify this event. This is an arbitrary string that a client may assign. It will be passed back if there is an error with the event, but the corresponding session.updated event will not include it.

maxLength512
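
A sketch of a partial update that also demonstrates the clearing semantics described above, assuming ws is an open Realtime WebSocket:

// Sketch: update some fields; clear instructions, tools, and turn detection.
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "session.update",
  event_id: "evt_client_001", // echoed back only if the update fails
  session: {
    type: "realtime",
    instructions: "",                           // empty string clears instructions
    tools: [],                                  // empty array clears tools
    audio: { input: { turn_detection: null } }, // null disables turn detection
  },
}));
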
SessionUpdatedEvent { event_id, session, type }

Returned when a session is updated with a session.update event, unless there is an error.

event_id: string

The unique ID of the server event.

session: RealtimeSessionCreateRequest { type, audio, include, 9 more } | RealtimeTranscriptionSessionCreateRequest { type, audio, include }

The session configuration.

One of the following:
RealtimeSessionCreateRequest { type, audio, include, 9 more }

Realtime session object configuration.

type: "realtime"

The type of session to create. Always realtime for the Realtime API.

audio?: RealtimeAudioConfig { input, output }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

instructions?: string

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: number | "inf"

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

One of the following:
number
"inf"
"inf"
model?: (string & {}) | "gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more

The Realtime model used for this session.

One of the following:
(string & {})
"gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more
"gpt-realtime"
"gpt-realtime-1.5"
"gpt-realtime-2025-08-28"
"gpt-4o-realtime-preview"
"gpt-4o-realtime-preview-2024-10-01"
"gpt-4o-realtime-preview-2024-12-17"
"gpt-4o-realtime-preview-2025-06-03"
"gpt-4o-mini-realtime-preview"
"gpt-4o-mini-realtime-preview-2024-12-17"
"gpt-realtime-mini"
"gpt-realtime-mini-2025-10-06"
"gpt-realtime-mini-2025-12-15"
"gpt-audio-1.5"
"gpt-audio-mini"
"gpt-audio-mini-2025-10-06"
"gpt-audio-mini-2025-12-15"
output_modalities?: Array<"text" | "audio">

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

One of the following:
"text"
"audio"
prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

tool_choice?: ToolChoiceOptions | ToolChoiceFunction { name, type } | ToolChoiceMcp { server_label, type, name }

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

tools?: RealtimeToolsConfig

Tools available to the model.

tracing?: RealtimeTracingConfig | null

The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

truncation?: RealtimeTruncation

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.

RealtimeTranscriptionSessionCreateRequest { type, audio, include }

Realtime transcription session object configuration.

type: "transcription"

The type of session to create. Always transcription for transcription sessions.

audio?: RealtimeTranscriptionSessionAudio { input }

Configuration for input and output audio.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

type: "session.updated"

The event type, must be session.updated.

TranscriptionSessionUpdate { session, type, event_id }

Send this event to update a transcription session.

session: Session { include, input_audio_format, input_audio_noise_reduction, 2 more }

Realtime transcription session object configuration.

include?: Array<"item.input_audio_transcription.logprobs">

The set of items to include in the transcription. Currently available items are: item.input_audio_transcription.logprobs

input_audio_format?: "pcm16" | "g711_ulaw" | "g711_alaw"

The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate, single channel (mono), and little-endian byte order.

One of the following:
"pcm16"
"g711_ulaw"
"g711_alaw"
input_audio_noise_reduction?: InputAudioNoiseReduction { type }

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

type?: "near_field" | "far_field"

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

input_audio_transcription?: AudioTranscription { language, model, prompt }

Configuration for input audio transcription. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

turn_detection?: TurnDetection { prefix_padding_ms, silence_duration_ms, threshold, type }

Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

prefix_padding_ms?: number

Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: number

Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: number

Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

type?: "server_vad"

Type of turn detection. Only server_vad is currently supported for transcription sessions.

type: "transcription_session.update"

The event type, must be transcription_session.update.

event_id?: string

Optional client-generated ID used to identify this event.
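
A sketch of such an update, assuming ws is an open Realtime WebSocket; the model, language, and VAD values are illustrative.

// Sketch: reconfigure an ongoing transcription session.
declare const ws: WebSocket;

ws.send(JSON.stringify({
  type: "transcription_session.update",
  session: {
    input_audio_format: "pcm16",
    input_audio_transcription: { model: "gpt-4o-mini-transcribe", language: "en" },
    turn_detection: { type: "server_vad", silence_duration_ms: 400 },
    include: ["item.input_audio_transcription.logprobs"],
  },
}));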

TranscriptionSessionUpdatedEvent { event_id, session, type }

Returned when a transcription session is updated with a transcription_session.update event, unless there is an error.

event_id: string

The unique ID of the server event.

session: Session { client_secret, input_audio_format, input_audio_transcription, 2 more }

A new Realtime transcription session configuration.

When a session is created on the server via REST API, the session object also contains an ephemeral key. Default TTL for keys is 10 minutes. This property is not present when a session is updated via the WebSocket API.

client_secret: ClientSecret { expires_at, value }

Ephemeral key returned by the API. Only present when the session is created on the server via REST API.

expires_at: number

Timestamp for when the token expires. Currently, all tokens expire after one minute.

value: string

Ephemeral key usable in client environments to authenticate connections to the Realtime API. Use this in client-side environments rather than a standard API token, which should only be used server-side.

input_audio_format?: string

The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw.

input_audio_transcription?: AudioTranscription { language, model, prompt }

Configuration of the transcription model.

modalities?: Array<"text" | "audio">

The set of modalities the model can respond with. To disable audio, set this to ["text"].

One of the following:
"text"
"audio"
turn_detection?: TurnDetection { prefix_padding_ms, silence_duration_ms, threshold, type }

Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

prefix_padding_ms?: number

Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: number

Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: number

Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

type?: string

Type of turn detection, only server_vad is currently supported.

type: "transcription_session.updated"

The event type, must be transcription_session.updated.

Realtime Client Secrets

Create client secret
client.realtime.clientSecrets.create(ClientSecretCreateParams { expires_after, session } body, RequestOptions options?): ClientSecretCreateResponse { expires_at, session, value }
POST /realtime/client_secrets
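
A server-side sketch of minting an ephemeral key for a browser client; the expiry values are illustrative and assume the documented expires_after shape.

// Sketch: create a client secret with the Node SDK, then hand
// secret.value to the browser for its Realtime connection.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const secret = await client.realtime.clientSecrets.create({
  expires_after: { anchor: "created_at", seconds: 600 },
  session: { type: "realtime", model: "gpt-realtime" },
});
console.log(secret.value, secret.expires_at);
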
Models
RealtimeSessionClientSecret { expires_at, value }

Ephemeral key returned by the API.

expires_at: number

Timestamp for when the token expires. Currently, all tokens expire after one minute.

value: string

Ephemeral key usable in client environments to authenticate connections to the Realtime API. Use this in client-side environments rather than a standard API token, which should only be used server-side.

RealtimeSessionCreateResponse { client_secret, type, audio, 10 more }

A new Realtime session configuration, with an ephemeral key. Default TTL for keys is one minute.

client_secret: RealtimeSessionClientSecret { expires_at, value }

Ephemeral key returned by the API.

type: "realtime"

The type of session to create. Always realtime for the Realtime API.

audio?: Audio { input, output }

Configuration for input and output audio.

input?: Input { format, noise_reduction, transcription, turn_detection }

format?: AudioPCM { rate, type } | AudioPCMU { type } | AudioPCMA { type }

The format of the input audio.

noise_reduction?: NoiseReduction { type }

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

type?: "near_field" | "far_field"

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

transcription?: AudioTranscription { language, model, prompt }

Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

turn_detection?: ServerVad { type, create_response, idle_timeout_ms, 4 more } | SemanticVad { type, create_response, eagerness, interrupt_response } | null

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

One of the following:
ServerVad { type, create_response, idle_timeout_ms, 4 more }

Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.

type: "server_vad"

Type of turn detection, server_vad to turn on simple Server VAD.

create_response?: boolean

Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

idle_timeout_ms?: number | null

Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.

The timeout value will be applied after the last model response's audio has finished playing, i.e. it's set to the response.done time plus audio playback duration.

An input_audio_buffer.timeout_triggered event (plus events associated with the Response) will be emitted when the timeout is reached. Idle timeout is currently only supported for server_vad mode.

minimum5000
maximum30000
interrupt_response?: boolean

Whether or not to automatically interrupt (cancel) any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

prefix_padding_ms?: number

Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: number

Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: number

Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

SemanticVad { type, create_response, eagerness, interrupt_response }

Server-side semantic turn detection which uses a model to determine when the user has finished speaking.

type: "semantic_vad"

Type of turn detection, semantic_vad to turn on Semantic VAD.

create_response?: boolean

Whether or not to automatically generate a response when a VAD stop event occurs.

eagerness?: "low" | "medium" | "high" | "auto"

Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.

One of the following:
"low"
"medium"
"high"
"auto"
interrupt_response?: boolean

Whether or not to automatically interrupt any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs.

output?: Output { format, speed, voice }

format?: AudioPCM { rate, type } | AudioPCMU { type } | AudioPCMA { type }

The format of the output audio.

speed?: number

The speed of the model's spoken response as a multiple of the original speed. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.

This parameter is a post-processing adjustment to the audio after it is generated; it's also possible to prompt the model to speak faster or slower.

maximum1.5
minimum0.25
voice?: (string & {}) | "alloy" | "ash" | "ballad" | 7 more

The voice the model uses to respond. Voice cannot be changed during the session once the model has responded with audio at least once. Current voice options are alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, and cedar. We recommend marin and cedar for best quality.

One of the following:
(string & {})
"alloy" | "ash" | "ballad" | 7 more
"alloy"
"ash"
"ballad"
"coral"
"echo"
"sage"
"shimmer"
"verse"
"marin"
"cedar"
include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

instructions?: string

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: number | "inf"

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

One of the following:
number
"inf"
"inf"
model?: (string & {}) | "gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more

The Realtime model used for this session.

One of the following:
(string & {})
"gpt-realtime" | "gpt-realtime-1.5" | "gpt-realtime-2025-08-28" | 13 more
"gpt-realtime"
"gpt-realtime-1.5"
"gpt-realtime-2025-08-28"
"gpt-4o-realtime-preview"
"gpt-4o-realtime-preview-2024-10-01"
"gpt-4o-realtime-preview-2024-12-17"
"gpt-4o-realtime-preview-2025-06-03"
"gpt-4o-mini-realtime-preview"
"gpt-4o-mini-realtime-preview-2024-12-17"
"gpt-realtime-mini"
"gpt-realtime-mini-2025-10-06"
"gpt-realtime-mini-2025-12-15"
"gpt-audio-1.5"
"gpt-audio-mini"
"gpt-audio-mini-2025-10-06"
"gpt-audio-mini-2025-12-15"
output_modalities?: Array<"text" | "audio">

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

One of the following:
"text"
"audio"
prompt?: ResponsePrompt { id, variables, version } | null

Reference to a prompt template and its variables. Learn more.

tool_choice?: ToolChoiceOptions | ToolChoiceFunction { name, type } | ToolChoiceMcp { server_label, type, name }

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

One of the following:
ToolChoiceOptions = "none" | "auto" | "required"

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

One of the following:
"none"
"auto"
"required"
ToolChoiceFunction { name, type }

Use this option to force the model to call a specific function.

name: string

The name of the function to call.

type: "function"

For function calling, the type is always function.

ToolChoiceMcp { server_label, type, name }

Use this option to force the model to call a specific tool on a remote MCP server.

server_label: string

The label of the MCP server to use.

type: "mcp"

For MCP tools, the type is always mcp.

name?: string | null

The name of the tool to call on the server.
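
As a sketch of the forced-function mode above, assuming the same session.update shape as earlier (the function name lookup_order is hypothetical and would need to appear in tools):

ws.send(JSON.stringify({
  type: "session.update",
  session: {
    type: "realtime",
    // String modes are "auto" (default), "none", or "required"...
    // ...or force one specific tool:
    tool_choice: { type: "function", name: "lookup_order" },
  },
}));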

tools?: Array<RealtimeFunctionTool { description, name, parameters, type } | McpTool { server_label, type, allowed_tools, 7 more } >

Tools available to the model. A combined sketch follows the union below.

One of the following:
RealtimeFunctionTool { description, name, parameters, type }
description?: string

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

name?: string

The name of the function.

parameters?: unknown

Parameters of the function in JSON Schema.

type?: "function"

The type of the tool, i.e. function.

McpTool { server_label, type, allowed_tools, 7 more }

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: string

A label for this MCP server, used to identify it in tool calls.

type: "mcp"

The type of the MCP tool. Always mcp.

allowed_tools?: Array<string> | McpToolFilter { read_only, tool_names } | null

List of allowed tool names or a filter object.

One of the following:
Array<string>
McpToolFilter { read_only, tool_names }

A filter object to specify which tools are allowed.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

authorization?: string

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: "connector_dropbox" | "connector_gmail" | "connector_googlecalendar" | 5 more

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
One of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
defer_loading?: boolean

Whether this MCP tool is deferred and discovered via tool search.

headers?: Record<string, string> | null

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: McpToolApprovalFilter { always, never } | "always" | "never" | null

Specify which of the MCP server's tools require approval.

One of the following:
McpToolApprovalFilter { always, never }

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always?: Always { read_only, tool_names }

A filter object specifying the tools that always require approval.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

never?: Never { read_only, tool_names }

A filter object specifying the tools that never require approval.

read_only?: boolean

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: Array<string>

List of allowed tool names.

"always" | "never"
"always"
"never"
server_description?: string

Optional description of the MCP server, used to provide more context.

server_url?: string

The URL for the MCP server. One of server_url or connector_id must be provided.
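
A combined sketch of the tools array with one function tool and one MCP tool; the function name, server label, and server URL are illustrative only:

const session = {
  type: "realtime",
  tools: [
    {
      type: "function",
      name: "lookup_order", // hypothetical
      description:
        "Look up an order by ID. Tell the user you are checking their order.",
      parameters: {
        type: "object",
        properties: { order_id: { type: "string" } },
        required: ["order_id"],
      },
    },
    {
      type: "mcp",
      server_label: "acme-support",          // hypothetical label
      server_url: "https://mcp.example.com", // hypothetical URL
      allowed_tools: { read_only: true },    // expose only read-only tools
      require_approval: "never",
    },
  ],
};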

tracing?: "auto" | TracingConfiguration { group_id, metadata, workflow_name } | null

The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

One of the following:
"auto"
"auto"
TracingConfiguration { group_id, metadata, workflow_name }

Granular configuration for tracing.

group_id?: string

The group id to attach to this trace to enable filtering and grouping in the Traces Dashboard.

metadata?: unknown

The arbitrary metadata to attach to this trace to enable filtering in the Traces Dashboard.

workflow_name?: string

The name of the workflow to attach to this trace. This is used to name the trace in the Traces Dashboard.
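
For example, a granular tracing configuration might look like the following sketch (the workflow name, group id, and metadata values are illustrative):

const session = {
  type: "realtime",
  tracing: {
    workflow_name: "support-line", // names the trace in the Traces Dashboard
    group_id: "customer-42",       // enables filtering/grouping of traces
    metadata: { region: "eu-west" },
  },
  // Alternatively: tracing: "auto" for defaults, or tracing: null to disable.
};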

truncation?: RealtimeTruncation

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. For example, a model with a 32,768-token context and 4,096 max output tokens can include at most 28,672 tokens (32,768 minus 4,096) in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but will instead return an error if the conversation exceeds the model's input token limit.
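
As a sketch of the retain-a-fraction behavior described above, assuming RealtimeTruncation accepts a retention-ratio object (the field names type and retention_ratio here are assumptions based on that description):

const session = {
  type: "realtime",
  // After truncation, keep messages up to ~75% of the maximum context size,
  // reducing how often future truncations occur and improving cache hits.
  truncation: { type: "retention_ratio", retention_ratio: 0.75 },
  // Alternatives: "auto" (server-managed) or "disabled" (error on overflow).
};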

RealtimeTranscriptionSessionCreateResponse { id, object, type, 3 more }

A Realtime transcription session configuration object.

id: string

Unique identifier for the session that looks like sess_1234567890abcdef.

object: string

The object type. Always realtime.transcription_session.

type: "transcription"

The type of session. Always transcription for transcription sessions.

audio?: Audio { input }

Configuration for input audio for the session.

input?: Input { format, noise_reduction, transcription, turn_detection }

format?: RealtimeAudioFormats

The PCM audio format. Only a 24kHz sample rate is supported.

noise_reduction?: NoiseReduction { type }

Configuration for input audio noise reduction.

type?: "near_field" | "far_field"

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

transcription?: AudioTranscription { language, model, prompt }

Configuration of the transcription model.

turn_detection?: RealtimeTranscriptionSessionTurnDetection { prefix_padding_ms, silence_duration_ms, threshold, type }

Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

expires_at?: number

Expiration timestamp for the session, in seconds since epoch.

include?: Array<"item.input_audio_transcription.logprobs">

Additional fields to include in server outputs.

  • item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
RealtimeTranscriptionSessionTurnDetection { prefix_padding_ms, silence_duration_ms, threshold, type }

Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

prefix_padding_ms?: number

Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: number

Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: number

Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

type?: string

Type of turn detection; only server_vad is currently supported.
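
Putting the input-audio pieces together, a sketch of a transcription session configuration tuned for a noisy room (the shape of the PCM format object is an assumption; only a 24kHz rate is supported):

const transcriptionSession = {
  type: "transcription",
  audio: {
    input: {
      format: { type: "audio/pcm", rate: 24000 }, // assumed shape
      noise_reduction: { type: "far_field" },     // conference-room mic
      transcription: { model: "gpt-4o-mini-transcribe", language: "en" },
      turn_detection: {
        type: "server_vad",
        threshold: 0.6,           // require louder audio in noisy settings
        prefix_padding_ms: 300,
        silence_duration_ms: 700, // tolerate longer pauses before ending turn
      },
    },
  },
};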

RealtimeCalls

Accept call
client.realtime.calls.accept(callID: string, body: CallAcceptParams { type, audio, include, 9 more }, options?: RequestOptions): void
POST/realtime/calls/{call_id}/accept
Hang up call
client.realtime.calls.hangup(callID: string, options?: RequestOptions): void
POST/realtime/calls/{call_id}/hangup
Refer call
client.realtime.calls.refer(callID: string, body: CallReferParams { target_uri }, options?: RequestOptions): void
POST/realtime/calls/{call_id}/refer
Reject call
client.realtime.calls.reject(callID: string, body?: CallRejectParams { status_code }, options?: RequestOptions): void
POST/realtime/calls/{call_id}/reject
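
For example, handling an incoming call with the SDK might look like this sketch (the call ID and SIP status code are illustrative; a real ID would arrive via a webhook event):

import OpenAI from "openai";

const client = new OpenAI();
const callId = "rtc_1234567890abcdef"; // hypothetical, from a webhook event

// Accept the call with a realtime session configuration.
await client.realtime.calls.accept(callId, { type: "realtime" });

// Or decline it instead with a SIP status code (603 Decline):
// await client.realtime.calls.reject(callId, { status_code: 603 });

// Transfer an in-progress call elsewhere:
// await client.realtime.calls.refer(callId, { target_uri: "tel:+15550100" });

// End the call:
// await client.realtime.calls.hangup(callId);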

RealtimeSessions

RealtimeTranscriptionSessions