Create client secret

client.Realtime.ClientSecrets.New(ctx, body) (*ClientSecretNewResponse, error)
POST /realtime/client_secrets

Create a Realtime client secret with an associated session configuration.
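
As a quick orientation, here is a minimal sketch of calling this endpoint from Go over raw HTTP, without the SDK wrapper. The JSON key names (expires_after, anchor, seconds, session, audio, output, voice) are snake_case forms inferred from the fields documented below rather than copied from an official sample, so treat the body shape as an assumption to verify.

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Request body sketch: a 10-minute client secret for a realtime session.
	// Key names are assumed snake_case forms of the documented fields.
	body := []byte(`{
		"expires_after": {"anchor": "created_at", "seconds": 600},
		"session": {
			"type": "realtime",
			"model": "gpt-realtime",
			"audio": {"output": {"voice": "marin"}}
		}
	}`)

	req, err := http.NewRequest(http.MethodPost,
		"https://api.openai.com/v1/realtime/client_secrets", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // includes the ephemeral client secret and session config
}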

Parameters
body ClientSecretNewParams
ExpiresAfter param.Field[ClientSecretNewParamsExpiresAfter]optional

Configuration for the client secret expiration. Expiration refers to the time after which a client secret will no longer be valid for creating sessions. The session itself may continue after that time once started. A secret can be used to create multiple sessions until it expires.

Anchor stringoptional

The anchor point for the client secret expiration, meaning that seconds will be added to the created_at time of the client secret to produce an expiration timestamp. Only created_at is currently supported.

Seconds int64optional

The number of seconds from the anchor point to the expiration. Select a value between 10 and 7200 (2 hours). This defaults to 600 seconds (10 minutes) if not specified.

minimum: 10
maximum: 7200
Session param.Field[ClientSecretNewParamsSessionUnion]optional

Session configuration to use for the client secret. Choose either a realtime session or a transcription session.

type RealtimeSessionCreateRequest struct{…}

Realtime session object configuration.

Type Realtime

The type of session to create. Always realtime for the Realtime API.

Audio RealtimeAudioConfigoptional

Configuration for input and output audio.

Input RealtimeAudioConfigInputoptional

The format of the input audio.

Accepts one of the following:
type RealtimeAudioFormatsAudioPCM struct{…}

The PCM audio format. Only a 24kHz sample rate is supported.

Rate int64optional

The sample rate of the audio. Always 24000.

Type stringoptional

The audio format. Always audio/pcm.

type RealtimeAudioFormatsAudioPCMU struct{…}

The G.711 μ-law format.

Type stringoptional

The audio format. Always audio/pcmu.

type RealtimeAudioFormatsAudioPCMA struct{…}

The G.711 A-law format.

Type stringoptional

The audio format. Always audio/pcma.

NoiseReduction RealtimeAudioConfigInputNoiseReductionoptional

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

Type NoiseReductionTypeoptional

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

Accepts one of the following:
const NoiseReductionTypeNearField NoiseReductionType = "near_field"
const NoiseReductionTypeFarField NoiseReductionType = "far_field"
Transcription AudioTranscriptionoptional

Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

Language stringoptional

The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

Model AudioTranscriptionModeloptional

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

Accepts one of the following:
string
type AudioTranscriptionModel string

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

Accepts one of the following:
const AudioTranscriptionModelWhisper1 AudioTranscriptionModel = "whisper-1"
const AudioTranscriptionModelGPT4oMiniTranscribe AudioTranscriptionModel = "gpt-4o-mini-transcribe"
const AudioTranscriptionModelGPT4oMiniTranscribe2025_12_15 AudioTranscriptionModel = "gpt-4o-mini-transcribe-2025-12-15"
const AudioTranscriptionModelGPT4oTranscribe AudioTranscriptionModel = "gpt-4o-transcribe"
const AudioTranscriptionModelGPT4oTranscribeDiarize AudioTranscriptionModel = "gpt-4o-transcribe-diarize"
Prompt stringoptional

An optional text to guide the model's style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example "expect words related to technology".

TurnDetection RealtimeAudioInputTurnDetectionUnionoptional

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

Accepts one of the following:
RealtimeAudioInputTurnDetectionServerVad
Type ServerVad

Type of turn detection, server_vad to turn on simple Server VAD.

CreateResponse booloptional

Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

IdleTimeoutMs int64optional

Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.

The timeout value will be applied after the last model response's audio has finished playing, i.e. it's set to the response.done time plus audio playback duration.

An input_audio_buffer.timeout_triggered event (plus events associated with the Response) will be emitted when the timeout is reached. Idle timeout is currently only supported for server_vad mode.

minimum: 5000
maximum: 30000
InterruptResponse booloptional

Whether or not to automatically interrupt (cancel) any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

PrefixPaddingMs int64optional

Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

SilenceDurationMs int64optional

Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

Threshold float64optional

Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

RealtimeAudioInputTurnDetectionSemanticVad
Type SemanticVad

Type of turn detection, semantic_vad to turn on Semantic VAD.

CreateResponse booloptional

Whether or not to automatically generate a response when a VAD stop event occurs.

Eagerness stringoptional

Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.

Accepts one of the following:
const RealtimeAudioInputTurnDetectionSemanticVadEagernessLow RealtimeAudioInputTurnDetectionSemanticVadEagerness = "low"
const RealtimeAudioInputTurnDetectionSemanticVadEagernessMedium RealtimeAudioInputTurnDetectionSemanticVadEagerness = "medium"
const RealtimeAudioInputTurnDetectionSemanticVadEagernessHigh RealtimeAudioInputTurnDetectionSemanticVadEagerness = "high"
const RealtimeAudioInputTurnDetectionSemanticVadEagernessAuto RealtimeAudioInputTurnDetectionSemanticVadEagerness = "auto"
InterruptResponse booloptional

Whether or not to automatically interrupt any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs.
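
To make the VAD options concrete, here is a hypothetical fragment a client might place at session.audio.input.turn_detection. The snake_case key names are assumed from the Go field names above; the values echo the documented defaults except for the raised threshold.

// Hypothetical wire fragment for session.audio.input.turn_detection.
// Key names are assumed snake_case forms of the fields documented above.
var serverVAD = map[string]any{
	"type":                "server_vad",
	"threshold":           0.6, // above the 0.5 default, for a noisy room
	"prefix_padding_ms":   300, // documented default
	"silence_duration_ms": 500, // documented default
	"create_response":     true,
	"interrupt_response":  true,
}

// For Semantic VAD instead: map[string]any{"type": "semantic_vad", "eagerness": "low"}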

Output RealtimeAudioConfigOutputoptional

The format of the output audio.

Accepts one of the following:
type RealtimeAudioFormatsAudioPCM struct{…}

The PCM audio format. Only a 24kHz sample rate is supported.

Rate int64optional

The sample rate of the audio. Always 24000.

Type stringoptional

The audio format. Always audio/pcm.

type RealtimeAudioFormatsAudioPCMU struct{…}

The G.711 μ-law format.

Type stringoptional

The audio format. Always audio/pcmu.

type RealtimeAudioFormatsAudioPCMA struct{…}

The G.711 A-law format.

Type stringoptional

The audio format. Always audio/pcma.

Speed float64optional

The speed of the model's spoken response as a multiple of the original speed. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.

This parameter is a post-processing adjustment to the audio after it is generated; it's also possible to prompt the model to speak faster or slower.

maximum: 1.5
minimum: 0.25
Voice RealtimeAudioConfigOutputVoiceoptional

The voice the model uses to respond. Supported built-in voices are alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, and cedar. Voice cannot be changed during the session once the model has responded with audio at least once. We recommend marin and cedar for best quality.

Accepts one of the following:
string
RealtimeAudioConfigOutputVoice
Accepts one of the following:
const RealtimeAudioConfigOutputVoiceAlloy RealtimeAudioConfigOutputVoice = "alloy"
const RealtimeAudioConfigOutputVoiceAsh RealtimeAudioConfigOutputVoice = "ash"
const RealtimeAudioConfigOutputVoiceBallad RealtimeAudioConfigOutputVoice = "ballad"
const RealtimeAudioConfigOutputVoiceCoral RealtimeAudioConfigOutputVoice = "coral"
const RealtimeAudioConfigOutputVoiceEcho RealtimeAudioConfigOutputVoice = "echo"
const RealtimeAudioConfigOutputVoiceSage RealtimeAudioConfigOutputVoice = "sage"
const RealtimeAudioConfigOutputVoiceShimmer RealtimeAudioConfigOutputVoice = "shimmer"
const RealtimeAudioConfigOutputVoiceVerse RealtimeAudioConfigOutputVoice = "verse"
const RealtimeAudioConfigOutputVoiceMarin RealtimeAudioConfigOutputVoice = "marin"
const RealtimeAudioConfigOutputVoiceCedar RealtimeAudioConfigOutputVoice = "cedar"
Include []stringoptional

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

Instructions stringoptional

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

MaxOutputTokens RealtimeSessionCreateRequestMaxOutputTokensUnionoptional

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

Accepts one of the following:
int64
Inf
Model RealtimeSessionCreateRequestModeloptional

The Realtime model used for this session.

Accepts one of the following:
string
RealtimeSessionCreateRequestModel
Accepts one of the following:
const RealtimeSessionCreateRequestModelGPTRealtime RealtimeSessionCreateRequestModel = "gpt-realtime"
const RealtimeSessionCreateRequestModelGPTRealtime2025_08_28 RealtimeSessionCreateRequestModel = "gpt-realtime-2025-08-28"
const RealtimeSessionCreateRequestModelGPT4oRealtimePreview RealtimeSessionCreateRequestModel = "gpt-4o-realtime-preview"
const RealtimeSessionCreateRequestModelGPT4oRealtimePreview2024_10_01 RealtimeSessionCreateRequestModel = "gpt-4o-realtime-preview-2024-10-01"
const RealtimeSessionCreateRequestModelGPT4oRealtimePreview2024_12_17 RealtimeSessionCreateRequestModel = "gpt-4o-realtime-preview-2024-12-17"
const RealtimeSessionCreateRequestModelGPT4oRealtimePreview2025_06_03 RealtimeSessionCreateRequestModel = "gpt-4o-realtime-preview-2025-06-03"
const RealtimeSessionCreateRequestModelGPT4oMiniRealtimePreview RealtimeSessionCreateRequestModel = "gpt-4o-mini-realtime-preview"
const RealtimeSessionCreateRequestModelGPT4oMiniRealtimePreview2024_12_17 RealtimeSessionCreateRequestModel = "gpt-4o-mini-realtime-preview-2024-12-17"
const RealtimeSessionCreateRequestModelGPTRealtimeMini RealtimeSessionCreateRequestModel = "gpt-realtime-mini"
const RealtimeSessionCreateRequestModelGPTRealtimeMini2025_10_06 RealtimeSessionCreateRequestModel = "gpt-realtime-mini-2025-10-06"
const RealtimeSessionCreateRequestModelGPTRealtimeMini2025_12_15 RealtimeSessionCreateRequestModel = "gpt-realtime-mini-2025-12-15"
const RealtimeSessionCreateRequestModelGPTAudioMini RealtimeSessionCreateRequestModel = "gpt-audio-mini"
const RealtimeSessionCreateRequestModelGPTAudioMini2025_10_06 RealtimeSessionCreateRequestModel = "gpt-audio-mini-2025-10-06"
const RealtimeSessionCreateRequestModelGPTAudioMini2025_12_15 RealtimeSessionCreateRequestModel = "gpt-audio-mini-2025-12-15"
OutputModalities []stringoptional

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

Accepts one of the following:
const RealtimeSessionCreateRequestOutputModalityText RealtimeSessionCreateRequestOutputModality = "text"
const RealtimeSessionCreateRequestOutputModalityAudio RealtimeSessionCreateRequestOutputModality = "audio"
Prompt ResponsePromptoptional

Reference to a prompt template and its variables.

ID string

The unique identifier of the prompt template to use.

Variables map[string, ResponsePromptVariableUnion]optional

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

Accepts one of the following:
string
type ResponseInputText struct{…}

A text input to the model.

Text string

The text input to the model.

Type InputText

The type of the input item. Always input_text.

type ResponseInputImage struct{…}

An image input to the model.

Detail ResponseInputImageDetail

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
const ResponseInputImageDetailLow ResponseInputImageDetail = "low"
const ResponseInputImageDetailHigh ResponseInputImageDetail = "high"
const ResponseInputImageDetailAuto ResponseInputImageDetail = "auto"
Type InputImage

The type of the input item. Always input_image.

FileID stringoptional

The ID of the file to be sent to the model.

ImageURL stringoptional

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

type ResponseInputFile struct{…}

A file input to the model.

Type InputFile

The type of the input item. Always input_file.

FileData stringoptional

The content of the file to be sent to the model.

FileID stringoptional

The ID of the file to be sent to the model.

FileURL stringoptional

The URL of the file to be sent to the model.

Filename stringoptional

The name of the file to be sent to the model.

Version stringoptional

Optional version of the prompt template.
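
For illustration, a prompt reference might be passed like the fragment below; the template id, version, and variable name are hypothetical placeholders, and the snake_case keys are assumed from the fields above.

// Hypothetical session.prompt fragment; id, version, and variable names are
// placeholders, and the snake_case keys are assumed.
var prompt = map[string]any{
	"id":      "pmpt_123",
	"version": "2",
	"variables": map[string]any{
		"customer_name": "Alex", // plain string substitution
	},
}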

ToolChoice RealtimeSessionCreateRequestToolChoiceUnionoptional

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

Accepts one of the following:
type ToolChoiceOptions string

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

Accepts one of the following:
const ToolChoiceOptionsNone ToolChoiceOptions = "none"
const ToolChoiceOptionsAuto ToolChoiceOptions = "auto"
const ToolChoiceOptionsRequired ToolChoiceOptions = "required"
type ToolChoiceFunction struct{…}

Use this option to force the model to call a specific function.

Name string

The name of the function to call.

Type Function

For function calling, the type is always function.

type ToolChoiceMcp struct{…}

Use this option to force the model to call a specific tool on a remote MCP server.

ServerLabel string

The label of the MCP server to use.

Type Mcp

For MCP tools, the type is always mcp.

Name stringoptional

The name of the tool to call on the server.

Tools RealtimeToolsConfigoptional

Tools available to the model.

Accepts one of the following:
type RealtimeFunctionTool struct{…}
Description stringoptional

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

Name stringoptional

The name of the function.

Parameters anyoptional

Parameters of the function in JSON Schema.

Type RealtimeFunctionToolTypeoptional

The type of the tool, i.e. function.

RealtimeToolsConfigUnionMcp
ServerLabel string

A label for this MCP server, used to identify it in tool calls.

Type Mcp

The type of the MCP tool. Always mcp.

AllowedTools RealtimeToolsConfigUnionMcpAllowedToolsoptional

List of allowed tool names or a filter object.

Accepts one of the following:
[]string
RealtimeToolsConfigUnionMcpAllowedToolsMcpToolFilter
ReadOnly booloptional

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

ToolNames []stringoptional

List of allowed tool names.

Authorization stringoptional

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

ConnectorID stringoptional

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
const RealtimeToolsConfigUnionMcpConnectorIDConnectorDropbox RealtimeToolsConfigUnionMcpConnectorID = "connector_dropbox"
const RealtimeToolsConfigUnionMcpConnectorIDConnectorGmail RealtimeToolsConfigUnionMcpConnectorID = "connector_gmail"
const RealtimeToolsConfigUnionMcpConnectorIDConnectorGooglecalendar RealtimeToolsConfigUnionMcpConnectorID = "connector_googlecalendar"
const RealtimeToolsConfigUnionMcpConnectorIDConnectorGoogledrive RealtimeToolsConfigUnionMcpConnectorID = "connector_googledrive"
const RealtimeToolsConfigUnionMcpConnectorIDConnectorMicrosoftteams RealtimeToolsConfigUnionMcpConnectorID = "connector_microsoftteams"
const RealtimeToolsConfigUnionMcpConnectorIDConnectorOutlookcalendar RealtimeToolsConfigUnionMcpConnectorID = "connector_outlookcalendar"
const RealtimeToolsConfigUnionMcpConnectorIDConnectorOutlookemail RealtimeToolsConfigUnionMcpConnectorID = "connector_outlookemail"
const RealtimeToolsConfigUnionMcpConnectorIDConnectorSharepoint RealtimeToolsConfigUnionMcpConnectorID = "connector_sharepoint"
Headers map[string, string]optional

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

RequireApproval RealtimeToolsConfigUnionMcpRequireApprovaloptional

Specify which of the MCP server's tools require approval.

Accepts one of the following:
RealtimeToolsConfigUnionMcpRequireApprovalMcpToolApprovalFilter
Always RealtimeToolsConfigUnionMcpRequireApprovalMcpToolApprovalFilterAlwaysoptional

A filter object to specify which tools are allowed.

ReadOnly booloptional

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

ToolNames []stringoptional

List of allowed tool names.

Never RealtimeToolsConfigUnionMcpRequireApprovalMcpToolApprovalFilterNeveroptional

A filter object to specify which tools are allowed.

ReadOnly booloptional

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

ToolNames []stringoptional

List of allowed tool names.

type RealtimeToolsConfigUnionMcpRequireApprovalMcpToolApprovalSetting string

Specify a single approval policy for all tools. One of always or never. When set to always, all tools will require approval. When set to never, all tools will not require approval.

Accepts one of the following:
const RealtimeToolsConfigUnionMcpRequireApprovalMcpToolApprovalSettingAlways RealtimeToolsConfigUnionMcpRequireApprovalMcpToolApprovalSetting = "always"
const RealtimeToolsConfigUnionMcpRequireApprovalMcpToolApprovalSettingNever RealtimeToolsConfigUnionMcpRequireApprovalMcpToolApprovalSetting = "never"
ServerDescription stringoptional

Optional description of the MCP server, used to provide more context.

ServerURL stringoptional

The URL for the MCP server. One of server_url or connector_id must be provided.
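
Putting the MCP fields together, a tools entry might look like the sketch below. The server label, URL, and tool names are illustrative, and the snake_case keys are assumed from the fields above; supply connector_id plus authorization instead of server_url when using a service connector.

// Hypothetical MCP entry for session.tools; label, URL, and tool names are
// illustrative, and the snake_case keys are assumed.
var mcpTool = map[string]any{
	"type":             "mcp",
	"server_label":     "docs-search",
	"server_url":       "https://example.com/mcp",
	"require_approval": "never",
	"allowed_tools":    []string{"search", "fetch"},
}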

Tracing RealtimeTracingConfigUnionoptional

The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

Accepts one of the following:
Auto
RealtimeTracingConfigTracingConfiguration
GroupID stringoptional

The group id to attach to this trace to enable filtering and grouping in the Traces Dashboard.

Metadata anyoptional

The arbitrary metadata to attach to this trace to enable filtering in the Traces Dashboard.

WorkflowName stringoptional

The name of the workflow to attach to this trace. This is used to name the trace in the Traces Dashboard.
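
A granular tracing configuration, as opposed to the string auto, might look like the sketch below; all values are illustrative and the snake_case keys are assumed from the fields above.

// Hypothetical session.tracing fragment; values are illustrative.
var tracing = map[string]any{
	"workflow_name": "support-call",
	"group_id":      "conv-42",
	"metadata":      map[string]any{"env": "staging"},
}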

Truncation RealtimeTruncationUnionoptional

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.

Accepts one of the following:
type RealtimeTruncationRealtimeTruncationStrategy string

The truncation strategy to use for the session. auto is the default truncation strategy. disabled will disable truncation and emit errors when the conversation exceeds the input token limit.

Accepts one of the following:
const RealtimeTruncationRealtimeTruncationStrategyAuto RealtimeTruncationRealtimeTruncationStrategy = "auto"
const RealtimeTruncationRealtimeTruncationStrategyDisabled RealtimeTruncationRealtimeTruncationStrategy = "disabled"
type RealtimeTruncationRetentionRatio struct{…}

Retain a fraction of the conversation tokens when the conversation exceeds the input token limit. This allows you to amortize truncations across multiple turns, which can help improve cached token usage.

RetentionRatio float64

Fraction of post-instruction conversation tokens to retain (0.0 - 1.0) when the conversation exceeds the input token limit. Setting this to 0.8 means that messages will be dropped until 80% of the maximum allowed tokens are used. This helps reduce the frequency of truncations and improve cache rates.

minimum: 0
maximum: 1
Type RetentionRatio

Use retention ratio truncation.

TokenLimits RealtimeTruncationRetentionRatioTokenLimitsoptional

Optional custom token limits for this truncation strategy. If not provided, the model's default token limits will be used.

PostInstructions int64optional

Maximum tokens allowed in the conversation after instructions (which include tool definitions). For example, setting this to 5,000 would mean that truncation would occur when the conversation exceeds 5,000 tokens after instructions. This cannot be higher than the model's context window size minus the maximum output tokens.

minimum: 0
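
As a worked example of retention-ratio truncation: with a post-instructions limit of 20,000 tokens and a retention ratio of 0.8, the server would drop the oldest messages until roughly 16,000 tokens remain, leaving headroom before the next truncation. A hypothetical fragment, with snake_case keys assumed from the fields above:

// Hypothetical session.truncation fragment; keys assumed from the fields above.
var truncation = map[string]any{
	"type":            "retention_ratio",
	"retention_ratio": 0.8, // keep ~80% of the post-instructions limit
	"token_limits": map[string]any{
		"post_instructions": 20000, // custom limit instead of the model default
	},
}
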
type RealtimeTranscriptionSessionCreateRequest struct{…}

Realtime transcription session object configuration.

Type Transcription

The type of session to create. Always transcription for transcription sessions.

Configuration for input and output audio.

Input RealtimeTranscriptionSessionAudioInputoptional

The format of the input audio.

Accepts one of the following:
type RealtimeAudioFormatsAudioPCM struct{…}

The PCM audio format. Only a 24kHz sample rate is supported.

Rate int64optional

The sample rate of the audio. Always 24000.

Type stringoptional

The audio format. Always audio/pcm.

type RealtimeAudioFormatsAudioPCMU struct{…}

The G.711 μ-law format.

Type stringoptional

The audio format. Always audio/pcmu.

type RealtimeAudioFormatsAudioPCMA struct{…}

The G.711 A-law format.

Type stringoptional

The audio format. Always audio/pcma.

NoiseReduction RealtimeTranscriptionSessionAudioInputNoiseReductionoptional

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

Type NoiseReductionTypeoptional

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

Accepts one of the following:
const NoiseReductionTypeNearField NoiseReductionType = "near_field"
const NoiseReductionTypeFarField NoiseReductionType = "far_field"
Transcription AudioTranscriptionoptional

Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

Language stringoptional

The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

Model AudioTranscriptionModeloptional

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

Accepts one of the following:
string
type AudioTranscriptionModel string

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

Accepts one of the following:
const AudioTranscriptionModelWhisper1 AudioTranscriptionModel = "whisper-1"
const AudioTranscriptionModelGPT4oMiniTranscribe AudioTranscriptionModel = "gpt-4o-mini-transcribe"
const AudioTranscriptionModelGPT4oMiniTranscribe2025_12_15 AudioTranscriptionModel = "gpt-4o-mini-transcribe-2025-12-15"
const AudioTranscriptionModelGPT4oTranscribe AudioTranscriptionModel = "gpt-4o-transcribe"
const AudioTranscriptionModelGPT4oTranscribeDiarize AudioTranscriptionModel = "gpt-4o-transcribe-diarize"
Prompt stringoptional

An optional text to guide the model's style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example "expect words related to technology".

TurnDetection RealtimeTranscriptionSessionAudioInputTurnDetectionUnionoptional

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

Accepts one of the following:
RealtimeTranscriptionSessionAudioInputTurnDetectionServerVad
Type ServerVad

Type of turn detection, server_vad to turn on simple Server VAD.

CreateResponse booloptional

Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

IdleTimeoutMs int64optional

Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.

The timeout value will be applied after the last model response's audio has finished playing, i.e. it's set to the response.done time plus audio playback duration.

An input_audio_buffer.timeout_triggered event (plus events associated with the Response) will be emitted when the timeout is reached. Idle timeout is currently only supported for server_vad mode.

minimum: 5000
maximum: 30000
InterruptResponse booloptional

Whether or not to automatically interrupt (cancel) any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

PrefixPaddingMs int64optional

Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

SilenceDurationMs int64optional

Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

Threshold float64optional

Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

RealtimeTranscriptionSessionAudioInputTurnDetectionSemanticVad
Type SemanticVad

Type of turn detection, semantic_vad to turn on Semantic VAD.

CreateResponse booloptional

Whether or not to automatically generate a response when a VAD stop event occurs.

Eagerness stringoptional

Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.

Accepts one of the following:
const RealtimeTranscriptionSessionAudioInputTurnDetectionSemanticVadEagernessLow RealtimeTranscriptionSessionAudioInputTurnDetectionSemanticVadEagerness = "low"
const RealtimeTranscriptionSessionAudioInputTurnDetectionSemanticVadEagernessMedium RealtimeTranscriptionSessionAudioInputTurnDetectionSemanticVadEagerness = "medium"
const RealtimeTranscriptionSessionAudioInputTurnDetectionSemanticVadEagernessHigh RealtimeTranscriptionSessionAudioInputTurnDetectionSemanticVadEagerness = "high"
const RealtimeTranscriptionSessionAudioInputTurnDetectionSemanticVadEagernessAuto RealtimeTranscriptionSessionAudioInputTurnDetectionSemanticVadEagerness = "auto"
InterruptResponse booloptional

Whether or not to automatically interrupt any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs.

Include []stringoptional

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
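
A transcription-session body, to pair with the HTTP call sketched at the top of this page, might look like the fragment below. The nesting (audio.input.format and friends) is inferred from the fields documented above, so verify it against a live request before relying on it.

// Hypothetical request body for a transcription session; the nesting and
// snake_case keys are inferred from the documented fields.
var transcriptionBody = []byte(`{
	"expires_after": {"anchor": "created_at", "seconds": 1200},
	"session": {
		"type": "transcription",
		"audio": {
			"input": {
				"format": {"type": "audio/pcm", "rate": 24000},
				"noise_reduction": {"type": "near_field"},
				"transcription": {"model": "gpt-4o-transcribe", "language": "en"},
				"turn_detection": {"type": "server_vad"}
			}
		}
	}
}`)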

Returns
type ClientSecretNewResponse struct{…}

Response from creating a session and client secret for the Realtime API.

ExpiresAt int64

Expiration timestamp for the client secret, in seconds since epoch.

Session ClientSecretNewResponseSessionUnion

The session configuration for either a realtime or transcription session.

Accepts one of the following:
type RealtimeSessionCreateResponse struct{…}

A new Realtime session configuration, with an ephemeral key. Default TTL for keys is one minute.

ClientSecret RealtimeSessionCreateResponseClientSecret

Ephemeral key returned by the API.

ExpiresAt int64

Timestamp for when the token expires. Currently, all tokens expire after one minute.

Value string

Ephemeral key usable in client environments to authenticate connections to the Realtime API. Use this in client-side environments rather than a standard API token, which should only be used server-side.
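
To hand the ephemeral key to a browser or mobile client, the server can decode just the fields it needs. The sketch below assumes the nested secret serializes as session.client_secret with snake_case keys, which is an inference from this page; adjust it to the payload you actually observe.

// Sketch (uses encoding/json): extract the ephemeral key from the create
// response. The session.client_secret nesting and snake_case keys are
// assumptions inferred from this page.
func extractClientSecret(raw []byte) (value string, expiresAt int64, err error) {
	var resp struct {
		Session struct {
			ClientSecret struct {
				Value     string `json:"value"`
				ExpiresAt int64  `json:"expires_at"`
			} `json:"client_secret"`
		} `json:"session"`
	}
	if err = json.Unmarshal(raw, &resp); err != nil {
		return "", 0, err
	}
	// Ship only the value to the client; it expires within minutes and stands
	// in for the long-lived API key when connecting to the Realtime API.
	return resp.Session.ClientSecret.Value, resp.Session.ClientSecret.ExpiresAt, nil
}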

Type Realtime

The type of session to create. Always realtime for the Realtime API.

Audio RealtimeSessionCreateResponseAudiooptional

Configuration for input and output audio.

Input RealtimeSessionCreateResponseAudioInputoptional

The format of the input audio.

Accepts one of the following:
type RealtimeAudioFormatsAudioPCM struct{…}

The PCM audio format. Only a 24kHz sample rate is supported.

Rate int64optional

The sample rate of the audio. Always 24000.

Type stringoptional

The audio format. Always audio/pcm.

type RealtimeAudioFormatsAudioPCMU struct{…}

The G.711 μ-law format.

Type stringoptional

The audio format. Always audio/pcmu.

type RealtimeAudioFormatsAudioPCMA struct{…}

The G.711 A-law format.

Type stringoptional

The audio format. Always audio/pcma.

NoiseReduction RealtimeSessionCreateResponseAudioInputNoiseReductionoptional

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

Type NoiseReductionTypeoptional

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

Accepts one of the following:
const NoiseReductionTypeNearField NoiseReductionType = "near_field"
const NoiseReductionTypeFarField NoiseReductionType = "far_field"
Transcription AudioTranscriptionoptional

Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

Language stringoptional

The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

Model AudioTranscriptionModeloptional

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

Accepts one of the following:
string
type AudioTranscriptionModel string

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

Accepts one of the following:
const AudioTranscriptionModelWhisper1 AudioTranscriptionModel = "whisper-1"
const AudioTranscriptionModelGPT4oMiniTranscribe AudioTranscriptionModel = "gpt-4o-mini-transcribe"
const AudioTranscriptionModelGPT4oMiniTranscribe2025_12_15 AudioTranscriptionModel = "gpt-4o-mini-transcribe-2025-12-15"
const AudioTranscriptionModelGPT4oTranscribe AudioTranscriptionModel = "gpt-4o-transcribe"
const AudioTranscriptionModelGPT4oTranscribeDiarize AudioTranscriptionModel = "gpt-4o-transcribe-diarize"
Prompt stringoptional

An optional text to guide the model's style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example "expect words related to technology".

TurnDetection RealtimeSessionCreateResponseAudioInputTurnDetectionUnionoptional

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

Accepts one of the following:
type RealtimeSessionCreateResponseAudioInputTurnDetectionServerVad struct{…}

Server-side voice activity detection (VAD) which flips on when user speech is detected and off after a period of silence.

Type ServerVad

Type of turn detection, server_vad to turn on simple Server VAD.

CreateResponse booloptional

Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

IdleTimeoutMs int64optional

Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.

The timeout value will be applied after the last model response's audio has finished playing, i.e. it's set to the response.done time plus audio playback duration.

An input_audio_buffer.timeout_triggered event (plus events associated with the Response) will be emitted when the timeout is reached. Idle timeout is currently only supported for server_vad mode.

minimum: 5000
maximum: 30000
InterruptResponse booloptional

Whether or not to automatically interrupt (cancel) any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

PrefixPaddingMs int64optional

Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

SilenceDurationMs int64optional

Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

Threshold float64optional

Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

type RealtimeSessionCreateResponseAudioInputTurnDetectionSemanticVad struct{…}

Server-side semantic turn detection which uses a model to determine when the user has finished speaking.

Type SemanticVad

Type of turn detection, semantic_vad to turn on Semantic VAD.

CreateResponse booloptional

Whether or not to automatically generate a response when a VAD stop event occurs.

Eagerness stringoptional

Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.

Accepts one of the following:
const RealtimeSessionCreateResponseAudioInputTurnDetectionSemanticVadEagernessLow RealtimeSessionCreateResponseAudioInputTurnDetectionSemanticVadEagerness = "low"
const RealtimeSessionCreateResponseAudioInputTurnDetectionSemanticVadEagernessMedium RealtimeSessionCreateResponseAudioInputTurnDetectionSemanticVadEagerness = "medium"
const RealtimeSessionCreateResponseAudioInputTurnDetectionSemanticVadEagernessHigh RealtimeSessionCreateResponseAudioInputTurnDetectionSemanticVadEagerness = "high"
const RealtimeSessionCreateResponseAudioInputTurnDetectionSemanticVadEagernessAuto RealtimeSessionCreateResponseAudioInputTurnDetectionSemanticVadEagerness = "auto"
InterruptResponse booloptional

Whether or not to automatically interrupt any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs.

Output RealtimeSessionCreateResponseAudioOutputoptional

The format of the output audio.

Accepts one of the following:
type RealtimeAudioFormatsAudioPCM struct{…}

The PCM audio format. Only a 24kHz sample rate is supported.

Rate int64optional

The sample rate of the audio. Always 24000.

Type stringoptional

The audio format. Always audio/pcm.

type RealtimeAudioFormatsAudioPCMU struct{…}

The G.711 μ-law format.

Type stringoptional

The audio format. Always audio/pcmu.

type RealtimeAudioFormatsAudioPCMA struct{…}

The G.711 A-law format.

Type stringoptional

The audio format. Always audio/pcma.

Speed float64optional

The speed of the model's spoken response as a multiple of the original speed. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.

This parameter is a post-processing adjustment to the audio after it is generated; it's also possible to prompt the model to speak faster or slower.

maximum: 1.5
minimum: 0.25
Voice stringoptional

The voice the model uses to respond. Voice cannot be changed during the session once the model has responded with audio at least once. Current voice options are alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, and cedar. We recommend marin and cedar for best quality.

Accepts one of the following:
string
RealtimeSessionCreateResponseAudioOutputVoice
Accepts one of the following:
const RealtimeSessionCreateResponseAudioOutputVoiceAlloy RealtimeSessionCreateResponseAudioOutputVoice = "alloy"
const RealtimeSessionCreateResponseAudioOutputVoiceAsh RealtimeSessionCreateResponseAudioOutputVoice = "ash"
const RealtimeSessionCreateResponseAudioOutputVoiceBallad RealtimeSessionCreateResponseAudioOutputVoice = "ballad"
const RealtimeSessionCreateResponseAudioOutputVoiceCoral RealtimeSessionCreateResponseAudioOutputVoice = "coral"
const RealtimeSessionCreateResponseAudioOutputVoiceEcho RealtimeSessionCreateResponseAudioOutputVoice = "echo"
const RealtimeSessionCreateResponseAudioOutputVoiceSage RealtimeSessionCreateResponseAudioOutputVoice = "sage"
const RealtimeSessionCreateResponseAudioOutputVoiceShimmer RealtimeSessionCreateResponseAudioOutputVoice = "shimmer"
const RealtimeSessionCreateResponseAudioOutputVoiceVerse RealtimeSessionCreateResponseAudioOutputVoice = "verse"
const RealtimeSessionCreateResponseAudioOutputVoiceMarin RealtimeSessionCreateResponseAudioOutputVoice = "marin"
const RealtimeSessionCreateResponseAudioOutputVoiceCedar RealtimeSessionCreateResponseAudioOutputVoice = "cedar"
Include []stringoptional

Additional fields to include in server outputs.

item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

Instructions stringoptional

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

MaxOutputTokens RealtimeSessionCreateResponseMaxOutputTokensUnionoptional

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

Accepts one of the following:
int64
type Inf string
Model RealtimeSessionCreateResponseModeloptional

The Realtime model used for this session.

Accepts one of the following:
string
type RealtimeSessionCreateResponseModel string

The Realtime model used for this session.

Accepts one of the following:
const RealtimeSessionCreateResponseModelGPTRealtime RealtimeSessionCreateResponseModel = "gpt-realtime"
const RealtimeSessionCreateResponseModelGPTRealtime2025_08_28 RealtimeSessionCreateResponseModel = "gpt-realtime-2025-08-28"
const RealtimeSessionCreateResponseModelGPT4oRealtimePreview RealtimeSessionCreateResponseModel = "gpt-4o-realtime-preview"
const RealtimeSessionCreateResponseModelGPT4oRealtimePreview2024_10_01 RealtimeSessionCreateResponseModel = "gpt-4o-realtime-preview-2024-10-01"
const RealtimeSessionCreateResponseModelGPT4oRealtimePreview2024_12_17 RealtimeSessionCreateResponseModel = "gpt-4o-realtime-preview-2024-12-17"
const RealtimeSessionCreateResponseModelGPT4oRealtimePreview2025_06_03 RealtimeSessionCreateResponseModel = "gpt-4o-realtime-preview-2025-06-03"
const RealtimeSessionCreateResponseModelGPT4oMiniRealtimePreview RealtimeSessionCreateResponseModel = "gpt-4o-mini-realtime-preview"
const RealtimeSessionCreateResponseModelGPT4oMiniRealtimePreview2024_12_17 RealtimeSessionCreateResponseModel = "gpt-4o-mini-realtime-preview-2024-12-17"
const RealtimeSessionCreateResponseModelGPTRealtimeMini RealtimeSessionCreateResponseModel = "gpt-realtime-mini"
const RealtimeSessionCreateResponseModelGPTRealtimeMini2025_10_06 RealtimeSessionCreateResponseModel = "gpt-realtime-mini-2025-10-06"
const RealtimeSessionCreateResponseModelGPTRealtimeMini2025_12_15 RealtimeSessionCreateResponseModel = "gpt-realtime-mini-2025-12-15"
const RealtimeSessionCreateResponseModelGPTAudioMini RealtimeSessionCreateResponseModel = "gpt-audio-mini"
const RealtimeSessionCreateResponseModelGPTAudioMini2025_10_06 RealtimeSessionCreateResponseModel = "gpt-audio-mini-2025-10-06"
const RealtimeSessionCreateResponseModelGPTAudioMini2025_12_15 RealtimeSessionCreateResponseModel = "gpt-audio-mini-2025-12-15"
OutputModalities []stringoptional

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

Accepts one of the following:
const RealtimeSessionCreateResponseOutputModalityText RealtimeSessionCreateResponseOutputModality = "text"
const RealtimeSessionCreateResponseOutputModalityAudio RealtimeSessionCreateResponseOutputModality = "audio"
Prompt ResponsePromptoptional

Reference to a prompt template and its variables.

ID string

The unique identifier of the prompt template to use.

Variables map[string, ResponsePromptVariableUnion]optional

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.

Accepts one of the following:
string
type ResponseInputText struct{…}

A text input to the model.

Text string

The text input to the model.

Type InputText

The type of the input item. Always input_text.

type ResponseInputImage struct{…}

An image input to the model.

Detail ResponseInputImageDetail

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
const ResponseInputImageDetailLow ResponseInputImageDetail = "low"
const ResponseInputImageDetailHigh ResponseInputImageDetail = "high"
const ResponseInputImageDetailAuto ResponseInputImageDetail = "auto"
Type InputImage

The type of the input item. Always input_image.

FileID stringoptional

The ID of the file to be sent to the model.

ImageURL stringoptional

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

type ResponseInputFile struct{…}

A file input to the model.

Type InputFile

The type of the input item. Always input_file.

FileData stringoptional

The content of the file to be sent to the model.

FileID stringoptional

The ID of the file to be sent to the model.

FileURL stringoptional

The URL of the file to be sent to the model.

Filename stringoptional

The name of the file to be sent to the model.

Version stringoptional

Optional version of the prompt template.

ToolChoice RealtimeSessionCreateResponseToolChoiceUnionoptional

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

Accepts one of the following:
type ToolChoiceOptions string

Controls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or more tools.

required means the model must call one or more tools.

Accepts one of the following:
const ToolChoiceOptionsNone ToolChoiceOptions = "none"
const ToolChoiceOptionsAuto ToolChoiceOptions = "auto"
const ToolChoiceOptionsRequired ToolChoiceOptions = "required"
type ToolChoiceFunction struct{…}

Use this option to force the model to call a specific function.

Name string

The name of the function to call.

Type Function

For function calling, the type is always function.

type ToolChoiceMcp struct{…}

Use this option to force the model to call a specific tool on a remote MCP server.

ServerLabel string

The label of the MCP server to use.

Type Mcp

For MCP tools, the type is always mcp.

Name stringoptional

The name of the tool to call on the server.

Tools []RealtimeSessionCreateResponseToolUnionoptional

Tools available to the model.

Accepts one of the following:
type RealtimeFunctionTool struct{…}
Description stringoptional

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

Name stringoptional

The name of the function.

Parameters anyoptional

Parameters of the function in JSON Schema.

Type RealtimeFunctionToolTypeoptional

The type of the tool, i.e. function.

type RealtimeSessionCreateResponseToolMcpTool struct{…}

Give the model access to additional tools via remote Model Context Protocol (MCP) servers.

ServerLabel string

A label for this MCP server, used to identify it in tool calls.

Type Mcp

The type of the MCP tool. Always mcp.

AllowedTools RealtimeSessionCreateResponseToolMcpToolAllowedToolsUnionoptional

List of allowed tool names or a filter object.

Accepts one of the following:
type RealtimeSessionCreateResponseToolMcpToolAllowedToolsMcpAllowedTools []string

A string array of allowed tool names.

type RealtimeSessionCreateResponseToolMcpToolAllowedToolsMcpToolFilter struct{…}

A filter object to specify which tools are allowed.

ReadOnly booloptional

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

ToolNames []stringoptional

List of allowed tool names.

Authorization stringoptional

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

ConnectorID stringoptional

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
const RealtimeSessionCreateResponseToolMcpToolConnectorIDConnectorDropbox RealtimeSessionCreateResponseToolMcpToolConnectorID = "connector_dropbox"
const RealtimeSessionCreateResponseToolMcpToolConnectorIDConnectorGmail RealtimeSessionCreateResponseToolMcpToolConnectorID = "connector_gmail"
const RealtimeSessionCreateResponseToolMcpToolConnectorIDConnectorGooglecalendar RealtimeSessionCreateResponseToolMcpToolConnectorID = "connector_googlecalendar"
const RealtimeSessionCreateResponseToolMcpToolConnectorIDConnectorGoogledrive RealtimeSessionCreateResponseToolMcpToolConnectorID = "connector_googledrive"
const RealtimeSessionCreateResponseToolMcpToolConnectorIDConnectorMicrosoftteams RealtimeSessionCreateResponseToolMcpToolConnectorID = "connector_microsoftteams"
const RealtimeSessionCreateResponseToolMcpToolConnectorIDConnectorOutlookcalendar RealtimeSessionCreateResponseToolMcpToolConnectorID = "connector_outlookcalendar"
const RealtimeSessionCreateResponseToolMcpToolConnectorIDConnectorOutlookemail RealtimeSessionCreateResponseToolMcpToolConnectorID = "connector_outlookemail"
const RealtimeSessionCreateResponseToolMcpToolConnectorIDConnectorSharepoint RealtimeSessionCreateResponseToolMcpToolConnectorID = "connector_sharepoint"
Headers map[string]stringoptional

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

RequireApproval RealtimeSessionCreateResponseToolMcpToolRequireApprovalUnionoptional

Specify which of the MCP server's tools require approval.

Accepts one of the following:
type RealtimeSessionCreateResponseToolMcpToolRequireApprovalMcpToolApprovalFilter struct{…}

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

Always RealtimeSessionCreateResponseToolMcpToolRequireApprovalMcpToolApprovalFilterAlwaysoptional

A filter object specifying which tools always require approval.

ReadOnly booloptional

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

ToolNames []stringoptional

List of allowed tool names.

Never RealtimeSessionCreateResponseToolMcpToolRequireApprovalMcpToolApprovalFilterNeveroptional

A filter object specifying which tools never require approval.

ReadOnly booloptional

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

ToolNames []stringoptional

List of allowed tool names.

type RealtimeSessionCreateResponseToolMcpToolRequireApprovalMcpToolApprovalSetting string

Specify a single approval policy for all tools. One of always or never. When set to always, all tools will require approval. When set to never, all tools will not require approval.

Accepts one of the following:
const RealtimeSessionCreateResponseToolMcpToolRequireApprovalMcpToolApprovalSettingAlways RealtimeSessionCreateResponseToolMcpToolRequireApprovalMcpToolApprovalSetting = "always"
const RealtimeSessionCreateResponseToolMcpToolRequireApprovalMcpToolApprovalSettingNever RealtimeSessionCreateResponseToolMcpToolRequireApprovalMcpToolApprovalSetting = "never"
ServerDescription stringoptional

Optional description of the MCP server, used to provide more context.

ServerURL stringoptional

The URL for the MCP server. One of server_url or connector_id must be provided.
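To make these shapes concrete, here is a hedged sketch of a tools array containing one function tool and one MCP tool. The tool names, server label, and URL are illustrative, and the snake_case wire names are inferred from the fields above:

{
  "tools": [
    {
      "type": "function",
      "name": "get_weather",
      "description": "Look up the current weather for a city. Tell the user you are checking before calling.",
      "parameters": {
        "type": "object",
        "properties": {
          "city": { "type": "string" }
        },
        "required": ["city"]
      }
    },
    {
      "type": "mcp",
      "server_label": "docs",
      "server_url": "https://example.com/mcp",
      "allowed_tools": { "read_only": true },
      "require_approval": "never"
    }
  ]
}

A service-connector variant would replace server_url with one of the connector_id values listed above, plus an authorization token obtained through your application's OAuth flow.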

Tracing RealtimeSessionCreateResponseTracingUnionoptional

The Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

Accepts one of the following:
type Auto string

Enables tracing and sets default values for tracing configuration options. Always auto.

type RealtimeSessionCreateResponseTracingTracingConfiguration struct{…}

Granular configuration for tracing.

GroupID stringoptional

The group id to attach to this trace to enable filtering and grouping in the Traces Dashboard.

Metadata anyoptional

The arbitrary metadata to attach to this trace to enable filtering in the Traces Dashboard.

WorkflowName stringoptional

The name of the workflow to attach to this trace. This is used to name the trace in the Traces Dashboard.
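As a sketch, tracing can be enabled with the string auto, or with a granular configuration object such as the following; the workflow name, group id, and metadata values are illustrative:

{
  "tracing": {
    "workflow_name": "support_voice_agent",
    "group_id": "group_123",
    "metadata": { "env": "staging" }
  }
}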

Truncation RealtimeTruncationUnionoptional

When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.

Accepts one of the following:
type RealtimeTruncationRealtimeTruncationStrategy string

The truncation strategy to use for the session. auto is the default truncation strategy. disabled will disable truncation and emit errors when the conversation exceeds the input token limit.

Accepts one of the following:
const RealtimeTruncationRealtimeTruncationStrategyAuto RealtimeTruncationRealtimeTruncationStrategy = "auto"
const RealtimeTruncationRealtimeTruncationStrategyDisabled RealtimeTruncationRealtimeTruncationStrategy = "disabled"
type RealtimeTruncationRetentionRatio struct{…}

Retain a fraction of the conversation tokens when the conversation exceeds the input token limit. This allows you to amortize truncations across multiple turns, which can help improve cached token usage.

RetentionRatio float64

Fraction of post-instruction conversation tokens to retain (0.0 - 1.0) when the conversation exceeds the input token limit. Setting this to 0.8 means that messages will be dropped until 80% of the maximum allowed tokens are used. This helps reduce the frequency of truncations and improve cache rates.

minimum0
maximum1
Type RetentionRatio

Use retention ratio truncation.

TokenLimits RealtimeTruncationRetentionRatioTokenLimitsoptional

Optional custom token limits for this truncation strategy. If not provided, the model's default token limits will be used.

PostInstructions int64optional

Maximum tokens allowed in the conversation after instructions (which include tool definitions). For example, setting this to 5,000 would mean that truncation would occur when the conversation exceeds 5,000 tokens after instructions. This cannot be higher than the model's context window size minus the maximum output tokens.

minimum0
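For example, a hedged sketch of a retention-ratio strategy: once the post-instruction conversation exceeds 5,000 tokens, the oldest messages are dropped until roughly 80% of that budget remains. The snake_case wire names are inferred from the fields above:

{
  "truncation": {
    "type": "retention_ratio",
    "retention_ratio": 0.8,
    "token_limits": {
      "post_instructions": 5000
    }
  }
}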
type RealtimeTranscriptionSessionCreateResponse struct{…}

A Realtime transcription session configuration object.

ID string

Unique identifier for the session that looks like sess_1234567890abcdef.

Object string

The object type. Always realtime.transcription_session.

Type Transcription

The type of session. Always transcription for transcription sessions.

Audio RealtimeTranscriptionSessionCreateResponseAudiooptional

Configuration for input audio for the session.

Input RealtimeTranscriptionSessionCreateResponseAudioInputoptional

The format of the input audio.

Accepts one of the following:
type RealtimeAudioFormatsAudioPCM struct{…}

The PCM audio format. Only a 24kHz sample rate is supported.

Rate int64optional

The sample rate of the audio. Always 24000.

Type stringoptional

The audio format. Always audio/pcm.

type RealtimeAudioFormatsAudioPCMU struct{…}

The G.711 μ-law format.

Type stringoptional

The audio format. Always audio/pcmu.

type RealtimeAudioFormatsAudioPCMA struct{…}

The G.711 A-law format.

Type stringoptional

The audio format. Always audio/pcma.

NoiseReduction RealtimeTranscriptionSessionCreateResponseAudioInputNoiseReductionoptional

Configuration for input audio noise reduction.

Type NoiseReductionTypeoptional

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

Accepts one of the following:
const NoiseReductionTypeNearField NoiseReductionType = "near_field"
const NoiseReductionTypeFarField NoiseReductionType = "far_field"
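For instance, a session capturing audio from a conference-room microphone would select the far-field filter:

{
  "noise_reduction": {
    "type": "far_field"
  }
}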
Transcription AudioTranscriptionoptional

Configuration of the transcription model.

Language stringoptional

The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

Model AudioTranscriptionModeloptional

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

Accepts one of the following:
string
type AudioTranscriptionModel string

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

Accepts one of the following:
const AudioTranscriptionModelWhisper1 AudioTranscriptionModel = "whisper-1"
const AudioTranscriptionModelGPT4oMiniTranscribe AudioTranscriptionModel = "gpt-4o-mini-transcribe"
const AudioTranscriptionModelGPT4oMiniTranscribe2025_12_15 AudioTranscriptionModel = "gpt-4o-mini-transcribe-2025-12-15"
const AudioTranscriptionModelGPT4oTranscribe AudioTranscriptionModel = "gpt-4o-transcribe"
const AudioTranscriptionModelGPT4oTranscribeDiarize AudioTranscriptionModel = "gpt-4o-transcribe-diarize"
Prompt stringoptional

An optional text to guide the model's style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example "expect words related to technology".
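As an illustration, a transcription configuration pinning the model, the input language, and a guidance prompt might look like this; the values are illustrative:

{
  "transcription": {
    "model": "gpt-4o-transcribe",
    "language": "en",
    "prompt": "expect words related to technology"
  }
}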

TurnDetection RealtimeTranscriptionSessionCreateResponseAudioInputTurnDetectionoptional

Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

PrefixPaddingMs int64optional

Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

SilenceDurationMs int64optional

Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

Threshold float64optional

Activation threshold for VAD (0.0 to 1.0), this defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

Type stringoptional

Type of turn detection. Only server_vad is currently supported.
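For example, a server VAD configuration tuned for a noisier environment, with a higher activation threshold and a longer silence window so the model is slower to jump in, might look like this sketch; the values are illustrative starting points rather than recommendations:

{
  "turn_detection": {
    "type": "server_vad",
    "threshold": 0.7,
    "prefix_padding_ms": 300,
    "silence_duration_ms": 800
  }
}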

ExpiresAt int64optional

Expiration timestamp for the session, in seconds since epoch.

Include []stringoptional

Additional fields to include in server outputs.

  • item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
Value string

The generated client secret value.

Create client secret

package main

import (
  "context"
  "fmt"

  "github.com/openai/openai-go"
  "github.com/openai/openai-go/option"
  "github.com/openai/openai-go/realtime"
)

func main() {
  // Construct a client; in a real application the API key would come from the environment.
  client := openai.NewClient(
    option.WithAPIKey("My API Key"),
  )
  // Create a client secret with default expiration and session configuration.
  clientSecret, err := client.Realtime.ClientSecrets.New(context.TODO(), realtime.ClientSecretNewParams{})
  if err != nil {
    panic(err.Error())
  }
  fmt.Printf("%+v\n", clientSecret.ExpiresAt)
}
Returns
{
  "expires_at": 0,
  "session": {
    "client_secret": {
      "expires_at": 0,
      "value": "value"
    },
    "type": "realtime",
    "audio": {
      "input": {
        "format": {
          "rate": 24000,
          "type": "audio/pcm"
        },
        "noise_reduction": {
          "type": "near_field"
        },
        "transcription": {
          "language": "language",
          "model": "string",
          "prompt": "prompt"
        },
        "turn_detection": {
          "type": "server_vad",
          "create_response": true,
          "idle_timeout_ms": 5000,
          "interrupt_response": true,
          "prefix_padding_ms": 0,
          "silence_duration_ms": 0,
          "threshold": 0
        }
      },
      "output": {
        "format": {
          "rate": 24000,
          "type": "audio/pcm"
        },
        "speed": 0.25,
        "voice": "ash"
      }
    },
    "include": [
      "item.input_audio_transcription.logprobs"
    ],
    "instructions": "instructions",
    "max_output_tokens": 0,
    "model": "string",
    "output_modalities": [
      "text"
    ],
    "prompt": {
      "id": "id",
      "variables": {
        "foo": "string"
      },
      "version": "version"
    },
    "tool_choice": "none",
    "tools": [
      {
        "description": "description",
        "name": "name",
        "parameters": {},
        "type": "function"
      }
    ],
    "tracing": "auto",
    "truncation": "auto"
  },
  "value": "value"
}