Realtime

Realtime Client Secrets

resource openai_realtime_client_secret

optional
expires_after?: Attributes

Configuration for the client secret expiration. Expiration refers to the time after which a client secret will no longer be valid for creating sessions. The session itself may continue after that time once started. A secret can be used to create multiple sessions until it expires.

anchor?: String

The anchor point for the client secret expiration, meaning that seconds will be added to the created_at time of the client secret to produce an expiration timestamp. Only created_at is currently supported.

seconds?: Int64

The number of seconds from the anchor point to the expiration. Select a value between 10 and 7200 (2 hours). This defaults to 600 seconds (10 minutes) if not specified.
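
For instance, a secret that remains valid for one hour (resource name illustrative):

resource "openai_realtime_client_secret" "one_hour_secret" {
  expires_after = {
    anchor  = "created_at" # only supported anchor
    seconds = 3600         # between 10 and 7200; defaults to 600
  }
}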

session?: Attributes

Session configuration to use for the client secret. Choose either a realtime session or a transcription session.

type: String

The type of session to create. Always realtime for the Realtime API.

audio?: Attributes

Configuration for input and output audio.

input?: Attributes
format?: Attributes

The format of the input audio.

rate?: Int64

The sample rate of the audio. Always 24000.

type?: String

The audio format. Always audio/pcm.

noise_reduction?: Attributes

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

type?: String

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

transcription?: Attributes

Configuration for input audio transcription. Defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

language?: String

The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

model?: String

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

prompt?: String

An optional text to guide the model’s style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example “expect words related to technology”.

turn_detection?: Attributes

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model responses.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with “uhhm”, the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

type?: String

Type of turn detection; set to server_vad for simple Server VAD or semantic_vad for Semantic VAD.

create_response?: Bool

Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

idle_timeout_ms?: Int64

Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.

The timeout value will be applied after the last model response’s audio has finished playing, i.e. it’s set to the response.done time plus audio playback duration.

An input_audio_buffer.timeout_triggered event (plus events associated with the Response) will be emitted when the timeout is reached. Idle timeout is currently only supported for server_vad mode.

interrupt_response?: Bool

Whether or not to automatically interrupt (cancel) any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

prefix_padding_ms?: Int64

Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: Int64

Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: Float64

Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

eagerness?: String

Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.
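
The full example below configures server_vad; a minimal sketch of the semantic_vad variant, assuming the same attribute shape, would be:

turn_detection = {
  type               = "semantic_vad"
  eagerness          = "low" # low | medium | high | auto (auto = medium)
  create_response    = true
  interrupt_response = true
}

Note that idle_timeout_ms, prefix_padding_ms, silence_duration_ms, and threshold apply only to server_vad mode and are omitted here.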

output?: Attributes
format?: Attributes

The format of the output audio.

rate?: Int64

The sample rate of the audio. Always 24000.

type?: String

The audio format. Always audio/pcm.

speed?: Float64

The speed of the model’s spoken response as a multiple of the original speed. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.

This parameter is a post-processing adjustment to the audio after it is generated; it’s also possible to prompt the model to speak faster or slower.

voice?: String

The voice the model uses to respond. Supported built-in voices are alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, and cedar. You may also provide a custom voice object with an id, for example { "id": "voice_1234" }. Voice cannot be changed during the session once the model has responded with audio at least once. We recommend marin and cedar for best quality.

include?: List[String]

Additional fields to include in server outputs.

  • item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

instructions?: String

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. “be extremely succinct”, “act friendly”, “here are examples of good responses”) and on audio behavior (e.g. “talk quickly”, “inject emotion into your voice”, “laugh frequently”). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: Dynamic Int64 | String

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.
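
Because this field is a dynamic union, both an integer cap and the inf sentinel are valid in HCL:

max_output_tokens = 4096    # integer between 1 and 4096
# max_output_tokens = "inf" # uncapped (the default)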

model?: String

The Realtime model used for this session.

output_modalities?: List[String]

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

prompt?: Attributes

Reference to a prompt template and its variables.

id: String

The unique identifier of the prompt template to use.

variables?: Map[String]

Optional map of values to substitute in for variables in your prompt. The substitution values can be either strings or other Response input types like images or files.

version?: String

Optional version of the prompt template.

tool_choice?: String

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.
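
This schema types tool_choice as a plain string; the string modes, matching those documented for openai_realtime_session later in this page (auto, none, required), look like the sketch below. Forcing a specific function or MCP tool requires an object form not shown here.

tool_choice = "auto"       # let the model decide
# tool_choice = "none"     # never call tools
# tool_choice = "required" # require some tool call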

tools?: List[Attributes]

Tools available to the model.

description?: String

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

name?: String

The name of the function.

parameters?: JSON

Parameters of the function in JSON Schema.

type?: String

The type of the tool, i.e. function.

server_label?: String

A label for this MCP server, used to identify it in tool calls.

allowed_tools?: List[String]

List of allowed tool names or a filter object.

authorization?: String

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: String

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
defer_loading?: Bool

Whether this MCP tool is deferred and discovered via tool search.

headers?: Map[String]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: Attributes

Specify which of the MCP server’s tools require approval.

always?: Attributes

A filter object to specify which tools are allowed.

read_only?: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: List[String]

List of allowed tool names.

never?: Attributes

A filter object to specify which tools are allowed.

read_only?: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: List[String]

List of allowed tool names.

server_description?: String

Optional description of the MCP server, used to provide more context.

server_url?: String

The URL for the MCP server. One of server_url or connector_id must be provided.
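
The example below includes only a function tool; a hypothetical MCP tool entry built from the attributes above (the type literal "mcp" is assumed, and the label, URL, and token variable are illustrative) might look like:

tools = [{
  type          = "mcp"                     # assumed literal for MCP tools
  server_label  = "docs"                    # identifies the server in tool calls
  server_url    = "https://example.com/mcp" # or set connector_id instead
  authorization = var.mcp_oauth_token       # hypothetical variable holding an OAuth token
  require_approval = {
    never = {
      read_only = true # skip approval for tools annotated readOnlyHint
    }
  }
}]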

tracing?: String

Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

truncation?: String

When the number of tokens in a conversation exceeds the model’s input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model’s context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model’s input token limit.
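
In this schema truncation is a string; auto (used in the example below) enables default truncation behavior, and, assuming the API accepts a disabled literal for turning truncation off entirely:

truncation = "auto"       # default truncation behavior
# truncation = "disabled" # never truncate; error once the context limit is exceeded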

computed
expires_at: Int64

Expiration timestamp for the client secret, in seconds since epoch.

value: String

The generated client secret value.

openai_realtime_client_secret

resource "openai_realtime_client_secret" "example_realtime_client_secret" {
  expires_after = {
    anchor = "created_at"
    seconds = 10
  }
  session = {
    type = "realtime"
    audio = {
      input = {
        format = {
          rate = 24000
          type = "audio/pcm"
        }
        noise_reduction = {
          type = "near_field"
        }
        transcription = {
          language = "language"
          model = "string"
          prompt = "prompt"
        }
        turn_detection = {
          type = "server_vad"
          create_response = true
          idle_timeout_ms = 5000
          interrupt_response = true
          prefix_padding_ms = 0
          silence_duration_ms = 0
          threshold = 0
        }
      }
      output = {
        format = {
          rate = 24000
          type = "audio/pcm"
        }
        speed = 0.25
        voice = "string"
      }
    }
    include = ["item.input_audio_transcription.logprobs"]
    instructions = "instructions"
    max_output_tokens = 0
    model = "string"
    output_modalities = ["text"]
    prompt = {
      id = "id"
      variables = {
        foo = "string"
      }
      version = "version"
    }
    tool_choice = "none"
    tools = [{
      description = "description"
      name = "name"
      parameters = {

      }
      type = "function"
    }]
    tracing = "auto"
    truncation = "auto"
  }
}
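
The computed value attribute holds the generated secret; a minimal sketch of surfacing it to callers as a sensitive Terraform output (output name illustrative):

output "realtime_client_secret" {
  value     = openai_realtime_client_secret.example_realtime_client_secret.value
  sensitive = true # short-lived, but still keep it out of plaintext logs
}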

Realtime Calls

resource openai_realtime_call

required
sdp: String

WebRTC Session Description Protocol (SDP) offer generated by the caller.

optional
session?: Attributes

Realtime session object configuration.

type: String

The type of session to create. Always realtime for the Realtime API.

audio?: Attributes

Configuration for input and output audio.

input?: Attributes
format?: Attributes

The format of the input audio.

rate?: Int64

The sample rate of the audio. Always 24000.

type?: String

The audio format. Always audio/pcm.

noise_reduction?: Attributes

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

type?: String

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

transcription?: Attributes

Configuration for input audio transcription. Defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance of input audio content rather than precisely what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

language?: String

The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

model?: String

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

prompt?: String

An optional text to guide the model’s style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example “expect words related to technology”.

turn_detection?: Attributes

Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model responses.

Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with “uhhm”, the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have a higher latency.

type?: String

Type of turn detection; set to server_vad for simple Server VAD or semantic_vad for Semantic VAD.

create_response?: Bool

Whether or not to automatically generate a response when a VAD stop event occurs. If interrupt_response is set to false this may fail to create a response if the model is already responding.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

idle_timeout_ms?: Int64

Optional timeout after which a model response will be triggered automatically. This is useful for situations in which a long pause from the user is unexpected, such as a phone call. The model will effectively prompt the user to continue the conversation based on the current context.

The timeout value will be applied after the last model response’s audio has finished playing, i.e. it’s set to the response.done time plus audio playback duration.

An input_audio_buffer.timeout_triggered event (plus events associated with the Response) will be emitted when the timeout is reached. Idle timeout is currently only supported for server_vad mode.

interrupt_response?: Bool

Whether or not to automatically interrupt (cancel) any ongoing response with output to the default conversation (i.e. conversation of auto) when a VAD start event occurs. If true then the response will be cancelled, otherwise it will continue until complete.

If both create_response and interrupt_response are set to false, the model will never respond automatically but VAD events will still be emitted.

prefix_padding_ms?: Int64

Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: Int64

Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: Float64

Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

eagerness?: String

Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium. low, medium, and high have max timeouts of 8s, 4s, and 2s respectively.

output?: Attributes
format?: Attributes

The format of the output audio.

rate?: Int64

The sample rate of the audio. Always 24000.

type?: String

The audio format. Always audio/pcm.

speed?: Float64

The speed of the model’s spoken response as a multiple of the original speed. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.

This parameter is a post-processing adjustment to the audio after it is generated; it’s also possible to prompt the model to speak faster or slower.

voice?: String

The voice the model uses to respond. Supported built-in voices are alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, and cedar. You may also provide a custom voice object with an id, for example { "id": "voice_1234" }. Voice cannot be changed during the session once the model has responded with audio at least once. We recommend marin and cedar for best quality.

include?: List[String]

Additional fields to include in server outputs.

  • item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.

instructions?: String

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. “be extremely succinct”, “act friendly”, “here are examples of good responses”) and on audio behavior (e.g. “talk quickly”, “inject emotion into your voice”, “laugh frequently”). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior.

Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

max_output_tokens?: Dynamic Int64 | String

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

model?: String

The Realtime model used for this session.

output_modalities?: List[String]

The set of modalities the model can respond with. It defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time.

prompt?: Attributes

Reference to a prompt template and its variables.

id: String

The unique identifier of the prompt template to use.

variables?: Map[String]

Optional map of values to substitute in for variables in your prompt. The substitution values can be either strings or other Response input types like images or files.

version?: String

Optional version of the prompt template.

tool_choice?: String

How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool.

tools?: List[Attributes]

Tools available to the model.

description?: String

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

name?: String

The name of the function.

parameters?: JSON

Parameters of the function in JSON Schema.

type?: String

The type of the tool, i.e. function.

server_label?: String

A label for this MCP server, used to identify it in tool calls.

allowed_tools?: List[String]

List of allowed tool names or a filter object.

authorization?: String

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id?: String

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
defer_loading?: Bool

Whether this MCP tool is deferred and discovered via tool search.

headers?: Map[String]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval?: Attributes

Specify which of the MCP server’s tools require approval.

always?: Attributes

A filter object to specify which tools are allowed.

read_only?: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: List[String]

List of allowed tool names.

never?: Attributes

A filter object to specify which tools are allowed.

read_only?: Bool

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names?: List[String]

List of allowed tool names.

server_description?: String

Optional description of the MCP server, used to provide more context.

server_url?: String

The URL for the MCP server. One of server_url or connector_id must be provided.

tracing?: String

Realtime API can write session traces to the Traces Dashboard. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

truncation?: String

When the number of tokens in a conversation exceeds the model’s input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model’s context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model’s input token limit.

openai_realtime_call

resource "openai_realtime_call" "example_realtime_call" {
  sdp = "sdp"
  session = {
    type = "realtime"
    audio = {
      input = {
        format = {
          rate = 24000
          type = "audio/pcm"
        }
        noise_reduction = {
          type = "near_field"
        }
        transcription = {
          language = "language"
          model = "string"
          prompt = "prompt"
        }
        turn_detection = {
          type = "server_vad"
          create_response = true
          idle_timeout_ms = 5000
          interrupt_response = true
          prefix_padding_ms = 0
          silence_duration_ms = 0
          threshold = 0
        }
      }
      output = {
        format = {
          rate = 24000
          type = "audio/pcm"
        }
        speed = 0.25
        voice = "string"
      }
    }
    include = ["item.input_audio_transcription.logprobs"]
    instructions = "instructions"
    max_output_tokens = 0
    model = "string"
    output_modalities = ["text"]
    prompt = {
      id = "id"
      variables = {
        foo = "string"
      }
      version = "version"
    }
    tool_choice = "none"
    tools = [{
      description = "description"
      name = "name"
      parameters = {

      }
      type = "function"
    }]
    tracing = "auto"
    truncation = "auto"
  }
}

Realtime Sessions

resource openai_realtime_session

required
client_secret: Attributes

Ephemeral key returned by the API.

expires_at: Int64

Timestamp for when the token expires. Currently, all tokens expire after one minute.

value: String

Ephemeral key usable in client environments to authenticate connections to the Realtime API. Use this in client-side environments rather than a standard API token, which should only be used server-side.

optional
input_audio_format?: String

The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw.

instructions?: String

The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. “be extremely succinct”, “act friendly”, “here are examples of good responses”) and on audio behavior (e.g. “talk quickly”, “inject emotion into your voice”, “laugh frequently”). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior. Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

output_audio_format?: String

The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw.

temperature?: Float64

Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8.

tool_choice?: String

How the model chooses tools. Options are auto, none, required, or specify a function.

tracing?: String

Configuration options for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified.

auto will create a trace for the session with default values for the workflow name, group id, and metadata.

truncation?: String

When the number of tokens in a conversation exceeds the model’s input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model’s context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model’s input token limit.

voice?: String

The voice the model uses to respond. Supported built-in voices are alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, and cedar. You may also provide a custom voice object with an id, for example { "id": "voice_1234" }. Voice cannot be changed during the session once the model has responded with audio at least once.

modalities?: List[String]

The set of modalities the model can respond with. To disable audio, set this to ["text"].

input_audio_transcription?: Attributes

Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously and should be treated as rough guidance rather than the representation understood by the model.

model?: String

The model to use for transcription.

prompt?: Attributes

Reference to a prompt template and its variables.

id: String

The unique identifier of the prompt template to use.

variables?: Map[String]

Optional map of values to substitute in for variables in your prompt. The substitution values can be either strings or other Response input types like images or files.

version?: String

Optional version of the prompt template.

tools?: List[Attributes]

Tools (functions) available to the model.

description?: String

The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

name?: String

The name of the function.

parameters?: JSON

Parameters of the function in JSON Schema.

type?: String

The type of the tool, i.e. function.

turn_detection?: Attributes

Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

prefix_padding_ms?: Int64

Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: Int64

Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: Float64

Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

type?: String

Type of turn detection; only server_vad is currently supported.

max_response_output_tokens?: Dynamic Int64 | String

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

speed?: Float64

The speed of the model’s spoken response. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress.

computed
id: String

Unique identifier for the session that looks like sess_1234567890abcdef.

expires_at: Int64

Expiration timestamp for the session, in seconds since epoch.

model: String

The Realtime model used for this session.

object: String

The object type. Always realtime.session.

include: List[String]

Additional fields to include in server outputs.

  • item.input_audio_transcription.logprobs: Include logprobs for input audio transcription.
output_modalities: List[String]

The set of modalities the model can respond with. To disable audio, set this to ["text"].

audio: Attributes

Configuration for input and output audio for the session.

input: Attributes
format: Attributes

The PCM audio format. Only a 24kHz sample rate is supported.

rate: Int64

The sample rate of the audio. Always 24000.

type: String

The audio format. Always audio/pcm.

noise_reduction: Attributes

Configuration for input audio noise reduction.

type: String

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

transcription: Attributes

Configuration for input audio transcription.

language: String

The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

model: String

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

prompt: String

An optional text to guide the model’s style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example “expect words related to technology”.

turn_detection: Attributes

Configuration for turn detection.

prefix_padding_ms: Int64
silence_duration_ms: Int64
threshold: Float64
type: String

Type of turn detection; only server_vad is currently supported.

output: Attributes
format: Attributes

The PCM audio format. Only a 24kHz sample rate is supported.

rate: Int64

The sample rate of the audio. Always 24000.

type: String

The audio format. Always audio/pcm.

speed: Float64
voice: String
max_output_tokens: Dynamic Int64 | String

Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.
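
No example block is rendered for this resource; a minimal sketch built only from the optional attributes documented above (values illustrative):

resource "openai_realtime_session" "example_realtime_session" {
  input_audio_format  = "pcm16"
  output_audio_format = "pcm16"
  instructions        = "Be extremely succinct."
  temperature         = 0.8
  tool_choice         = "auto"
  voice               = "marin"
  turn_detection = {
    type                = "server_vad"
    prefix_padding_ms   = 300
    silence_duration_ms = 500
    threshold           = 0.5
  }
  max_response_output_tokens = "inf"
  speed                      = 1.0
}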

Realtime Transcription Sessions

resource openai_realtime_transcription_session

optional
include?: List[String]

The set of items to include in the transcription. Currently the only available item is item.input_audio_transcription.logprobs.

input_audio_noise_reduction?: Attributes

Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

type?: String

Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

input_audio_transcription?: Attributes

Configuration for input audio transcription. The client can optionally set the language and prompt for transcription, these offer additional guidance to the transcription service.

language?: String

The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

model?: String

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

prompt?: String

An optional text to guide the model’s style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example “expect words related to technology”.

turn_detection?: Attributes

Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

prefix_padding_ms?: Int64

Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

silence_duration_ms?: Int64

Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

threshold?: Float64

Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

type?: String

Type of turn detection. Only server_vad is currently supported for transcription sessions.

input_audio_format?: String

The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate, single channel (mono), and little-endian byte order.

computed
modalities: List[String]

The set of modalities the model can respond with. To disable audio, set this to ["text"].

client_secret: Attributes

Ephemeral key returned by the API. Only present when the session is created on the server via REST API.

expires_at: Int64

Timestamp for when the token expires. Currently, all tokens expire after one minute.

value: String

Ephemeral key usable in client environments to authenticate connections to the Realtime API. Use this in client-side environments rather than a standard API token, which should only be used server-side.
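
For completeness, a minimal sketch of the transcription session resource using the optional attributes above (values illustrative):

resource "openai_realtime_transcription_session" "example_realtime_transcription_session" {
  include            = ["item.input_audio_transcription.logprobs"]
  input_audio_format = "pcm16"
  input_audio_noise_reduction = {
    type = "near_field"
  }
  input_audio_transcription = {
    language = "en"
    model    = "whisper-1"
    prompt   = "expect words related to technology"
  }
  turn_detection = {
    type                = "server_vad"
    prefix_padding_ms   = 300
    silence_duration_ms = 500
    threshold           = 0.5
  }
}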