Voice agents

Build voice agents with either speech-to-speech sessions or chained voice pipelines.

Voice agents turn the same agent concepts into spoken, low-latency interactions. The key design choice is deciding whether the model should work directly with live audio or whether your application should explicitly chain speech-to-text, text reasoning, and text-to-speech.

Choose the right architecture

| Architecture | Best for | Why |
| --- | --- | --- |
| Speech-to-speech with live audio sessions | Natural, low-latency conversations | The model handles live audio input and output directly |
| Chained voice pipeline | Predictable workflows or extending an existing text agent | Your app keeps explicit control over transcription, text reasoning, and speech output |

Agent Builder doesn’t currently support voice workflows, so voice stays an SDK-first surface.

The examples below show two intentionally different architectures; they are not matching language tabs. The TypeScript and Python libraries expose different voice helpers today:

  • In TypeScript, the fastest path to a browser-based voice assistant is a RealtimeAgent and RealtimeSession.
  • In Python, the simplest path to extending an existing text agent into voice is a chained VoicePipeline.

Build a speech-to-speech voice agent

Use the live audio API path when the interaction should feel conversational and immediate. This is the best starting point for voice agents that need barge-in, low first-audio latency, natural turn-taking, and realtime tool use.

The usual browser flow is:

  1. Your application server creates an ephemeral client secret for the live audio session (sketched after this list).
  2. Your frontend creates a RealtimeSession.
  3. The session connects over WebRTC in the browser or WebSocket on the server.
  4. The agent handles audio turns, tools, interruptions, and handoffs inside that session.
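
Step 1 is the only server-side piece. Here is a minimal Node sketch of minting the client secret, assuming the Realtime client_secrets endpoint; the exact payload and response field names may differ in your API version.

// mint-client-secret.ts: a sketch of step 1, run on your server.
// Assumes POST /v1/realtime/client_secrets; payload and response
// field names may differ in your API version.
export async function mintClientSecret(): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/realtime/client_secrets", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      session: { type: "realtime", model: "gpt-realtime-2" },
    }),
  });
  if (!response.ok) {
    throw new Error(`Client secret request failed: ${response.status}`);
  }
  const data = await response.json();
  // Return only the short-lived "ek_..." value to the browser,
  // never your long-lived API key.
  return data.value;
}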
Start a realtime voice session
import { RealtimeAgent, RealtimeSession } from "@openai/agents/realtime";

// Business logic lives on the agent definition.
const agent = new RealtimeAgent({
  name: "Assistant",
  instructions: "You are a helpful voice assistant.",
});

// Transport and audio concerns live on the session.
const session = new RealtimeSession(agent, {
  model: "gpt-realtime-2",
});

// Connect with the short-lived client secret minted by your server,
// never a long-lived API key.
await session.connect({
  apiKey: "ek_...(ephemeral key from your server)",
});

From there, attach tools, handoffs, and guardrails to the RealtimeAgent the same way you would attach them to a text agent. Keep audio transport concerns in the session layer, and keep business logic in the agent definition.
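
For example, a function tool belongs on the agent, not the session. A minimal sketch, assuming the tool helper re-exported by the realtime package and zod for the parameter schema:

import { z } from "zod";
import { RealtimeAgent, tool } from "@openai/agents/realtime";

// A function tool the model can call mid-conversation.
const getWeather = tool({
  name: "get_weather",
  description: "Get the weather for a given city.",
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => `The weather in ${city} is sunny.`,
});

const agent = new RealtimeAgent({
  name: "Assistant",
  instructions: "You are a helpful voice assistant.",
  tools: [getWeather],
});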

Start with the transport docs, linked under Next steps below, when you need lower-level control.

Build a chained voice workflow

Use the chained path when you want stronger control over intermediate text, existing text-agent reuse, or a simpler extension path from a non-voice workflow. In that design, your application explicitly manages:

  1. speech-to-text
  2. the agent workflow itself
  3. text-to-speech

This is often the better fit for support flows, approval-heavy flows, or cases where you want durable transcripts and deterministic logic between each stage.

Run a chained voice pipeline
import asyncio
import numpy as np

from agents import Agent, function_tool
from agents.voice import AudioInput, SingleAgentVoiceWorkflow, VoicePipeline


@function_tool
def get_weather(city: str) -> str:
    """Get the weather for a given city."""
    return f"The weather in {city} is sunny."


agent = Agent(
    name="Assistant",
    instructions="You are a helpful voice assistant.",
    model="gpt-5.5",
    tools=[get_weather],
)


async def main() -> None:
    # Wrap the existing text agent in a speech-to-text -> agent -> text-to-speech chain.
    pipeline = VoicePipeline(workflow=SingleAgentVoiceWorkflow(agent))
    # Three seconds of silence at 24 kHz, standing in for real microphone audio.
    audio_input = AudioInput(buffer=np.zeros(24000 * 3, dtype=np.int16))
    result = await pipeline.run(audio_input)
    # Stream synthesized audio chunks as the pipeline produces them.
    async for event in result.stream():
        if event.type == "voice_stream_event_audio":
            print("Received audio bytes", len(event.data))


if __name__ == "__main__":
    asyncio.run(main())

Use this path when each stage needs to be visible or replaceable. For example, you might store the transcript, run policy checks before the text agent responds, call internal systems, then generate speech only after the workflow reaches an approved answer.
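
A hand-rolled version of that flow, sketched in TypeScript: saveTranscript and checkPolicy are hypothetical stand-ins for your storage and policy systems, and the transcription and speech model names are illustrative.

import OpenAI from "openai";
import { Agent, run } from "@openai/agents";

const openai = new OpenAI();

const agent = new Agent({
  name: "Support",
  instructions: "Answer support questions concisely.",
});

// Hypothetical application helpers: persist the transcript and apply
// your policy rules before the agent is allowed to respond.
async function saveTranscript(text: string): Promise<void> {
  // Write to your durable store.
}
async function checkPolicy(text: string): Promise<boolean> {
  return true; // Your policy rules.
}

export async function handleTurn(audio: File): Promise<Buffer> {
  // 1. Speech-to-text: a durable transcript exists before any reasoning runs.
  const transcript = await openai.audio.transcriptions.create({
    file: audio,
    model: "gpt-4o-transcribe",
  });
  await saveTranscript(transcript.text);

  // 2. Policy check, then the same text agent workflow you already run.
  if (!(await checkPolicy(transcript.text))) {
    throw new Error("Blocked by policy before the agent responded");
  }
  const result = await run(agent, transcript.text);

  // 3. Text-to-speech only after the workflow reached an approved answer.
  const speech = await openai.audio.speech.create({
    model: "gpt-4o-mini-tts",
    voice: "alloy",
    input: result.finalOutput ?? "",
  });
  return Buffer.from(await speech.arrayBuffer());
}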

Voice agents still use the same core agent building blocks

The voice surface changes the transport and audio loop, but the core workflow decisions are the same: tools, handoffs, guardrails, and instructions all attach to the agent definition just as they do for text.

The practical rule is: choose the audio architecture first, then design the rest of the agent workflow the same way you would for text.

Next steps

Realtime and audio overview

Choose the right realtime or audio guide for your use case.

Managing conversations

Work with the Realtime session lifecycle and event model.

WebRTC connection

Connect browser and mobile audio directly to a Realtime session.

Realtime prompting guide

Tune reasoning, preambles, tools, entity capture, and voice behavior.