Voice agents turn the same agent concepts into spoken, low-latency interactions. The key design choice is deciding whether the model should work directly with live audio or whether your application should explicitly chain speech-to-text, text reasoning, and text-to-speech.
Choose the right architecture
| Architecture | Best for | Why |
|---|---|---|
| Speech-to-speech with live audio sessions | Natural, low-latency conversations | The model handles live audio input and output directly |
| Chained voice pipeline | Predictable workflows or extending an existing text agent | Your app keeps explicit control over transcription, text reasoning, and speech output |
Agent Builder doesn’t currently support voice workflows, so voice stays an SDK-first surface.
Recommended starting points
The two supported languages expose different strengths today:
- In TypeScript, the fastest path to a browser-based voice assistant is a `RealtimeAgent` paired with a `RealtimeSession`.
- In Python, the simplest path to extending an existing text agent into voice is a chained `VoicePipeline`.
```ts
import { RealtimeAgent, RealtimeSession } from "@openai/agents/realtime";

const agent = new RealtimeAgent({
  name: "Assistant",
  instructions: "You are a helpful voice assistant.",
});

const session = new RealtimeSession(agent, {
  model: "gpt-realtime-1.5",
});

await session.connect({
  apiKey: "ek_...(ephemeral key from your server)",
});
```

Build a speech-to-speech voice agent
Use the live audio API path when the interaction should feel conversational and immediate. The usual browser flow is:
- Your application server creates an ephemeral client secret for the live audio session (a server-side sketch follows this list).
- Your frontend creates a `RealtimeSession`.
- The session connects over WebRTC in the browser or WebSocket on the server.
- The agent handles audio turns, tools, interruptions, and handoffs inside that session.
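Here is a minimal sketch of the server side, assuming a Node handler and the `POST /v1/realtime/sessions` REST endpoint that returns a `client_secret`; the exact endpoint and response shape may differ in your API version, so verify against the current Realtime API reference.

```ts
// Minimal Node handler sketch: mint an ephemeral client secret for the browser.
// Assumes the POST /v1/realtime/sessions endpoint and its response shape;
// check the transport docs before relying on this.
export async function mintRealtimeSecret(): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/realtime/sessions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-realtime-1.5" }),
  });
  if (!response.ok) {
    throw new Error(`Failed to mint client secret: ${response.status}`);
  }
  const session = await response.json();
  // The ephemeral "ek_..." value is what the frontend passes to
  // session.connect(); it expires quickly, so mint one per session.
  return session.client_secret.value;
}
```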
Start with the transport docs when you need lower-level control.
Build a chained voice workflow
Use the chained path when you want stronger control over intermediate text, reuse of an existing text agent, or a simpler way to extend a non-voice workflow. In that design, your application explicitly manages:
- speech-to-text
- the agent workflow itself
- text-to-speech
This is often the better fit for support flows, approval-heavy flows, or cases where you want durable transcripts and deterministic logic between each stage.
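A minimal sketch of that chain in Node, assuming the `openai` npm package, a hypothetical `input.wav` recording, and placeholder model choices (`whisper-1` for transcription, `tts-1` for speech; substitute whatever your account supports):

```ts
import fs from "node:fs";
import OpenAI from "openai";
import { Agent, run } from "@openai/agents";

const client = new OpenAI();

// The text agent in the middle of the chain; reuse an existing one as-is.
const agent = new Agent({
  name: "Assistant",
  instructions: "You are a helpful assistant. Keep spoken answers short.",
});

// 1. Speech-to-text: transcribe the caller's audio.
const transcript = await client.audio.transcriptions.create({
  model: "whisper-1", // assumption: pick the STT model you actually use
  file: fs.createReadStream("input.wav"), // hypothetical input file
});

// 2. Text reasoning: run the ordinary text agent on the transcript.
const result = await run(agent, transcript.text);
const reply = String(result.finalOutput);

// 3. Text-to-speech: synthesize the agent's reply.
const speech = await client.audio.speech.create({
  model: "tts-1", // assumption: substitute your preferred TTS model
  voice: "alloy",
  input: reply,
});
fs.writeFileSync("reply.mp3", Buffer.from(await speech.arrayBuffer()));
```

Because each stage is explicit, you can persist the transcript and the agent's text reply between steps, which is what makes this path suit approval-heavy flows.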
Voice agents still use the same core agent building blocks
The voice surface changes the transport and audio loop, but the core workflow decisions are the same:
- Use the Using tools patterns when the voice agent needs external capabilities (see the sketch after this list).
- Use Running agents when spoken workflows need streaming, continuation, or durable state.
- Use Orchestration and handoffs when spoken workflows branch across specialists.
- Use Guardrails and human review when spoken workflows need safety checks or approvals.
- Use Integrations and observability when you need MCP-backed capabilities or want to inspect how the voice workflow behaved.
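As one example of the first bullet, here is a minimal sketch of attaching a tool to a realtime agent, assuming the `tool` helper exported alongside `RealtimeAgent` and a hypothetical `get_weather` stub:

```ts
import { z } from "zod";
import { RealtimeAgent, tool } from "@openai/agents/realtime";

// Hypothetical tool stub; a real implementation would call a weather API.
const getWeather = tool({
  name: "get_weather",
  description: "Look up the current weather for a city.",
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => `It is sunny in ${city} today.`,
});

// The same agent shape as before, now with an external capability attached.
const agent = new RealtimeAgent({
  name: "Assistant",
  instructions: "You are a helpful voice assistant.",
  tools: [getWeather],
});
```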
The practical rule is: choose the audio architecture first, then design the rest of the agent workflow the same way you would for text.