Agents SDK

Build agents in code with the OpenAI Agents SDK and grow into more advanced runtime patterns as needed.

Sandbox agents are now available in the Python Agents SDK. Use them when your agent needs a container-based environment with files, commands, packages, ports, snapshots, and memory. Read the Sandbox agents guide.

Agents are applications that plan, call tools, collaborate across specialists, and keep enough state to complete multi-step work.

  • Use the OpenAI client libraries when you want direct API clients for model requests.
  • Use the Agents SDK pages when your application owns orchestration, tool execution, approvals, and state.
  • Use Agent Builder only when you specifically want the hosted workflow editor and ChatKit path.

Get the Agents SDK

Use the GitHub repositories for installation, issues, examples, and language-specific reference details.

Choose your starting point

| If you want to | Start here | Why |
| --- | --- | --- |
| Build a code-first agent app | Quickstart | This is the shortest path to a working SDK integration. |
| Define one specialist cleanly | Agent definitions | Start here when you are still shaping the contract for a single agent. |
| Choose models, defaults, and transport | Models and providers | Use this when model choice, provider setup, or transport strategy affects the workflow. |
| Understand the runtime loop and state | Running agents | This is where the agent loop, streaming, and continuation strategies live. |
| Run work in a container-based environment | Sandbox agents | Use this when the agent needs files, commands, packages, snapshots, mounts, or provider links. |
| Design specialist ownership | Orchestration and handoffs | Use this when you need more than one agent and must decide who owns the reply. |
| Add validation or human review | Guardrails and human review | Use this when the workflow should block or pause before risky work continues. |
| Understand what a run returns | Results and state | This page explains final output, resumable state, and next-turn surfaces. |
| Add hosted tools, function tools, or MCP | Using tools; Integrations and observability | Tool semantics live in the platform tools docs; SDK-specific MCP and tracing live here. |
| Inspect and improve runs | Integrations and observability; Evaluate agent workflows | Use traces for debugging first, then move into evaluation loops. |
| Build a voice-first workflow | Voice agents | Voice is still an SDK-first path because Agent Builder doesn’t support it. |

Build with the SDK

Use the SDK track when your server owns orchestration, tool execution, state, and approvals. That path is the best fit when you want:

  • typed application code in TypeScript or Python
  • direct control over tools, MCP servers, and runtime behavior
  • custom storage or server-managed conversation strategies
  • tight integration with existing product logic or infrastructure

For a typical SDK reading order, follow the table above from top to bottom, starting with the Quickstart.

Use Agent Builder for the hosted workflow path

Use Agent Builder when you want OpenAI-hosted workflow creation, publishing, and ChatKit deployment. Those pages stay grouped together because they describe one product surface: building a workflow in the visual editor, publishing versions, embedding them, customizing the UI, and evaluating the results.

Voice agents are an exception: they live in the SDK track because Agent Builder doesn’t currently support voice workflows. Use Voice agents when you need speech-to-speech or chained voice pipelines.