Sandbox agents are now available in the Python Agents SDK. Use them when your agent needs a container-based environment with files, commands, packages, ports, snapshots, and memory. Read the Sandbox agents guide.
Agents are applications that plan, call tools, collaborate across specialists, and keep enough state to complete multi-step work.
- Use the OpenAI client libraries when you only need direct API access for model requests, without agent orchestration.
- Use the Agents SDK pages when your application owns orchestration, tool execution, approvals, and state.
- Use Agent Builder only when you specifically want the hosted workflow editor and ChatKit path.
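The definition above — plan, call tools, keep state across steps — can be sketched as a bare loop in plain Python. This is an illustrative sketch of the pattern, not the SDK's API; every name here (`plan_next_step`, `agent_loop`, the `lookup` tool) is hypothetical.

```python
def plan_next_step(state):
    # Toy planner: look the task up once, then finish with the result.
    if not state["history"]:
        return "lookup", state["task"]
    return "finish", state["history"][-1][2]

def agent_loop(task, tools, max_steps=5):
    # The agent keeps enough state (history) to complete multi-step work.
    state = {"task": task, "history": []}
    for _ in range(max_steps):
        action, arg = plan_next_step(state)   # plan
        if action == "finish":
            return arg
        result = tools[action](arg)           # call a tool
        state["history"].append((action, arg, result))  # keep state
    return None

# Usage: supply the tools the planner may call.
answer = agent_loop("capital of France?", {"lookup": lambda q: "Paris"})
```

The SDKs replace the toy planner with a model call and the dict of callables with typed tool definitions, but the loop shape is the same.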
Get the Agents SDK
Use the GitHub repositories for installation, issues, examples, and language-specific reference details.
Open the TypeScript SDK repository on GitHub.
Open the Python SDK repository on GitHub.
Choose your starting point
| If you want to | Start here | Why |
|---|---|---|
| Build a code-first agent app | Quickstart | This is the shortest path to a working SDK integration. |
| Define one specialist cleanly | Agent definitions | Start here when you are still shaping the contract for a single agent. |
| Choose models, defaults, and transport | Models and providers | Use this when model choice, provider setup, or transport strategy affects the workflow. |
| Understand the runtime loop and state | Running agents | This is where the agent loop, streaming, and continuation strategies live. |
| Run work in a container-based environment | Sandbox agents | Use this when the agent needs files, commands, packages, snapshots, mounts, or provider links. |
| Design specialist ownership | Orchestration and handoffs | Use this when you need more than one agent and must decide who owns the reply. |
| Add validation or human review | Guardrails and human review | Use this when the workflow should block or pause before risky work continues. |
| Understand what a run returns | Results and state | This page explains final output, resumable state, and next-turn surfaces. |
| Add hosted tools, function tools, or MCP | Using tools; Integrations and observability | Tool semantics live in the platform tools docs; SDK-specific MCP and tracing live here. |
| Inspect and improve runs | Integrations and observability; Evaluate agent workflows | Use traces for debugging first, then move into evaluation loops. |
| Build a voice-first workflow | Voice agents | Voice is still an SDK-first path because Agent Builder doesn’t support it. |
Build with the SDK
Use the SDK track when your server owns orchestration, tool execution, state, and approvals. That path is the best fit when you want:
- typed application code in TypeScript or Python
- direct control over tools, MCP servers, and runtime behavior
- custom storage or server-managed conversation strategies
- tight integration with existing product logic or infrastructure
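What "your server owns tool execution and approvals" means in practice can be sketched without any SDK at all: the application holds the tool registry and gates risky calls behind a review hook. This is a minimal illustration under assumed names (`RISKY_TOOLS`, `run_tool`, `approve`); the real SDK APIs differ.

```python
# Tools the application considers risky enough to require human review.
RISKY_TOOLS = {"delete_records"}

def run_tool(name, args, tools, approve):
    """Execute a tool, pausing for approval before risky work continues."""
    if name in RISKY_TOOLS and not approve(name, args):
        # The run pauses; the caller can resume once a human approves.
        return {"status": "paused", "pending": (name, args)}
    return {"status": "done", "output": tools[name](**args)}

# Usage: the server, not the model, decides what actually runs.
tools = {"search": lambda q: "hit", "delete_records": lambda: "deleted"}
result = run_tool("search", {"q": "x"}, tools, approve=lambda n, a: False)
```

Because the server sits between the model's tool request and the call itself, it can log, validate, or block at exactly this point — which is what the Guardrails and human review page builds on.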
A typical SDK reading order is:
- Start with Quickstart to get one working run on screen.
- Use Agent definitions and Models and providers to shape one specialist cleanly.
- Continue to Running agents, Orchestration and handoffs, and Guardrails and human review as the workflow grows more complex.
- Use Results and state and Integrations and observability when application logic depends on the run object or deeper visibility into behavior.
Use Agent Builder for the hosted workflow path
Use Agent Builder when you want OpenAI-hosted workflow creation, publishing, and ChatKit deployment. Those pages stay grouped together because they describe one product surface: building a workflow in the visual editor, publishing versions, embedding them, customizing the UI, and evaluating the results.
Voice agents are an exception: they live in the SDK track because Agent Builder doesn’t currently support voice workflows. Use Voice agents when you need speech-to-speech or chained voice pipelines.