Codex app-server is the interface Codex uses to power rich clients (for example, the Codex VS Code extension). Use it when you want a deep integration inside your own product: authentication, conversation history, approvals, and streamed agent events. The app-server implementation is open source in the Codex GitHub repository (openai/codex/codex-rs/app-server). See the Open Source page for the full list of open-source Codex components.
If you are automating jobs or running Codex in CI, use the Codex SDK instead.
Protocol
Like MCP, codex app-server supports bidirectional communication and streams JSONL over stdio. The protocol is JSON-RPC 2.0, except that messages omit the "jsonrpc": "2.0" field.
Message schema
Requests include method, params, and id:
{ "method": "thread/start", "id": 10, "params": { "model": "gpt-5.1-codex" } }
Responses echo the id with either result or error:
{ "id": 10, "result": { "thread": { "id": "thr_123" } } }
{ "id": 10, "error": { "code": 123, "message": "Something went wrong" } }
Notifications omit id and use only method and params:
{ "method": "turn/started", "params": { "turn": { "id": "turn_456" } } }
You can generate a TypeScript schema or a JSON Schema bundle from the CLI. Each output is specific to the Codex version you ran, so the generated artifacts match that version exactly:
codex app-server generate-ts --out ./schemas
codex app-server generate-json-schema --out ./schemas
Getting started
- Start the server with codex app-server. It waits for JSONL over standard input and prints only protocol messages.
- Connect a client over stdio, then send initialize followed by the initialized notification.
- Start a thread and a turn, then keep reading notifications from stdout.
Example (Node.js / TypeScript):
import { spawn } from "node:child_process";
import readline from "node:readline";
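// Launch the app-server; it reads JSONL requests on stdin and writes responses/notifications to stdout.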
const proc = spawn("codex", ["app-server"], {
stdio: ["pipe", "pipe", "inherit"],
});
const rl = readline.createInterface({ input: proc.stdout });
const send = (message: unknown) => {
proc.stdin.write(`${JSON.stringify(message)}\n`);
};
let threadId: string | null = null;
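// Log every server message; once thread/start (id 1) resolves, kick off a turn on the new thread.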
rl.on("line", (line) => {
const msg = JSON.parse(line) as any;
console.log("server:", msg);
if (msg.id === 1 && msg.result?.thread?.id && !threadId) {
threadId = msg.result.thread.id;
send({
method: "turn/start",
id: 2,
params: {
threadId,
input: [{ type: "text", text: "Summarize this repo." }],
},
});
}
});
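// Handshake: initialize, acknowledge with initialized, then create a thread.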
send({
method: "initialize",
id: 0,
params: {
clientInfo: {
name: "my_product",
title: "My Product",
version: "0.1.0",
},
},
});
send({ method: "initialized", params: {} });
send({ method: "thread/start", id: 1, params: { model: "gpt-5.1-codex" } });
Core primitives
- Thread: A conversation between a user and the Codex agent. Threads contain turns.
- Turn: A single user request and the agent work that follows. Turns contain items and stream incremental updates.
- Item: A unit of input or output (user message, agent message, command runs, file change, tool call, and more).
Use the thread APIs to create, list, or archive conversations. Drive a conversation with turn APIs and stream progress via turn notifications.
Lifecycle overview
- Initialize once: Immediately after launching codex app-server, send an initialize request with your client metadata, then emit initialized. The server rejects any request before this handshake.
- Start (or resume) a thread: Call thread/start for a new conversation, thread/resume to continue an existing one, or thread/fork to branch history into a new thread id.
- Begin a turn: Call turn/start with the target threadId and user input. Optional fields override model, cwd, sandbox policy, and more.
- Stream events: After turn/start, keep reading notifications on stdout: item/started, item/completed, item/agentMessage/delta, tool progress, and other updates.
- Finish the turn: The server emits turn/completed with final status when the model finishes or after a turn/interrupt cancellation.
Initialization
Clients must send a single initialize request before invoking any other method, then acknowledge with an initialized notification. Requests sent before initialization receive a Not initialized error, and repeated initialize calls return Already initialized.
The server returns the user agent string it will present to upstream services. Set clientInfo to identify your integration.
Important: Use clientInfo.name to identify your client for the OpenAI Compliance Logs Platform. If you are developing a new Codex integration intended for enterprise use, please contact OpenAI to get it added to a known clients list. For more context, see the Codex logs reference.
Example (from the Codex VS Code extension):
{
"method": "initialize",
"id": 0,
"params": {
"clientInfo": {
"name": "codex_vscode",
"title": "Codex VS Code Extension",
"version": "0.1.0"
}
}
}
API overview
- thread/start - create a new thread; emits thread/started and automatically subscribes you to turn/item events for that thread.
- thread/resume - reopen an existing thread by id so later turn/start calls append to it.
- thread/fork - fork a thread into a new thread id by copying stored history; emits thread/started for the new thread.
- thread/list - page through stored thread logs; supports cursor-based pagination and optional modelProviders filtering.
- thread/loaded/list - list the thread ids currently loaded in memory.
- thread/archive - move a thread’s log file into the archived directory; returns {} on success.
- thread/rollback - drop the last N turns from the in-memory context and persist a rollback marker; returns the updated thread.
- turn/start - add user input to a thread and begin Codex generation; responds with the initial turn and streams events.
- turn/interrupt - request cancellation of an in-flight turn; success is {} and the turn ends with status: "interrupted".
- review/start - kick off the Codex reviewer for a thread; emits enteredReviewMode and exitedReviewMode items.
- command/exec - run a single command under the server sandbox without starting a thread/turn.
- model/list - list available models (with effort options).
- collaborationMode/list - list collaboration mode presets (experimental, no pagination).
- skills/list - list skills for one or more cwd values (optional forceReload).
- skills/config/write - enable or disable skills by path.
- mcpServer/oauth/login - start an OAuth login for a configured MCP server; returns an authorization URL and emits mcpServer/oauthLogin/completed on completion.
- tool/requestUserInput - prompt the user with 1-3 short questions for a tool call (experimental).
- config/mcpServer/reload - reload MCP server configuration from disk and queue a refresh for loaded threads.
- mcpServerStatus/list - list MCP servers, tools, resources, and auth status (cursor + limit pagination).
- feedback/upload - submit a feedback report (classification + optional reason/logs + conversation id).
- config/read - fetch the effective configuration on disk after resolving configuration layering.
- config/value/write - write a single configuration key/value to the user’s config.toml on disk.
- config/batchWrite - apply configuration edits atomically to the user’s config.toml on disk.
- configRequirements/read - fetch requirements allow-lists from requirements.toml and/or MDM (or null if you haven’t set any up).
Threads
- thread/list supports cursor pagination and optional modelProviders filtering.
- thread/loaded/list returns the thread IDs currently in memory.
- thread/archive moves the thread’s persisted JSONL log into the archived directory.
- thread/rollback drops the last N turns from the in-memory context and records a rollback marker in the thread’s persisted JSONL log.
Start or resume a thread
Start a fresh thread when you need a new Codex conversation.
{ "method": "thread/start", "id": 10, "params": {
"model": "gpt-5.1-codex",
"cwd": "/Users/me/project",
"approvalPolicy": "never",
"sandbox": "workspaceWrite"
} }
{ "id": 10, "result": {
"thread": {
"id": "thr_123",
"preview": "",
"modelProvider": "openai",
"createdAt": 1730910000
}
} }
{ "method": "thread/started", "params": { "thread": { "id": "thr_123" } } }
To continue a stored session, call thread/resume with the thread.id you recorded earlier. The response shape matches thread/start:
{ "method": "thread/resume", "id": 11, "params": { "threadId": "thr_123" } }
{ "id": 11, "result": { "thread": { "id": "thr_123" } } }
To branch from a stored session, call thread/fork with the thread.id. This creates a new thread id and emits a thread/started notification for it:
{ "method": "thread/fork", "id": 12, "params": { "threadId": "thr_123" } }
{ "id": 12, "result": { "thread": { "id": "thr_456" } } }
{ "method": "thread/started", "params": { "thread": { "id": "thr_456" } } }
List threads (with pagination & filters)
thread/list lets you render a history UI. Results default to newest-first by createdAt. Pass any combination of:
- cursor - opaque string from a prior response; omit for the first page.
- limit - server defaults to a reasonable page size if unset.
- sortKey - created_at (default) or updated_at.
- modelProviders - restrict results to specific providers; unset, null, or an empty array includes all providers.
Example:
{ "method": "thread/list", "id": 20, "params": {
"cursor": null,
"limit": 25,
"sortKey": "created_at"
} }
{ "id": 20, "result": {
"data": [
{ "id": "thr_a", "preview": "Create a TUI", "modelProvider": "openai", "createdAt": 1730831111, "updatedAt": 1730831111 },
{ "id": "thr_b", "preview": "Fix tests", "modelProvider": "openai", "createdAt": 1730750000, "updatedAt": 1730750000 }
],
"nextCursor": "opaque-token-or-null"
} }
When nextCursor is null, you have reached the final page.
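As a sketch, a client can walk the full history by passing each nextCursor back into thread/list until it comes back null. The rpc helper below is a stand-in for your own request/response plumbing (for example, a wrapper around the stdio transport shown in Getting started):
// rpc is assumed to send one JSON-RPC request and resolve with its result.
type Rpc = (method: string, params?: unknown) => Promise<any>;
type ThreadSummary = {
  id: string;
  preview: string;
  modelProvider: string;
  createdAt: number;
  updatedAt?: number;
};
async function listAllThreads(rpc: Rpc): Promise<ThreadSummary[]> {
  const threads: ThreadSummary[] = [];
  let cursor: string | null = null;
  do {
    const page = await rpc("thread/list", { cursor, limit: 25, sortKey: "created_at" });
    threads.push(...page.data);
    cursor = page.nextCursor; // null on the final page
  } while (cursor !== null);
  return threads;
}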
List loaded threads
thread/loaded/list returns thread IDs currently loaded in memory.
{ "method": "thread/loaded/list", "id": 21 }
{ "id": 21, "result": { "data": ["thr_123", "thr_456"] } }
Archive a thread
Use thread/archive to move the persisted thread log (stored as a JSONL file on disk) into the archived sessions directory.
{ "method": "thread/archive", "id": 22, "params": { "threadId": "thr_b" } }
{ "id": 22, "result": {} }
Archived threads won’t appear in future calls to thread/list.
Turns
The input field accepts a list of items:
{ "type": "text", "text": "Explain this diff" }{ "type": "image", "url": "https://.../design.png" }{ "type": "localImage", "path": "/tmp/screenshot.png" }
You can override configuration settings per turn (model, effort, cwd, sandbox policy, summary). When specified, these settings become the defaults for later turns on the same thread. outputSchema applies only to the current turn. For sandboxPolicy.type = "externalSandbox", set networkAccess to restricted or enabled; otherwise use a boolean.
Start a turn
{ "method": "turn/start", "id": 30, "params": {
"threadId": "thr_123",
"input": [ { "type": "text", "text": "Run tests" } ],
"cwd": "/Users/me/project",
"approvalPolicy": "unlessTrusted",
"sandboxPolicy": {
"type": "workspaceWrite",
"writableRoots": ["/Users/me/project"],
"networkAccess": true
},
"model": "gpt-5.1-codex",
"effort": "medium",
"summary": "concise",
"outputSchema": {
"type": "object",
"properties": { "answer": { "type": "string" } },
"required": ["answer"],
"additionalProperties": false
}
} }
{ "id": 30, "result": { "turn": { "id": "turn_456", "status": "inProgress", "items": [], "error": null } } }
Start a turn (invoke a skill)
Invoke a skill explicitly by including $<skill-name> in the text input and adding a skill input item alongside it.
{ "method": "turn/start", "id": 33, "params": {
"threadId": "thr_123",
"input": [
{ "type": "text", "text": "$skill-creator Add a new skill for triaging flaky CI and include step-by-step usage." },
{ "type": "skill", "name": "skill-creator", "path": "/Users/me/.codex/skills/skill-creator/SKILL.md" }
]
} }
{ "id": 33, "result": { "turn": { "id": "turn_457", "status": "inProgress", "items": [], "error": null } } }
Interrupt a turn
{ "method": "turn/interrupt", "id": 31, "params": { "threadId": "thr_123", "turnId": "turn_456" } }
{ "id": 31, "result": {} }
On success, the turn finishes with status: "interrupted".
Review
review/start runs the Codex reviewer for a thread and streams review items. Targets include:
- uncommittedChanges
- baseBranch (diff against a branch)
- commit (review a specific commit)
- custom (free-form instructions)
Use delivery: "inline" (default) to run the review on the existing thread, or delivery: "detached" to fork a new review thread.
Example request/response:
{ "method": "review/start", "id": 40, "params": {
"threadId": "thr_123",
"delivery": "inline",
"target": { "type": "commit", "sha": "1234567deadbeef", "title": "Polish tui colors" }
} }
{ "id": 40, "result": {
"turn": {
"id": "turn_900",
"status": "inProgress",
"items": [
{ "type": "userMessage", "id": "turn_900", "content": [ { "type": "text", "text": "Review commit 1234567: Polish tui colors" } ] }
],
"error": null
},
"reviewThreadId": "thr_123"
} }
For a detached review, use "delivery": "detached". The response is the same shape, but reviewThreadId will be the id of the new review thread (different from the original threadId). The server also emits a thread/started notification for that new thread before streaming the review turn.
Codex streams the usual turn/started notification followed by an item/started with an enteredReviewMode item:
{
"method": "item/started",
"params": {
"item": {
"type": "enteredReviewMode",
"id": "turn_900",
"review": "current changes"
}
}
}
When the reviewer finishes, the server emits item/started and item/completed containing an exitedReviewMode item with the final review text:
{
"method": "item/completed",
"params": {
"item": {
"type": "exitedReviewMode",
"id": "turn_900",
"review": "Looks solid overall..."
}
}
}
Use this notification to render the reviewer output in your client.
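For example, a client might watch item/completed notifications for the exitedReviewMode item and surface its review text. This sketch assumes a handleNotification hook that you call for every JSONL message read from stdout; onReviewDone is a placeholder for your own rendering code:
type Notification = { method: string; params?: any };
function handleNotification(msg: Notification, onReviewDone: (review: string) => void) {
  if (msg.method === "item/completed" && msg.params?.item?.type === "exitedReviewMode") {
    // The final reviewer output is carried in the item's review field.
    onReviewDone(msg.params.item.review);
  }
}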
Command execution
command/exec runs a single command (argv array) under the server sandbox without creating a thread.
{ "method": "command/exec", "id": 50, "params": {
"command": ["ls", "-la"],
"cwd": "/Users/me/project",
"sandboxPolicy": { "type": "workspaceWrite" },
"timeoutMs": 10000
} }
{ "id": 50, "result": { "exitCode": 0, "stdout": "...", "stderr": "" } }
Use sandboxPolicy.type = "externalSandbox" if you already sandbox the server process and want Codex to skip its own sandbox enforcement. For external sandbox mode, set networkAccess to restricted (default) or enabled. For other sandbox policies, networkAccess is a boolean.
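If the server already runs inside your own sandbox, a command/exec request might look like the following sketch; the id, command, and timeout are illustrative:
// Sketch: run a command while skipping Codex's own sandbox enforcement.
// For externalSandbox, networkAccess is a string ("restricted" or "enabled"), not a boolean.
const execRequest = {
  method: "command/exec",
  id: 51,
  params: {
    command: ["git", "status", "--short"],
    cwd: "/Users/me/project",
    sandboxPolicy: { type: "externalSandbox", networkAccess: "restricted" },
    timeoutMs: 10000,
  },
};
// Write execRequest as one JSON line to the app-server's stdin.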
Notes:
- The server rejects empty command arrays.
- sandboxPolicy accepts the same shape used by turn/start (for example, dangerFullAccess, readOnly, workspaceWrite, externalSandbox).
- When omitted, timeoutMs falls back to the server default.
Events
Event notifications are the server-initiated stream for thread lifecycles, turn lifecycles, and the items within them. After you start or resume a thread, keep reading stdout for thread/started, turn/*, and item/* notifications.
Turn events
- turn/started - { turn } with the turn id, empty items, and status: "inProgress".
- turn/completed - { turn } where turn.status is completed, interrupted, or failed; failures carry { error: { message, codexErrorInfo?, additionalDetails? } }.
- turn/diff/updated - { threadId, turnId, diff } with the latest aggregated unified diff across every file change in the turn.
- turn/plan/updated - { turnId, explanation?, plan } whenever the agent shares or changes its plan; each plan entry is { step, status } with status in pending, inProgress, or completed.
- thread/tokenUsage/updated - usage updates for the active thread.
turn/diff/updated and turn/plan/updated currently include empty items arrays even when item events stream. Use item/* notifications as the source of truth for turn items.
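A minimal dispatcher over these notifications might look like the sketch below; onDiff, onPlan, and onDone are placeholder callbacks for your UI:
type TurnNotification = { method: string; params?: any };
function handleTurnNotification(
  msg: TurnNotification,
  ui: {
    onDiff(diff: string): void;
    onPlan(plan: Array<{ step: string; status: string }>): void;
    onDone(status: string): void;
  },
) {
  switch (msg.method) {
    case "turn/diff/updated":
      ui.onDiff(msg.params.diff); // latest aggregated unified diff
      break;
    case "turn/plan/updated":
      ui.onPlan(msg.params.plan); // [{ step, status }, ...]
      break;
    case "turn/completed":
      ui.onDone(msg.params.turn.status); // completed | interrupted | failed
      break;
  }
}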
Items
ThreadItem is the tagged union carried in turn responses and item/* notifications. Common item types include:
- userMessage - { id, content } where content is a list of user inputs (text, image, or localImage).
- agentMessage - { id, text } containing the accumulated agent reply.
- reasoning - { id, summary, content } where summary holds streamed reasoning summaries and content holds raw reasoning blocks.
- commandExecution - { id, command, cwd, status, commandActions, aggregatedOutput?, exitCode?, durationMs? }.
- fileChange - { id, changes, status } describing proposed edits; changes list { path, kind, diff }.
- mcpToolCall - { id, server, tool, status, arguments, result?, error? }.
- collabToolCall - { id, tool, status, senderThreadId, receiverThreadId?, newThreadId?, prompt?, agentStatus? }.
- webSearch - { id, query } for web search requests issued by the agent.
- imageView - { id, path } emitted when the agent invokes the image viewer tool.
- enteredReviewMode - { id, review } sent when the reviewer starts.
- exitedReviewMode - { id, review } emitted when the reviewer finishes.
- compacted - { threadId, turnId } when Codex compacts the conversation history.
All items emit two shared lifecycle events:
- item/started - emits the full item when a new unit of work begins; the item.id matches the itemId used by deltas.
- item/completed - sends the final item once work finishes; treat this as the authoritative state.
Item deltas
- item/agentMessage/delta - appends streamed text for the agent message.
- item/reasoning/summaryTextDelta - streams readable reasoning summaries; summaryIndex increments when a new summary section opens.
- item/reasoning/summaryPartAdded - marks a boundary between reasoning summary sections.
- item/reasoning/textDelta - streams raw reasoning text (when supported by the model).
- item/commandExecution/outputDelta - streams stdout/stderr for a command; append deltas in order.
- item/fileChange/outputDelta - contains the tool call response of the underlying apply_patch tool call.
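One way to consume these events is to keep a buffer per itemId, append deltas as they stream, and replace the buffer with the final item on item/completed. A rough sketch; the delta payload field names (itemId, delta) are assumptions, so check the generated schema for your Codex version:
// Sketch: accumulate streamed agent text per item until the final item arrives.
const agentText = new Map<string, string>();
function handleItemNotification(msg: { method: string; params?: any }) {
  const { method, params } = msg;
  if (method === "item/agentMessage/delta") {
    const prev = agentText.get(params.itemId) ?? "";
    agentText.set(params.itemId, prev + params.delta);
  } else if (method === "item/completed" && params?.item?.type === "agentMessage") {
    // item/completed is authoritative; prefer its full text over the accumulated deltas.
    agentText.set(params.item.id, params.item.text);
  }
}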
Errors
If a turn fails, the server emits an error event with { error: { message, codexErrorInfo?, additionalDetails? } } and then finishes the turn with status: "failed".
Common codexErrorInfo values include:
- ContextWindowExceeded
- UsageLimitExceeded
- HttpConnectionFailed (4xx/5xx upstream errors)
- ResponseStreamConnectionFailed
- ResponseStreamDisconnected
- ResponseTooManyFailedAttempts
- BadRequest, Unauthorized, SandboxError, InternalServerError, Other
When an upstream HTTP status is available, the server forwards it in httpStatusCode on the relevant codexErrorInfo variant.
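A client might map these failures to user-facing hints roughly as sketched below. The type discriminator on codexErrorInfo and the message strings are assumptions for illustration; consult the generated schema for the exact variant shape:
// Sketch: turn a failed turn's error info into a display message.
function describeTurnFailure(error: { message: string; codexErrorInfo?: any }): string {
  const info = error.codexErrorInfo;
  if (info?.type === "ContextWindowExceeded") return "The conversation is too long; start a new thread or roll back turns.";
  if (info?.type === "UsageLimitExceeded") return "Usage limit reached; check your plan's rate limits.";
  if (info?.httpStatusCode) return `Upstream error (HTTP ${info.httpStatusCode}): ${error.message}`;
  return error.message;
}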
Approvals
Depending on a user’s Codex settings, command execution and file changes may require approval. The app-server sends a server-initiated JSON-RPC request to the client, and the client responds with { "decision": "accept" | "decline" } (plus optional acceptSettings for command approvals).
- Requests include threadId and turnId - use them to scope UI state to the active conversation.
- The server resumes or declines the work and ends the item with item/completed.
Command execution approvals
Order of messages:
- item/started shows the pending commandExecution item with command, cwd, and other fields.
- item/commandExecution/requestApproval includes itemId, threadId, turnId, optional reason or risk, plus parsedCmd for display.
- Client response accepts or declines (optionally setting acceptSettings).
- item/completed returns the final commandExecution item with status: completed | failed | declined.
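Because the approval arrives as a server-initiated JSON-RPC request, the client replies on the same id. A sketch; send is the stdin writer from Getting started and askUser is a placeholder for your own confirmation prompt:
// Sketch: surface a command approval request to the user and relay the decision.
async function handleCommandApproval(
  msg: { id: number; method: string; params: { itemId: string; threadId: string; turnId: string; reason?: string } },
  send: (message: unknown) => void,
  askUser: (question: string) => Promise<boolean>,
) {
  if (msg.method !== "item/commandExecution/requestApproval") return;
  const ok = await askUser(msg.params.reason ?? "Codex wants to run a command. Allow?");
  // Reply on the same id; the server then resumes or declines the pending commandExecution item.
  send({ id: msg.id, result: { decision: ok ? "accept" : "decline" } });
}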
File change approvals
Order of messages:
- item/started emits a fileChange item with proposed changes and status: "inProgress".
- item/fileChange/requestApproval includes itemId, threadId, turnId, and an optional reason.
- Client response accepts or declines.
- item/completed returns the final fileChange item with status: completed | failed | declined.
Skills
Invoke a skill by including $<skill-name> in the user text input. Add a skill input item (recommended) so the server injects full skill instructions instead of relying on the model to resolve the name.
{
"method": "turn/start",
"id": 101,
"params": {
"threadId": "thread-1",
"input": [
{
"type": "text",
"text": "$skill-creator Add a new skill for triaging flaky CI."
},
{
"type": "skill",
"name": "skill-creator",
"path": "/Users/me/.codex/skills/skill-creator/SKILL.md"
}
]
}
}
If you omit the skill item, the model will still parse the $<skill-name> marker and try to locate the skill, which can add latency.
Example:
$skill-creator Add a new skill for triaging flaky CI and include step-by-step usage.
Use skills/list to fetch the available skills (optionally scoped by cwds, with forceReload).
{ "method": "skills/list", "id": 25, "params": {
"cwds": ["/Users/me/project"],
"forceReload": false
} }
{ "id": 25, "result": {
"data": [{
"cwd": "/Users/me/project",
"skills": [
{ "name": "skill-creator", "description": "Create or update a Codex skill", "enabled": true }
],
"errors": []
}]
} }
To enable or disable a skill by path:
{
"method": "skills/config/write",
"id": 26,
"params": {
"path": "/Users/me/.codex/skills/skill-creator/SKILL.md",
"enabled": false
}
}
Auth endpoints
The JSON-RPC auth/account surface exposes request/response methods plus server-initiated notifications (no id). Use these to determine auth state, start or cancel logins, logout, and inspect ChatGPT rate limits.
API overview
- account/read - fetch current account info; optionally refresh tokens.
- account/login/start - begin login (apiKey or chatgpt).
- account/login/completed (notify) - emitted when a login attempt finishes (success or error).
- account/login/cancel - cancel a pending ChatGPT login by loginId.
- account/logout - sign out; triggers account/updated.
- account/updated (notify) - emitted whenever auth mode changes (authMode: apikey, chatgpt, or null).
- account/rateLimits/read - fetch ChatGPT rate limits.
- account/rateLimits/updated (notify) - emitted whenever a user’s ChatGPT rate limits change.
- mcpServer/oauthLogin/completed (notify) - emitted after a mcpServer/oauth/login flow finishes; payload includes { name, success, error? }.
1) Check auth state
Request:
{ "method": "account/read", "id": 1, "params": { "refreshToken": false } }
Response examples:
{ "id": 1, "result": { "account": null, "requiresOpenaiAuth": false } }
{ "id": 1, "result": { "account": null, "requiresOpenaiAuth": true } }
{
"id": 1,
"result": { "account": { "type": "apiKey" }, "requiresOpenaiAuth": true }
}
{
"id": 1,
"result": {
"account": {
"type": "chatgpt",
"email": "user@example.com",
"planType": "pro"
},
"requiresOpenaiAuth": true
}
}
Field notes:
- refreshToken (boolean): set true to force a token refresh.
- requiresOpenaiAuth reflects the active provider; when false, Codex can run without OpenAI credentials.
2) Log in with an API key
- Send:
  { "method": "account/login/start", "id": 2, "params": { "type": "apiKey", "apiKey": "sk-..." } }
- Expect:
  { "id": 2, "result": { "type": "apiKey" } }
- Notifications:
  { "method": "account/login/completed", "params": { "loginId": null, "success": true, "error": null } }
  { "method": "account/updated", "params": { "authMode": "apikey" } }
3) Log in with ChatGPT (browser flow)
- Start:
  { "method": "account/login/start", "id": 3, "params": { "type": "chatgpt" } }
  { "id": 3, "result": { "type": "chatgpt", "loginId": "<uuid>", "authUrl": "https://chatgpt.com/...&redirect_uri=http%3A%2F%2Flocalhost%3A<port>%2Fauth%2Fcallback" } }
- Open authUrl in a browser; the app-server hosts the local callback.
- Wait for notifications:
  { "method": "account/login/completed", "params": { "loginId": "<uuid>", "success": true, "error": null } }
  { "method": "account/updated", "params": { "authMode": "chatgpt" } }
4) Cancel a ChatGPT login
{ "method": "account/login/cancel", "id": 4, "params": { "loginId": "<uuid>" } }
{ "method": "account/login/completed", "params": { "loginId": "<uuid>", "success": false, "error": "..." } }
5) Logout
{ "method": "account/logout", "id": 5 }
{ "id": 5, "result": {} }
{ "method": "account/updated", "params": { "authMode": null } }
6) Rate limits (ChatGPT)
{ "method": "account/rateLimits/read", "id": 6 }
{ "id": 6, "result": { "rateLimits": { "primary": { "usedPercent": 25, "windowDurationMins": 15, "resetsAt": 1730947200 }, "secondary": null } } }
{ "method": "account/rateLimits/updated", "params": { "rateLimits": { } } }
Field notes:
- usedPercent is current usage within the OpenAI quota window.
- windowDurationMins is the quota window length.
- resetsAt is a Unix timestamp (seconds) for the next reset.
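Because resetsAt is in seconds, multiply by 1000 before doing date math in JavaScript. A small display sketch:
// Sketch: turn rate limit fields into a display string (resetsAt is Unix seconds).
function describeRateLimit(window: { usedPercent: number; windowDurationMins: number; resetsAt: number }): string {
  const resetsInMins = Math.max(0, Math.round((window.resetsAt * 1000 - Date.now()) / 60000));
  return `${window.usedPercent}% of the ${window.windowDurationMins}-minute window used; resets in ~${resetsInMins} min`;
}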