Developer changelog

Latest updates across OpenAI Developers, including new docs, features, and site improvements.

Subscribe to the RSS feed for updates.

  • Codex

    Codex app v260205

    New features

    • Support for GPT-5.3-Codex.
    • Added mid-turn steering. Submit a message while Codex is working to direct its behavior.
    • Attach or drop any file type.

    Bug fixes

    • Fixed flickering in the app.
  • Codex

    Introducing GPT-5.3-Codex

    Today we’re releasing GPT-5.3-Codex, the most capable agentic coding model to date for complex, real-world software engineering.

    GPT-5.3-Codex combines the frontier coding performance of GPT-5.2-Codex with stronger reasoning and professional knowledge capabilities, and runs 25% faster for Codex users. It’s also better at collaboration while the agent is working—delivering more frequent progress updates and responding to steering in real time.

    GPT-5.3-Codex is available with paid ChatGPT plans everywhere you can use Codex: the Codex app, the CLI, the IDE extension, and Codex Cloud on the web. API access for the model will come soon.

    To switch to GPT-5.3-Codex:

    • In the CLI, start a new thread with:
      codex --model gpt-5.3-codex
      Or use /model during a session.
    • In the IDE extension, make sure you are signed in with ChatGPT, then choose GPT-5.3-Codex from the model selector in the composer.
    • In the Codex app, make sure you are signed in with ChatGPT, then choose GPT-5.3-Codex from the model selector in the composer.
    • If you don’t see GPT-5.3-Codex, update the CLI, IDE extension, or Codex app to the latest version.
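
    As with previous Codex model releases, you can also set it as your default for all sessions by updating your config.toml configuration file:

    model = "gpt-5.3-codex"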

    For API-key workflows, continue using gpt-5.2-codex while API support rolls out.

  • Codex

    Codex CLI 0.98.0

    $ npm install -g @openai/codex@0.98.0
    View details

    New Features

    • Introducing GPT-5.3-Codex. Learn More
    • Steer mode is now stable and enabled by default, so Enter sends immediately during running tasks while Tab explicitly queues follow-up input. (#10690)

    Bug Fixes

    • Fixed resumeThread() argument ordering in the TypeScript SDK so resuming with local images no longer starts an unintended new session. (#10709)
    • Fixed model-instruction handling when changing models mid-conversation or resuming with a different model, ensuring the correct developer instructions are applied. (#10651, #10719)
    • Fixed a remote compaction mismatch where token pre-estimation and compact payload generation could use different base instructions, improving trim accuracy and avoiding context overflows. (#10692)
    • Cloud requirements now reload immediately after login instead of requiring a later refresh path to take effect. (#10725)

    Chores

    • Restored the default assistant personality to Pragmatic across config and related tests/UI snapshots. (#10705)
    • Unified collaboration mode naming and metadata across prompts, tools, protocol types, and TUI labels for more consistent mode behavior and messaging. (#10666)

    Changelog

    Full Changelog: rust-v0.97.0...rust-v0.98.0

    Full release on GitHub

  • Codex

    Codex CLI 0.97.0

    $ npm install -g @openai/codex@0.97.0
    View details

    New Features

    • Added a session-scoped “Allow and remember” option for MCP/App tool approvals, so repeated calls to the same tool can be auto-approved during the session. (#10584)
    • Added live skill update detection, so skill file changes are picked up without restarting. (#10478)
    • Added support for mixed text and image content in dynamic tool outputs for app-server integrations. (#10567)
    • Added a new /debug-config slash command in the TUI to inspect effective configuration. (#10642)
    • Introduced initial memory plumbing (API client + local persistence) to support thread memory summaries. (#10629, #10634)
    • Added configurable log_dir so logs can be redirected (including via -c overrides) more easily. (#10678)

    Bug Fixes

    • Fixed jitter in the TUI apps/connectors picker by stabilizing description-column rendering. (#10593)
    • Restored and stabilized the TUI “working” status indicator/shimmer during preamble and early exec flows. (#10700, #10701)
    • Improved cloud requirements reliability with higher timeouts, retries, and corrected precedence over MDM settings. (#10631, #10633, #10659)
    • Persisted pending-input user events more consistently for mid-turn injected input handling. (#10656)

    Documentation

    • Documented how to opt in to the experimental app-server API. (#10667)
    • Updated docs/schema coverage for new log_dir configuration behavior. (#10678)

    Chores

    • Added a gated Bubblewrap (bwrap) Linux sandbox path to improve filesystem isolation options. (#9938)
    • Refactored model client lifecycle to be session-scoped and reduced implicit client state. (#10595, #10664)
    • Added caching for MCP actions from apps to reduce repeated load latency for users with many installed apps. (#10662)
    • Added a none personality option in protocol/config surfaces. (#10688)

    Changelog

    Full Changelog: rust-v0.96.0...rust-v0.97.0

    Full release on GitHub

  • Codex

    Codex app v260204

    New features

    • Added Zed and TextMate as options to open files and folders.
    • Added PDF preview in the review panel.

    Bug fixes

    • Performance improvements.
  • Codex

    Codex CLI 0.96.0

    $ npm install -g @openai/codex@0.96.0
    View details

    New Features

    • Added thread/compact to the v2 app-server API as an async trigger RPC, so clients can start compaction immediately and track completion separately. (#10445)
    • Added websocket-side rate limit signaling via a new codex.rate_limits event, with websocket parity for ETag/reasoning metadata handling. (#10324)
    • Enabled unified_exec on all non-Windows platforms. (#10641)
    • Constrained requirement values now include source provenance, enabling source-aware config debugging in UI flows like /debug-config. (#10568)

    Bug Fixes

    • Fixed Esc handling in the TUI request_user_input overlay: when notes are open, Esc now exits notes mode instead of interrupting the session. (#10569)
    • Thread listing now queries the state DB first (including archived threads) and falls back to filesystem traversal only when needed, improving listing correctness and resilience. (#10544)
    • Fixed thread path lookup to require that the resolved file actually exists, preventing invalid thread-id resolutions. (#10618)
    • Dynamic tool injection now runs in a single transaction to avoid partial state updates. (#10614)
    • Refined request_rule guidance used in approval-policy prompting to correct rule behavior. (#10379, #10598)

    Documentation

    • Updated app-server docs for thread/compact to clarify its asynchronous behavior and thread-busy lifecycle. (#10445)
    • Updated TUI docs to match the mode-specific Esc behavior in request_user_input. (#10569)

    Chores

    • Migrated state DB helpers to a versioned SQLite filename scheme and cleaned up legacy state files during runtime initialization. (#10623)
    • Expanded runtime telemetry with websocket timing metrics and simplified internal metadata flow in core client plumbing. (#10577, #10589)

    Changelog

    Full Changelog: rust-v0.95.0...rust-v0.96.0

    Full release on GitHub

  • Codex

    Codex CLI 0.95.0

    $ npm install -g @openai/codex@0.95.0
    View details

    New Features

    • Added codex app <path> on macOS to launch Codex Desktop from the CLI, with automatic DMG download if it is missing. (#10418)
    • Added personal skill loading from ~/.agents/skills (with ~/.codex/skills compatibility), plus app-server APIs/events to list and download public remote skills. (#10437, #10448)
    • /plan now accepts inline prompt arguments and pasted images, and slash-command editing/highlighting in the TUI is more polished. (#10269)
    • Shell-related tools can now run in parallel, improving multi-command execution throughput. (#10505)
    • Shell executions now receive CODEX_THREAD_ID, so scripts and skills can detect the active thread/session. (#10096)
    • Added vendored Bubblewrap + FFI wiring in the Linux sandbox as groundwork for upcoming runtime integration. (#10413)

    Bug Fixes

    • Hardened Git command safety so destructive or write-capable invocations no longer bypass approval checks. (#10258)
    • Improved resume/thread browsing reliability by correctly showing saved thread names and fixing thread listing behavior. (#10340, #10383)
    • Fixed first-run trust-mode handling so sandbox mode is reported consistently, and made $PWD/.agents read-only like $PWD/.codex. (#10415, #10524)
    • Fixed codex exec hanging after interrupt in websocket/streaming flows; interrupted turns now shut down cleanly. (#10519)
    • Fixed review-mode approval event wiring so requestApproval IDs align with the corresponding command execution items. (#10416)
    • Improved 401 error diagnostics by including server message/body details plus cf-ray and requestId. (#10508)

    Documentation

    • Expanded TUI chat composer docs to cover slash-command arguments and attachment handling in plan/review flows. (#10269)
    • Refreshed issue templates and labeler prompts to better separate CLI/app bug reporting and feature requests. (#10411, #10453, #10548, #10552)

    Chores

    • Completed migration off the deprecated mcp-types crate to rmcp-based protocol types/adapters, then removed the legacy crate. (#10356, #10349, #10357)
    • Updated the bytes dependency for a security advisory and cleaned up resolved advisory configuration. (#10525)

    Changelog

    Full Changelog: rust-v0.94.0...rust-v0.95.0

    Full release on GitHub

  • Resources

    Added ChatGPT Apps lessons guest post

    Fixes & improvements

    • Published the guest post “15 lessons learned building ChatGPT Apps” by Nikolay Rodionov.

  • Codex

    Introducing the Codex app

    Codex app

    The Codex app for macOS is a desktop interface for running agent threads in parallel and collaborating with agents on long-running tasks. It includes a project sidebar, thread list, and review pane for tracking work across projects.

    For a limited time, ChatGPT Free and Go include Codex, and Plus, Pro, Business, Enterprise, and Edu plans get double rate limits. Those higher limits apply in the app, the CLI, your IDE, and the cloud.

    Learn more in the Introducing the Codex app blog post.

    Check out the Codex app documentation for more.

  • Codex

    Codex CLI 0.94.0

    $ npm install -g @openai/codex@0.94.0
    View details

    New Features

    • Plan mode is now enabled by default with updated interaction guidance in the plan prompt. (#10313, #10308, #10329)
    • Personality configuration is now stable: default is friendly, the config key is personality, and existing settings migrate forward. (#10305, #10314, #10310, #10307)
    • Skills can be loaded from .agents/skills, with clearer relative-path instructions and nested-folder markers supported. (#10317, #10282, #10350)
    • Console output now includes runtime metrics for easier diagnostics. (#10278)

    Bug Fixes

    • Unarchiving a thread updates its timestamp so sidebar ordering refreshes. (#10280)
    • Conversation rules output is capped and prefix rules are deduped to avoid repeated rules. (#10351, #10309)
    • Override turn context no longer appends extra items. (#10354)

    Documentation

    • Fixed a broken image link in the npm README. (#10303)

    Changelog

    Full Changelog: rust-v0.93.0...rust-v0.94.0

    Full release on GitHub

  • Codex

    Codex CLI 0.93.0

    $ npm install -g @openai/codex@0.93.0
    View details

    New Features

    • Added an optional SOCKS5 proxy listener with policy enforcement and config gating. (#9803)
    • Plan mode now streams proposed plans into a dedicated TUI view, plus a feature-gated /plan shortcut for quick mode switching. (#9786, #10103)
    • Added /apps to browse connectors in TUI and $ insertion for app prompts. (#9728)
    • App-server can now run in external auth mode, accepting ChatGPT auth tokens from a host app and requesting refreshes when needed. (#10012)
    • Smart approvals are now enabled by default, with explicit approval prompts for MCP tool calls. (#10286, #10200)
    • Introduced a SQLite-backed log database with an improved logs client, thread-id filtering, retention, and heuristic coloring. (#10086, #10087, #10150, #10151, #10229, #10228)

    Bug Fixes

    • MCP tool image outputs render reliably even when image blocks aren’t first or are partially malformed. (#9815)
    • Input history recall now restores local image attachments and rich text elements. (#9628)
    • File search now tracks session CWD changes and supports multi-root traversal with better performance. (#9279, #9939, #10240)
    • Resuming a thread no longer updates updated_at until the first turn actually starts. (#9950)
    • Shell snapshots no longer inherit stdin, avoiding hangs from startup scripts. (#9735)
    • Connections fall back to HTTP when WebSocket proxies fail. (#10139)

    Documentation

    • Documented app-server AuthMode usage and behavior. (#10191)

    Chores

    • Upgraded the Rust toolchain to 1.93. (#10080)
    • Updated pnpm versions used in the repo. (#9992, #10161)
    • Bazel build and runfiles handling improvements, including remote cache compression. (#10079, #10098, #10102, #10104)

    Changelog

    Full Changelog: rust-v0.92.0...rust-v0.93.0

    Full release on GitHub

  • Codex

    Web search is now enabled by default

    Codex now enables web search for local tasks in the Codex CLI and IDE Extension. By default, Codex uses a web search cache, which is an OpenAI-maintained index of web results. Cached mode returns pre-indexed results instead of fetching live pages, while live mode fetches the most recent data from the web. If you are using --yolo or another full access sandbox setting, web search defaults to live results. To disable this behavior or switch modes, use the web_search configuration option:

    • web_search = "cached" (default; serves results from the web search cache)
    • web_search = "live" (fetches the most recent data from the web; same as --search)
    • web_search = "disabled" (removes the tool)
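
    For example, to apply one of these modes to every session, you can set the option in your config.toml configuration file (shown here with live mode):

    web_search = "live"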

    To learn more, check out the configuration documentation.

  • Codex

    Codex CLI 0.92.0

    $ npm install -g @openai/codex@0.92.0
    View details

    New Features

    • API v2 threads can now inject dynamic tools at startup and route their calls/responses end-to-end through the server and core tool pipeline. (#9539)
    • Added filtering on the thread list in the app server to make large thread sets easier to browse. (#9897)
    • Introduced a thread/unarchive RPC to restore archived rollouts back into active sessions. (#9843)
    • MCP servers can now define OAuth scopes in config.toml, reducing the need to pass --scopes on each login. (#9647)
    • Multi-agent collaboration is more capable and safer, with an explorer role, better collab event mapping, and max-depth guardrails. (#9817, #9818, #9918, #9899)
    • Cached web_search is now the default client behavior. (#9974)

    Bug Fixes

    • Fixed a TUI deadlock/freeze under high streaming throughput by avoiding blocking sends on the main Tokio thread. (#9951)
    • The web_search tool now handles and displays all action types, and shows in-progress activity instead of appearing stuck. (#9960)
    • Reduced high CPU usage in collaboration flows by eliminating busy-waiting on subagents. (#9776)
    • Fixed codex resume --last --json so prompts parse correctly without conflicting argument errors. (#9475)
    • Windows sandbox logging now handles UTF-8 safely, preventing failures when truncating multibyte content. (#8647)
    • request_user_input is now rejected outside Plan/Pair modes to prevent invalid tool calls. (#9955)

    Documentation

    • Updated the contribution guidelines for clearer onboarding and workflow expectations. (#9933)
    • Refreshed protocol/MCP docs to reflect thread/unarchive and the updated request_user_input question shape. (#9843, #9890)

    Chores

    • Self-update via Homebrew now uses an explicit cask upgrade command to avoid warnings and ambiguity. (#9823)
    • Release packaging now consistently writes the bundle zip to dist/. (#9934)
    • Updated key dependencies in the Rust workspace (including axum, tracing, globset, and tokio-test). (#9880, #9882, #9883, #9884)
    • Aligned feature stage naming with public maturity stages and added clearer warnings for under-development features. (#9929, #9954)

    Changelog

    Full Changelog: rust-v0.91.0...rust-v0.92.0

    Full release on GitHub

  • Codex

    Codex CLI 0.91.0

  • Codex

    Codex CLI 0.90.0

    $ npm install -g @openai/codex@0.90.0
    View details

    New Features

    • Added a network sandbox proxy with policy enforcement to better control outbound network access. (#8442)
    • Introduced the first phase of connectors support via the app server and MCP integration, including new config/docs updates. (#9667)
    • Shipped collaboration mode as beta in the TUI, with a clearer plan → execute handoff and simplified mode selection (Coding vs Plan). (#9690, #9712, #9802, #9834)
    • Added ephemeral threads and improved collaboration tool provenance metadata for spawned threads. (#9765, #9769)
    • WebSocket connections now support proxy configuration. (#9719)
    • Stricter limits on multi-agent usage.

    Bug Fixes

    • Fixed exec policy parsing for multiline quoted arguments. (#9565)
    • --yolo now skips the git repository check instead of failing outside a repo. (#9590)
    • Improved resume reliability by handling out-of-order events and prompting for the working directory when it differs. (#9512, #9731)
    • Backspace no longer deletes a text element when the cursor is at the element’s left edge. (#9630)
    • Config loading errors are clearer and more actionable across surfaces. (#9746)
    • Default model selection now respects filtered presets to avoid invalid defaults. (#9782)

    Documentation

    • Corrected a typo in the experimental collaboration prompt template. (#9716)
    • Added documentation for the new connectors configuration surface. (#9667)

    Chores

    • Refreshed the bundled model catalog/presets. (#9726)
    • Updated GitHub Actions for Node 24 compatibility. (#9722)

    Changelog

    Full Changelog: rust-v0.89.0...rust-v0.90.0

    Full release on GitHub

  • Codex

    Team Config for shared configuration

    Team Config groups the files teams use to standardize Codex across repositories and machines. Use it to share:

    • config.toml defaults
    • rules/ for command controls outside the sandbox
    • skills/ for reusable workflows

    Codex loads these layers from .codex/ folders in the current working directory, parent folders, and the repo root, plus user (~/.codex/) and system (/etc/codex/) locations. Higher-precedence locations override lower-precedence ones.

    Admins can still enforce constraints with requirements.toml, which overrides defaults regardless of location.
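
    For example, a project sharing Team Config might be laid out like this, alongside the user and system layers (comments are illustrative):

    repo-root/
      .codex/
        config.toml   # shared defaults for the project
        rules/        # command controls outside the sandbox
        skills/       # reusable workflows

    ~/.codex/         # user-level configuration
    /etc/codex/       # system-level configuration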

    Learn more in Team Config.

  • Codex

    Custom prompts deprecated

    Custom prompts are now deprecated. Use skills for reusable instructions and workflows instead.

  • Resources

    Expanded llms.txt coverage

    Fixes & improvements

    • Expanded llms.txt coverage to include resources, blog posts, cookbook entries, and the developer changelog, with full exports where available.

  • Codex

    Codex CLI 0.89.0

    $ npm install -g @openai/codex@0.89.0
    View details

    New Features

    • Added a /permissions command with a shorter approval set while keeping /approvals for compatibility. (#9561)
    • Added a /skill UI to enable or disable individual skills. (#9627)
    • Improved slash-command selection by prioritizing exact and prefix matches over fuzzy matches. (#9629)
    • App server now supports thread/read and can filter archived threads in thread/list. (#9569, #9571)
    • App server clients now support layered config.toml resolution and config/read can compute effective config from a given cwd. (#9510)
    • Release artifacts now include a stable URL for the published config schema. (#9572)

    Bug Fixes

    • Prevented tilde expansion from escaping HOME on paths like ~//.... (#9621)
    • TUI turn timing now resets between assistant messages so elapsed time reflects the latest response. (#9599)

    Documentation

    • Updated MCP subcommand docs to match current CLI behavior. (#9622)
    • Refreshed the skills/list protocol README example to match the latest response shape. (#9623)

    Chores

    • Removed the TUI2 experiment and its related config/docs, keeping Codex on the terminal-native UI. (#9640)

    Changelog

    Full Changelog: rust-v0.88.0...rust-v0.89.0

    Full release on GitHub

  • Apps SDK

    Company knowledge compatibility guidance

  • Codex

    Codex CLI 0.88.0

    $ npm install -g @openai/codex@0.88.0
    View details

    New Features

    • Added device-code auth as a standalone fallback in headless environments. (#9333)

    Bug Fixes

    • Load configs from trusted folders only and fix symlinked config.toml resolution. (#9533, #9445)
    • Fixed Azure endpoint invalid input errors. (#9387)
    • Resolved a memory leak in core runtime. (#9543)
    • Prevented interrupted turns from repeating. (#9043)
    • Fixed WSL TUI image paste regression. (#9473)

    Documentation

    • Updated MCP documentation link to the current destination. (#9490)
    • Corrected a “Multi-agents” naming typo in docs. (#9542)
    • Added developer instructions for collaboration modes. (#9424)

    Changelog

    Full Changelog: rust-v0.87.0...rust-v0.88.0

    Full release on GitHub

  • Codex

    Codex CLI 0.87.0

    $ npm install -g @openai/codex@0.87.0
    View details

    New Features

    • User message metadata (text elements and byte ranges) now round-trips through protocol/app-server/core so UI annotations can survive history rebuilds. (#9331)
    • Collaboration wait calls can block on multiple IDs in one request, simplifying multi-thread coordination. (#9294)
    • User shell commands now run under the user snapshot so aliases and shell config are honored. (#9357)
    • The TUI now surfaces approval requests from spawned/unsubscribed threads. (#9232)

    Bug Fixes

    • Token estimation during compaction is now accurate, improving budgeting during long sessions. (#9337)
    • MCP CallToolResult now includes threadId in both content and structuredContent, and returns a defined output schema for compatibility. (#9338)
    • The TUI “Worked for” separator only appears after actual work has occurred. (#8958)
    • Piped non-PTY commands no longer hang waiting on stdin. (#9369)

    Documentation

    • MCP interface docs updated to reflect structured output schema and threadId behavior. (#9338)

    Chores

    • Windows builds enable the PowerShell UTF-8 feature by default. (#9195)

    Changelog

    Full Changelog: rust-v0.86.0...rust-v0.87.0

    Full release on GitHub

  • Codex

    Codex CLI 0.86.0

    $ npm install -g @openai/codex@0.86.0
    View details

    New Features

    • Skill metadata can now be defined in SKILL.toml (names, descriptions, icons, brand color, default prompt) and surfaced in the app server and TUI (#9125)
    • Clients can explicitly disable web search and signal eligibility via a header to align with server-side rollout controls (#9249)

    Bug Fixes

    • Accepting an MCP elicitation now sends an empty JSON payload instead of null to satisfy servers expecting content (#9196)
    • Input prompt placeholder styling is back to non-italic to avoid terminal rendering issues (#9307)
    • Empty paste events no longer trigger clipboard image reads (#9318)
    • Unified exec cleans up background processes to prevent late End events after listeners stop (#9304)

    Chores

    • Refresh the orchestrator prompt to improve internal routing behavior (#9301)
    • Reduce noisy needs_follow_up error logging (#9272)

    Changelog

    Full Changelog: rust-v0.85.0...rust-v0.86.0

    Full release on GitHub

  • Apps SDK

    Session metadata for tool calls & requestModal template switching

    Fixes & improvements

    • Tool calls now include _meta["openai/session"], an anonymized conversation id you can use to correlate requests within a ChatGPT session.

    • window.openai.requestModal({ template }) now supports opening a different registered UI template by passing the template URI from registerResource.
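
    A minimal TypeScript sketch of both items above; the helper name, the way _meta is reached, and the template URI are illustrative placeholders rather than a specific SDK surface:

    // Server side: correlate tool calls within one ChatGPT session using the
    // anonymized conversation id. How you access _meta depends on your MCP
    // server framework; this helper only reads the value.
    function sessionId(meta: Record<string, unknown> | undefined): string | undefined {
      return meta?.["openai/session"] as string | undefined;
    }

    // Widget side (runs inside ChatGPT): open another registered UI template
    // by passing the template URI you used with registerResource.
    await (window as any).openai.requestModal({ template: "ui://my-app/details.html" });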

  • Codex

    Codex CLI 0.85.0

    $ npm install -g @openai/codex@0.85.0
    View details

    New Features

    • App-server v2 now emits collaboration tool calls as item events in the turn stream, so clients can render agent coordination in real time. (#9213)
    • Collaboration tools gained richer agent control: spawn_agent accepts an agent role preset, and send_input can optionally interrupt a running agent before delivering the message. (#9275, #9276)
    • /models metadata now includes upgrade migration markdown so clients can display richer guidance when suggesting model upgrades. (#9219)

    Bug Fixes

    • [revert] Linux sandboxing now falls back to Landlock-only restrictions when user namespaces are unavailable, and sets no_new_privs before applying sandbox rules. (#9250)
    • codex resume --last now respects the current working directory, with --all as an explicit override. (#9245)
    • Stdin prompt decoding now handles BOMs/UTF-16 and provides clearer errors for invalid encodings. (#9151)

    Changelog

    Full Changelog: rust-v0.84.0...rust-v0.85.0

    Full release on GitHub

  • Codex

    Codex CLI 0.84.0

    $ npm install -g @openai/codex@0.84.0
    View details

    New Features

    • Extend the Rust protocol/types to include additional metadata on text elements, enabling richer client rendering and schema evolution (#9235)

    Chores

    • Reduce flaky Rust release pipelines (notably on Windows) by increasing the release build job timeout (#9242)

    Changelog

    Full Changelog: rust-v0.83.0...rust-v0.84.0

    Full release on GitHub

  • Codex

    GPT-5.2-Codex API availability

    GPT-5.2-Codex is now available in the API and for users who sign into Codex with the API.

    To learn more about using GPT-5.2-Codex, check out our API documentation.

  • Codex

    Codex CLI 0.81.0

    $ npm install -g @openai/codex@0.81.0
    View details

    New Features

    • Default API model moved to gpt-5.2-codex. (#9188)
    • The codex tool in codex mcp-server now includes the threadId in the response so it can be used with the codex-reply tool, fixing #3712. The documentation has been updated at https://developers.openai.com/codex/guides/agents-sdk/. (#9192)
    • Headless runs now switch to device-code login automatically so sign-in works without a browser. (#8756)
    • Linux sandbox can mount paths read-only to better protect files from writes. (#9112)
    • Added support for rendering partial tool calls in the TUI.

    Bug Fixes

    • Alternate-screen handling now avoids breaking Zellij scrollback and adds a config/flag to control it. (#8555)
    • Windows correctly prompts before unsafe commands when running with a read-only sandbox policy. (#9117)
    • Config.toml and rules parsing errors are reported to app-server clients/TUI instead of failing silently. (#9182, #9011)
    • Worked around a macOS system-configuration crash in proxy discovery. (#8954)
    • Invalid user image uploads now surface an error instead of being silently replaced. (#9146)

    Documentation

    • Published a generated JSON Schema for config.toml in docs/ to validate configs. (#8956)
    • Documented the TUI paste-burst state machine for terminals without reliable bracketed paste. (#9020)

    Chores

    • Added Bazel build support plus a just bazel-codex helper for contributors. (#8875, #9177)

    Changelog

    Full Changelog: rust-v0.80.0...rust-v0.81.0

    Full release on GitHub

  • Codex

    Agent skills in Codex

    Codex now supports agent skills: reusable bundles of instructions (plus optional scripts and resources) that help Codex reliably complete specific tasks.

    Skills are available in both the Codex CLI and IDE extensions.

    You can invoke a skill explicitly by typing $skill-name (for example, $skill-installer or the experimental $create-plan skill after installing it), or let Codex select a skill automatically based on your prompt.

    Learn more in the skills documentation.

    Folder-based standard (agentskills.io)

    Following the open agent skills specification, a skill is a folder with a required SKILL.md and optional supporting files:

    my-skill/
      SKILL.md       # Required: instructions + metadata
      scripts/       # Optional: executable code
      references/    # Optional: documentation
      assets/        # Optional: templates, resources

    Install skills per-user or per-repo

    You can install skills for just yourself in ~/.codex/skills, or for everyone on a project by checking them into .codex/skills in the repository.

    Codex also ships with a few built-in system skills to get started, including $skill-creator and $skill-installer. The $create-plan skill is experimental and needs to be installed (for example: $skill-installer install the create-plan skill from the .experimental folder).

    Curated skills directory

    Codex ships with a small curated set of skills inspired by popular workflows at OpenAI. Install them with $skill-installer, and expect more over time.

  • Codex

    Introducing GPT-5.2-Codex

    Today we are releasing GPT-5.2-Codex, the most advanced agentic coding model yet for complex, real-world software engineering.

    GPT-5.2-Codex is a version of GPT-5.2 further optimized for agentic coding in Codex, including improvements on long-horizon work through context compaction, stronger performance on large code changes like refactors and migrations, improved performance in Windows environments, and significantly stronger cybersecurity capabilities.

    Starting today, the CLI and IDE Extension will default to gpt-5.2-codex for users who are signed in with ChatGPT. API access for the model will come soon.

    If you have a model specified in your config.toml configuration file, you can instead try out gpt-5.2-codex for a new Codex CLI session using:

    codex --model gpt-5.2-codex

    You can also use the /model slash command in the CLI. In the Codex IDE Extension you can select GPT-5.2-Codex from the dropdown menu.

    If you want to switch for all sessions, you can change your default model to gpt-5.2-codex by updating your config.toml configuration file:

    model = "gpt-5.2-codex"
  • Codex

    Introducing Codex for Linear

    Assign or mention @Codex in an issue to kick off a Codex cloud task. As Codex works, it posts updates back to Linear, providing a link to the completed task so you can review, open a PR, or keep working.

    Screenshot of a successful Codex task started in Linear

    To learn more about how to connect Codex to Linear both locally through MCP and through the new integration, check out the Codex for Linear documentation.

  • Codex

    Usage and credits fixes

    Minor updates to address a few issues with Codex usage and credits:

    • Adjusted all usage dashboards to show “limits remaining” for consistency. The CLI previously displayed “limits used.”
    • Fixed an issue preventing users from buying credits if their ChatGPT subscription was purchased via iOS or Google Play.
    • Fixed an issue where the CLI could display stale usage information; it now refreshes without needing to send a message first.
    • Optimized the backend to help smooth out usage throughout the day, irrespective of overall Codex load or how traffic is routed. Before, users could get unlucky and hit a few cache misses in a row, leading to much less usage.
  • Codex

    Introducing GPT-5.1-Codex-Max

    Today we are releasing GPT-5.1-Codex-Max, our new frontier agentic coding model.

    GPT-5.1-Codex-Max is built on an update to our foundational reasoning model, which is trained on agentic tasks across software engineering, math, research, and more. GPT-5.1-Codex-Max is faster, more intelligent, and more token-efficient at every stage of the development cycle, and is a new step towards becoming a reliable coding partner.

    Starting today, the CLI and IDE Extension will default to gpt-5.1-codex-max for users who are signed in with ChatGPT. API access for the model will come soon.

    For non-latency-sensitive tasks, we’ve also added a new Extra High (xhigh) reasoning effort, which lets the model think for an even longer period of time for a better answer. We still recommend medium as your daily driver for most tasks.

    If you have a model specified in your config.toml configuration file, you can instead try out gpt-5.1-codex-max for a new Codex CLI session using:

    codex --model gpt-5.1-codex-max

    You can also use the /model slash command in the CLI. In the Codex IDE Extension you can select GPT-5.1-Codex-Max from the dropdown menu.

    If you want to switch for all sessions, you can change your default model to gpt-5.1-codex-max by updating your config.toml configuration file:

    model = "gpt-5.1-codex-max"
  • Codex

    Introducing GPT-5.1-Codex and GPT-5.1-Codex-Mini

    Along with the GPT-5.1 launch in the API, we are introducing new gpt-5.1-codex and gpt-5.1-codex-mini model options in Codex: versions of GPT-5.1 optimized for long-running, agentic coding tasks in Codex and Codex-like coding agent harnesses.

    Starting today, the CLI and IDE Extension will default to gpt-5.1-codex on macOS and Linux and gpt-5.1 on Windows.

    If you have a model specified in your config.toml configuration file, you can instead try out gpt-5.1-codex for a new Codex CLI session using:

    codex --model gpt-5.1-codex

    You can also use the /model slash command in the CLI. In the Codex IDE Extension you can select GPT-5.1-Codex from the dropdown menu.

    If you want to switch for all sessions, you can change your default model to gpt-5.1-codex by updating your config.toml configuration file:

    model = "gpt-5.1-codex"
  • Codex

    Introducing GPT-5-Codex-Mini

    Today we are introducing a new gpt-5-codex-mini model option to Codex CLI and the IDE Extension. The model is a smaller, more cost-effective, but less capable version of gpt-5-codex that provides approximately 4x more usage as part of your ChatGPT subscription.

    Starting today, the CLI and IDE Extension will automatically suggest switching to gpt-5-codex-mini when you reach 90% of your 5-hour usage limit, to help you work longer without interruptions.

    You can try the model for a new Codex CLI session using:

    codex --model gpt-5-codex-mini

    You can also use the /model slash command in the CLI. In the Codex IDE Extension you can select GPT-5-Codex-Mini from the dropdown menu.

    Alternatively, you can change your default model to gpt-5-codex-mini by updating your config.toml configuration file:

    model = "gpt-5-codex-mini"
  • Codex

    GPT-5-Codex model update

    We’ve shipped a minor update to GPT-5-Codex:

    • More reliable file edits with apply_patch.
    • Fewer destructive actions such as git reset.
    • More collaborative behavior when encountering user edits in files.
    • 3% more efficient in time and usage.
  • Resources Apps SDK

    Resources updates

  • Codex

    Credits on ChatGPT Pro and Plus

    Codex users on ChatGPT Plus and Pro can now use on-demand credits for more Codex usage beyond what’s included in their plan. Learn more.

  • Codex

    Tag @Codex on GitHub Issues and PRs

    You can now tag @codex on a teammate’s pull request to ask clarifying questions, request a follow-up, or ask Codex to make changes. GitHub Issues now also support @codex mentions, so you can kick off tasks from any issue, without leaving your workflow.

    Codex responding to a GitHub pull request and issue after an @Codex mention.

  • Codex

    Codex is now GA

    Codex is now generally available with 3 new features — @Codex in Slack, Codex SDK, and new admin tools.

    @Codex in Slack

    You can now ask questions and assign tasks to Codex directly from Slack. See the Slack guide to get started.

    Codex SDK

    Integrate the same agent that powers the Codex CLI inside your own tools and workflows with the Codex SDK in TypeScript. With the new Codex GitHub Action, you can easily add Codex to CI/CD workflows. See the Codex SDK guide to get started.

    import { Codex } from "@openai/codex-sdk";

    // Create an agent and start a thread that keeps context across runs.
    const agent = new Codex();
    const thread = await agent.startThread();

    // Each run() is a turn in the same thread, so later prompts can build
    // on the results of earlier ones.
    const result = await thread.run("Explore this repo");
    console.log(result);

    const result2 = await thread.run("Propose changes");
    console.log(result2);

    New admin controls and analytics

    ChatGPT workspace admins can now edit or delete Codex Cloud environments. With managed config files, they can set safe defaults for CLI and IDE usage and monitor how Codex uses commands locally. New analytics dashboards help you track Codex usage and code review feedback. Learn more in the enterprise admin guide.

    Availability and pricing updates

    The Slack integration and Codex SDK are available to developers on ChatGPT Plus, Pro, Business, Edu, and Enterprise plans starting today, while the new admin features will be available to Business, Edu, and Enterprise. Beginning October 20, Codex Cloud tasks will count toward your Codex usage. Review the Codex pricing guide for plan-specific details.

  • Codex

    GPT-5-Codex in the API

    GPT-5-Codex is now available in the Responses API, and you can also use it with your API Key in the Codex CLI. We plan on regularly updating this model snapshot. It is available at the same price as GPT-5. You can learn more about pricing and rate limits for this model on our model page.

  • Codex

    Introducing GPT-5-Codex

    New model: GPT-5-Codex

    GPT-5-Codex is a version of GPT-5 further optimized for agentic coding in Codex. It’s available in the IDE extension and CLI when you sign in with your ChatGPT account. It also powers the cloud agent and Code Review in GitHub.

    To learn more about GPT-5-Codex and how it performs compared to GPT-5 on software engineering tasks, see our announcement blog post.

    Image outputs

    When working in the cloud on front-end engineering tasks, GPT-5-Codex can now display screenshots of the UI in Codex web for you to review. With image output, you can iterate on the design without needing to check out the branch locally.

    New in Codex CLI

    • You can now resume sessions where you left off with codex resume.
    • Context compaction automatically summarizes the session as it approaches the context window limit.

    Learn more in the latest release notes

  • Codex

    Late August update

    IDE extension (Compatible with VS Code, Cursor, Windsurf)

    Codex now runs in your IDE with an interactive UI for fast local iteration. Easily switch between modes and reasoning efforts.

    Sign in with ChatGPT (IDE & CLI)

    One-click authentication that removes the need for API keys and uses ChatGPT Enterprise credits.

    Move work between local ↔ cloud

    Hand off tasks to Codex web from the IDE with the ability to apply changes locally so you can delegate jobs without leaving your editor.

    Code Reviews

    Codex goes beyond static analysis. It checks a PR against its intent, reasons across the codebase and dependencies, and can run code to validate the behavior of changes.

  • Codex

    Mid August update

    Image inputs

    You can now attach images to your prompts in Codex web. This is great for asking Codex to implement frontend changes or follow up on whiteboarding sessions.

    Container caching

    Codex now caches containers to start new tasks and followups 90% faster, dropping the median start time from 48 seconds to 5 seconds. You can optionally configure a maintenance script to update the environment from its cached state to prepare for new tasks. See the docs for more.

    Automatic environment setup

    Now, environments without manual setup scripts automatically run the standard installation commands for common package managers like yarn, pnpm, npm, go mod, gradle, pip, poetry, uv, and cargo. This reduces test failures for new environments by 40%.

  • Codex

    Best of N

    Codex can now generate multiple responses simultaneously for a single task, helping you quickly explore possible solutions to pick the best approach.

    Fixes & improvements

    • Added some keyboard shortcuts and a page to explore them. Open it by pressing ⌘-/ on macOS and Ctrl+/ on other platforms.

    • Added a “branch” query parameter in addition to the existing “environment”, “prompt” and “tab=archived” parameters.

    • Added a loading indicator when downloading a repo during container setup.

    • Added support for cancelling tasks.

    • Fixed issues causing tasks to fail during setup.

    • Fixed issues running followups in environments where the setup script changes files that are gitignored.

    • Improved how the agent understands and reacts to network access restrictions.

    • Increased the update rate of text describing what Codex is doing.

    • Increased the limit for setup script duration to 20 minutes for Pro and Business users.

    • Polished code diffs: You can now option-click a code diff header to expand/collapse all of them.

  • Codex

    June update

    Agent internet access

    Now you can give Codex access to the internet during task execution to install dependencies, upgrade packages, run tests that need external resources, and more.

    Internet access is off by default. Plus, Pro, and Business users can enable it for specific environments, with granular control of which domains and HTTP methods Codex can access. Internet access for Enterprise users is coming soon.

    Learn more about usage and risks in the docs.

    Update existing PRs

    Now you can update existing pull requests when following up on a task.

    Voice dictation

    Now you can dictate tasks to Codex.

    Fixes & improvements

    • Added a link to this changelog from the profile menu.

    • Added support for binary files: When applying patches, all file operations are supported. When using PRs, only deleting or renaming binary files is supported for now.

    • Fixed an issue on iOS where follow-up tasks were shown duplicated in the task list.

    • Fixed an issue on iOS where pull request statuses were out of date.

    • Fixed an issue with follow-ups where environments were incorrectly started with the state from the first turn, rather than the most recent state.

    • Fixed internationalization of task events and logs.

    • Improved error messages for setup scripts.

    • Increased the limit on task diffs from 1 MB to 5 MB.

    • Increased the limit for setup script duration from 5 to 10 minutes.

    • Polished GitHub connection flow.

    • Re-enabled Live Activities on iOS after resolving an issue with missed notifications.

    • Removed the mandatory two-factor authentication requirement for users who sign in with SSO or social logins.

  • Codex

    Reworked environment page

    It’s now easier and faster to set up code execution.

    Fixes & improvements

    • Added a button to retry failed tasks

    • Added indicators to show that the agent runs without network access after setup

    • Added options to copy git patches after pushing a PR

    • Added support for unicode branch names

    • Fixed a bug where secrets were not piped to the setup script

    • Fixed creating branches when there’s a branch name conflict.

    • Fixed rendering diffs with multi-character emojis.

    • Improved error messages for starting tasks, running setup scripts, pushing PRs, and GitHub disconnections to be more specific and indicate how to resolve the error.

    • Improved onboarding for teams.

    • Polished how new tasks look while loading.

    • Polished the followup composer.

    • Reduced GitHub disconnects by 90%.

    • Reduced PR creation latency by 35%.

    • Reduced tool call latency by 50%.

    • Reduced task completion latency by 20%.

    • Started setting page titles to task names so Codex tabs are easier to tell apart.

    • Tweaked the system prompt so the agent knows it’s working without network access and can suggest that the user set up dependencies.

    • Updated the docs.

  • Codex

    Codex in the ChatGPT iOS app

    Start tasks, view diffs, and push PRs—while you’re away from your desk.