Codex Pricing

Codex is included in your ChatGPT Free, Go, Plus, Pro, Business, Edu, or Enterprise plan.

Teams can now get started with Codex with no fixed monthly costs. For a limited time, eligible ChatGPT Business workspaces can earn up to $500 in credits when their team members start using Codex. View terms or get started.

Pricing options

Free

Explore Codex capabilities on quick coding tasks.

$0/month
Get Free

Go

Use Codex for lightweight coding tasks.

$8/month
Get Go

Plus

Power a few focused coding sessions each week.

$20/month
Get Plus
• Codex on the web, in the CLI, in the IDE extension, and on iOS
• Cloud-based integrations like automatic code review and Slack integration
• The latest models, including GPT-5.4 and GPT-5.3-Codex
• GPT-5.4-mini for up to 3.3x higher usage limits for local messages
• Flexibly extend usage with ChatGPT credits
• Other ChatGPT features as part of the Plus plan

Pro

Rely on Codex for daily full-time development.

$200/month
Get Pro
Everything in Plus and:
• Priority request processing
• Access to GPT-5.3-Codex-Spark (research preview), a fast Codex model for day-to-day coding tasks
• 6x higher usage limits for local and cloud tasks
• 10x more cloud-based code reviews
• Other ChatGPT features as part of the Pro plan

API Key

Great for automation in shared environments like CI.

• Codex in the CLI, SDK, or IDE extension
• No cloud-based features (GitHub code review, Slack, etc.)
• Delayed access to new models like GPT-5.3-Codex and GPT-5.3-Codex-Spark
• Pay only for the tokens Codex uses, based on API pricing

Frequently asked questions

What are the usage limits for my plan?

The number of Codex messages you can send depends on the model used, the size and complexity of your coding tasks, and whether you run them locally or in the cloud. Small scripts or routine functions may consume only a fraction of your allowance, while larger codebases, long-running tasks, or extended sessions that require Codex to hold more context will use significantly more per message.

| Model | Local Messages* / 5h | Cloud Tasks* / 5h | Code Reviews / week |
| --- | --- | --- | --- |
| GPT-5.4 | 33-168 | Not available | Not available |
| GPT-5.4-mini | 110-560 | Not available | Not available |
| GPT-5.3-Codex | 45-225 | 10-60 | 10-25 |

*The usage limits for local messages and cloud tasks share a five-hour window. Additional weekly limits may apply.

For Enterprise/Edu users, there are no fixed rate limits; usage scales with credits.

Enterprise and Edu plans without flexible pricing have the same per-seat usage limits as Plus for most features.

Speed configurations increase credit consumption for all applicable models, so they also use included limits faster. Details can be found here. GPT-5.3-Codex-Spark is in research preview for ChatGPT Pro users only, and isn’t available in the API at launch. Because it runs on specialized low-latency hardware, usage is governed by a separate usage limit that may adjust based on demand.

What happens when you hit usage limits?

ChatGPT Plus and Pro users who reach their usage limit can purchase additional credits to continue working without needing to upgrade their existing plan.

Business, Edu, and Enterprise plans with flexible pricing can purchase additional workspace credits to continue using Codex.

If you are approaching usage limits, you can also switch to the GPT-5.4-mini model to make your usage limits last longer.

All users may also run extra local tasks using an API key, with usage charged at standard API rates.

Where can I see my current usage limits?

You can find your current limits in the Codex usage dashboard. If you want to see your remaining limits during an active Codex CLI session, you can use /status.

How do credits work?

Credits let you continue using Codex after you reach your included usage limits. Usage draws down from your available credits based on the models and features you use, allowing you to extend work without interruption.

As of April 2nd, we’re moving pricing to API token-based rates. Credits remain the core pricing unit that customers purchase and consume, but usage is now metered by tokens, calculated as credits per million input tokens, cached input tokens, and output tokens your workspace consumes. Read about tokens here.

This format replaces average per-message estimates for your plan with a direct mapping between token usage and credits. It is most useful when you want a clearer view of how input, cached input, and output affect credit consumption.

Under this model, actual credit usage depends on the mix of input, cached input, and output tokens in each task. The new rate card is displayed in the table below, and currently applies to new and existing Business customers and to new Enterprise customers.

New and existing customers on all other plan types should continue to use the previous message-based rate card until we migrate you to the new rates in the coming weeks.

Select your plan type in the table below to see rates.

Credits per 1M tokens

| Model | Input tokens | Cached input tokens | Output tokens |
| --- | --- | --- | --- |
| GPT-5.4 | 62.50 credits | 6.250 credits | 375 credits |
| GPT-5.3-Codex | 43.75 credits | 4.375 credits | 350 credits |
| GPT-5.1-Codex-mini | 6.25 credits | 0.625 credits | 50 credits |
| GPT-5.4-Mini | 18.75 credits | 1.875 credits | 113 credits |
| GPT-5.2-Codex | 43.75 credits | 4.375 credits | 350 credits |
| GPT-5.2 | 43.75 credits | 4.375 credits | 350 credits |
| GPT-5.1-Codex-Max | 31.25 credits | 3.125 credits | 250 credits |
| GPT-5.3-Codex-Spark | research preview | | |

Fast mode consumes 2x as many credits.

Code review runs on GPT-5.3-Codex.

Speed configurations will increase credit consumption for all applicable models. Details can be found here.

Learn more about credits in ChatGPT Plus and Pro.

Learn more about credits in ChatGPT Business, Enterprise, and Edu.
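To make the token-based rate card concrete, here is a small worked example. This is a sketch: the rates are taken from the GPT-5.3-Codex row of the table above, but the token counts are hypothetical and real tasks will vary.

```python
# Worked example: credit cost under the token-based rate card.
# Rates are the GPT-5.3-Codex row above (credits per 1M tokens);
# the token counts below are made up for illustration.

RATE_INPUT = 43.75    # credits per 1M input tokens
RATE_CACHED = 4.375   # credits per 1M cached input tokens
RATE_OUTPUT = 350.0   # credits per 1M output tokens

def credit_cost(input_tokens, cached_input_tokens, output_tokens):
    """Total credits consumed for one task: scale each count to
    millions of tokens and multiply by its per-million rate."""
    return (input_tokens / 1e6 * RATE_INPUT
            + cached_input_tokens / 1e6 * RATE_CACHED
            + output_tokens / 1e6 * RATE_OUTPUT)

# Hypothetical task: 200k fresh input, 800k cached input, 30k output tokens.
print(round(credit_cost(200_000, 800_000, 30_000), 2))  # → 22.75
```

Note how heavily output tokens weigh in: at 350 credits per million they dominate the fresh and cached input cost in this example, which is why trimming verbose output matters.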

What counts as Code Review usage?

Code Review usage applies only when Codex runs reviews through GitHub, for example when you tag @Codex for review in a pull request or enable automatic reviews on your repository. Reviews run locally or outside of GitHub count toward your general usage limits.

What can I do to make my usage limits last longer?

The usage limits and credits above are average rates. You can try the following tips to maximize your limits:

• Control the size of your prompts. Be precise with the instructions you give Codex, but remove unnecessary context.
• Reduce the size of your AGENTS.md. If you work on a larger project, you can control how much context you inject through AGENTS.md files by nesting them within your repository.
• Limit the number of MCP servers you use. Every MCP server you add to Codex adds more context to your messages and uses more of your limit. Disable MCP servers when you don’t need them.
• Switch to GPT-5.4-mini for routine tasks. Using the mini model should extend your local-message usage limits by roughly 2.5x to 3.3x, depending on the model you switch from.
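As an illustration of the nested-AGENTS.md tip, a layout like the following (the paths are hypothetical) keeps the root file short and scopes subproject instructions to the directories where you actually work:

```
repo/
├── AGENTS.md                  # short, repo-wide guidance only
├── services/
│   └── api/
│       └── AGENTS.md          # API-specific instructions for work in this subtree
└── web/
    └── AGENTS.md              # frontend-specific instructions
```

The idea is that context is injected from the files relevant to the task at hand rather than one large top-level file on every message.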