Teams can now get started with Codex with no fixed monthly costs. For a limited time, eligible ChatGPT Business workspaces can earn up to $500 in credits when their team members start using Codex. View terms or get started.
Pricing options
- **Free:** Explore Codex capabilities on quick coding tasks.
- **Go:** Use Codex for lightweight coding tasks.
- **Plus:** Power a few focused coding sessions each week.
- **Pro:** Rely on Codex for daily full-time development.
- **API Key:** Great for automation in shared environments like CI.
- **Business:** Bring Codex into your startup or growing business.
- **Enterprise & Edu:** Unlock Codex for your entire organization with enterprise-grade functionality.
Frequently asked questions
What are the usage limits for my plan?
The number of Codex messages you can send depends on the model used, the size and complexity of your coding tasks, and whether you run them locally or in the cloud. Small scripts or routine functions may consume only a fraction of your allowance, while larger codebases, long-running tasks, or extended sessions that require Codex to hold more context will use significantly more per message.
| Model | Local Messages* / 5h | Cloud Tasks* / 5h | Code Reviews / week |
|---|---|---|---|
| GPT-5.4 | 33-168 | Not available | Not available |
| GPT-5.4-mini | 110-560 | Not available | Not available |
| GPT-5.3-Codex | 45-225 | 10-60 | 10-25 |

*The usage limits for local messages and cloud tasks share a five-hour window. Additional weekly limits may apply.

For Enterprise/Edu users, there are no fixed rate limits; usage scales with credits. Enterprise and Edu plans without flexible pricing have the same per-seat usage limits as Plus for most features.
| Model | Local Messages* / 5h | Cloud Tasks* / 5h | Code Reviews / week |
|---|---|---|---|
| GPT-5.4 | 223-1120 | Not available | Not available |
| GPT-5.4-mini | 743-3733 | Not available | Not available |
| GPT-5.3-Codex | 300-1500 | 50-400 | 100-250 |

*The usage limits for local messages and cloud tasks share a five-hour window. Additional weekly limits may apply.
| Model | Local Messages* / 5h | Cloud Tasks* / 5h | Code Reviews / week |
|---|---|---|---|
| GPT-5.4 | 15-60 | Not available | Not available |
| GPT-5.4-mini | 40-200 | Not available | Not available |
| GPT-5.3-Codex | 20-90 | 5-40 | 15-30 |

*The usage limits for local messages and cloud tasks share a five-hour window. Additional weekly limits may apply.
| Model | Local Messages* / 5h | Cloud Tasks* / 5h | Code Reviews / week |
|---|---|---|---|
| GPT-5.4 | Not available | Not available | |
| GPT-5.4-mini | Not available | Not available | |
| GPT-5.3-Codex | Not available | Not available | |

*The usage limits for local messages and cloud tasks share a five-hour window. Additional weekly limits may apply.
Speed configurations increase credit consumption for all applicable models, so they also use included limits faster. Details can be found here. GPT-5.3-Codex-Spark is in research preview for ChatGPT Pro users only, and isn’t available in the API at launch. Because it runs on specialized low-latency hardware, usage is governed by a separate usage limit that may adjust based on demand.
What happens when you hit usage limits?
ChatGPT Plus and Pro users who reach their usage limit can purchase additional credits to continue working without needing to upgrade their existing plan.
Business, Edu, and Enterprise plans with flexible pricing can purchase additional workspace credits to continue using Codex.
If you are approaching usage limits, you can also switch to the GPT-5.4-mini model to make your usage limits last longer.
All users may also run extra local tasks using an API key, with usage charged at standard API rates.
Where can I see my current usage limits?
You can find your current limits in the Codex usage dashboard. If you want to see your remaining limits during an active Codex CLI session, you can use /status.
How do credits work?
Credits let you continue using Codex after you reach your included usage limits. Usage draws down from your available credits based on the models and features you use, allowing you to extend work without interruption.
As of April 2nd, we’re moving pricing to API token-based rates. Credits remain the core pricing unit that customers purchase and consume, but usage is now measured in tokens, with separate credit rates per million input tokens, cached input tokens, and output tokens your workspace consumes. Read about tokens here.
This format replaces average per-message estimates for your plan with a direct mapping between token usage and credits. It is most useful when you want a clearer view of how input, cached input, and output affect credit consumption.
Under this model, actual credit usage depends on the mix of input, cached input, and output tokens in each task. The new rate card is displayed in the table below, and is currently applicable to new and existing Business customers, and new Enterprise customers.
New and existing customers on all other plan types should continue to use the previous message-based rate card until we migrate them to the new rates in the coming weeks.
Select your appropriate plan type in the table below to see rates.
| Credits per 1M tokens | Input tokens | Cached input tokens | Output tokens |
|---|---|---|---|
| GPT-5.4 | 62.50 credits | 6.250 credits | 375 credits |
| GPT-5.3-Codex | 43.75 credits | 4.375 credits | 350 credits |
| GPT-5.1-Codex-mini | 6.25 credits | 0.625 credits | 50 credits |
| GPT-5.4-mini | 18.75 credits | 1.875 credits | 113 credits |
| GPT-5.2-Codex | 43.75 credits | 4.375 credits | 350 credits |
| GPT-5.2 | 43.75 credits | 4.375 credits | 350 credits |
| GPT-5.1-Codex-Max | 31.25 credits | 3.125 credits | 250 credits |
| GPT-5.3-Codex-Spark | Research preview | | |
Fast mode consumes 2x as many credits. Code review runs on GPT-5.3-Codex.
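As a sketch of how the token-based rate card translates into credits, the snippet below computes the cost of one hypothetical task using the GPT-5.3-Codex row above (the token counts are made up for illustration):

```python
# Credit rates per 1M tokens, taken from the GPT-5.3-Codex row of the rate card.
RATES = {"input": 43.75, "cached_input": 4.375, "output": 350.0}

def credits_used(input_toks: int, cached_toks: int, output_toks: int,
                 rates: dict = RATES) -> float:
    """Credit cost = sum over token types of (tokens / 1_000_000) * rate."""
    return (input_toks * rates["input"]
            + cached_toks * rates["cached_input"]
            + output_toks * rates["output"]) / 1_000_000

# Hypothetical task: 200k fresh input, 50k cached input, 20k output tokens.
print(credits_used(200_000, 50_000, 20_000))  # 15.96875 credits
```

Note how output tokens dominate the cost even at much smaller counts, since the output rate is roughly 8x the input rate for this model.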
| | Unit | GPT-5.4 | GPT-5.3-Codex | GPT-5.1-Codex-mini |
|---|---|---|---|---|
| Local Tasks | 1 message | ~7 credits | ~5 credits | ~1 credit |
| Cloud Tasks | 1 message | ~34 credits | ~25 credits | Not available |
| Code Review | 1 pull request | ~34 credits | ~25 credits | Not available |

Fast mode consumes 2x as many credits. These averages also apply to legacy GPT-5.2, GPT-5.2-Codex, GPT-5.1, GPT-5.1-Codex-Max, GPT-5, GPT-5-Codex, and GPT-5-Codex-Mini.
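To estimate how far a credit balance stretches under these averages, you can divide it by the per-action cost (doubling for fast mode). A minimal sketch using the GPT-5.3-Codex column above; the 500-credit figure is just an example balance:

```python
# Approximate per-action credit costs from the averages table (GPT-5.3-Codex column).
AVG = {"local_message": 5, "cloud_task": 25, "code_review_pr": 25}

def actions_for(credits: int, kind: str, fast_mode: bool = False,
                avg: dict = AVG) -> int:
    """Rough count of actions a credit balance covers; fast mode costs 2x."""
    per_action = avg[kind] * (2 if fast_mode else 1)
    return credits // per_action

print(actions_for(500, "local_message"))              # ~100 local messages
print(actions_for(500, "cloud_task", fast_mode=True))  # ~10 cloud tasks
```

These are averages, so treat the result as a ballpark: a single large-context session can cost several times the per-message figure.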
Speed configurations increase credit consumption for all applicable models. Details can be found here.
Learn more about credits in ChatGPT Plus and Pro.
Learn more about credits in ChatGPT Business, Enterprise, and Edu.
What counts as Code Review usage?
Code Review usage applies only when Codex runs reviews through GitHub, for example when you tag @Codex for review in a pull request or enable automatic reviews on your repository. Reviews run locally or outside of GitHub count toward your general usage limits.
What can I do to make my usage limits last longer?
The usage limits and credits above are average rates. You can try the following tips to maximize your limits:
- Control the size of your prompts. Be precise with the instructions you give Codex, but remove unnecessary context.
- Reduce the size of your AGENTS.md. If you work on a larger project, you can control how much context you inject through AGENTS.md files by nesting them within your repository.
- Limit the number of MCP servers you use. Every MCP server you add to Codex adds more context to your messages and uses more of your limit. Disable MCP servers when you don’t need them.
- Switch to GPT-5.4-mini for routine tasks. Using the mini model should extend your local-message usage limits by roughly 2.5x to 3.3x, depending on the model you switch from.
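The "roughly 2.5x to 3.3x" extension factor can be checked against the local-message ranges in the usage-limit tables above. A small sketch using one tier's figures (the exact ranges vary by plan):

```python
# Local-message ranges per 5h from one tier of the usage-limit tables.
mini      = (110, 560)  # GPT-5.4-mini
gpt_5_4   = (33, 168)   # GPT-5.4
codex_5_3 = (45, 225)   # GPT-5.3-Codex

def extension(mini_range: tuple, other_range: tuple) -> tuple:
    """Ratio of mini's range to another model's range, at both ends."""
    return (round(mini_range[0] / other_range[0], 1),
            round(mini_range[1] / other_range[1], 1))

print(extension(mini, gpt_5_4))    # (3.3, 3.3) -> ~3.3x more messages
print(extension(mini, codex_5_3))  # (2.4, 2.5) -> ~2.5x more messages
```

In other words, switching from GPT-5.4 buys the most headroom (~3.3x), while switching from GPT-5.3-Codex still roughly 2.5x's your local-message allowance.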