Codex Models

Learn about the different models supported by Codex

gpt-5.1-codex

Optimized for long-running, agentic coding tasks in Codex.

codex -m gpt-5.1-codex

Available in the Codex CLI & SDK, the Codex IDE Extension, and Codex Cloud, with ChatGPT credits or API access. Default model on macOS and Linux.

ChatGPT plan usage limits (every 5h; usage limits are shared between all models and don't apply per model):
ChatGPT Plus: 45-225 local messages, 10-60 cloud tasks
ChatGPT Pro: 300-1,500 local messages, 50-400 cloud tasks
Also available on ChatGPT Business, ChatGPT Enterprise, and ChatGPT Edu plans.

API pricing (per 1M tokens): Input $1.25, Cached input $0.13, Output $10.00
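As a quick illustration of how the per-1M-token rates above translate into the cost of a single API request (the function and its argument names are just for this example, not part of any Codex API):

```python
def estimate_cost_usd(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Estimate API cost using the gpt-5.1-codex per-1M-token rates listed above."""
    PER_MILLION = 1_000_000
    return (
        input_tokens * 1.25 / PER_MILLION          # fresh input: $1.25 per 1M tokens
        + cached_input_tokens * 0.13 / PER_MILLION  # cached input: $0.13 per 1M tokens
        + output_tokens * 10.00 / PER_MILLION       # output: $10.00 per 1M tokens
    )

# e.g. 100k fresh input tokens, 400k cached input tokens, 20k output tokens
print(round(estimate_cost_usd(100_000, 400_000, 20_000), 4))
```

Note how heavily cached input is discounted: in this example the 400k cached tokens cost less than half as much as the 100k fresh ones.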
gpt-5.1-codex-mini

Smaller, more cost-effective, less-capable version of GPT-5.1-Codex.

codex -m gpt-5.1-codex-mini

Available in the Codex CLI & SDK, the Codex IDE Extension, and Codex Cloud, with ChatGPT credits or API access.

ChatGPT plan usage limits (every 5h; usage limits are shared between all models and don't apply per model):
ChatGPT Plus: 180-900 local messages, 40-240 cloud tasks
ChatGPT Pro: 1,200-6,000 local messages, 200-1,600 cloud tasks
Also available on ChatGPT Business, ChatGPT Enterprise, and ChatGPT Edu plans.

API pricing (per 1M tokens): Input $0.25, Cached input $0.03, Output $2.00
gpt-5.1

Great for coding and agentic tasks across domains.

codex -m gpt-5.1

Available in the Codex CLI & SDK, the Codex IDE Extension, and Codex Cloud, with ChatGPT credits or API access. Default model on Windows.

ChatGPT plan usage limits (every 5h; usage limits are shared between all models and don't apply per model):
ChatGPT Plus: 45-225 local messages, 10-60 cloud tasks
ChatGPT Pro: 300-1,500 local messages, 50-400 cloud tasks
Also available on ChatGPT Business, ChatGPT Enterprise, and ChatGPT Edu plans.

API pricing (per 1M tokens): Input $1.25, Cached input $0.13, Output $10.00

Configuring models

Configure your default local model

Both the Codex CLI and Codex IDE Extension use the same config.toml configuration file to set the default model.

To choose your default model, add a model entry to your config.toml. If no entry is set, your version of the Codex CLI or IDE Extension will pick a default model for you.

model="gpt-5.1-codex"

If you regularly switch between different models in the Codex CLI and want to control more than just the model setting, you can also create different Codex profiles.
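As a minimal sketch of what profiles can look like in config.toml (the profile names here are made up for this example, and the exact set of supported keys may differ between Codex versions):

```toml
# Default model used when no profile is selected
model = "gpt-5.1-codex"

# Example profiles (names are arbitrary); select one with `codex --profile <name>`
[profiles.deep_work]
model = "gpt-5.1-codex"

[profiles.quick_fixes]
model = "gpt-5.1-codex-mini"
```

Note that in TOML the top-level model key must appear before the first [profiles.*] table, or it would be parsed as part of that table.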

Temporarily choosing a different local model

In the Codex CLI, you can use the /model command during an active session to change the model. In the IDE Extension, use the model selector next to the input box to choose your model.

To start a brand-new Codex CLI session with a specific model, or to specify the model for codex exec, use the --model/-m flag:

codex -m gpt-5.1-codex-mini

Choosing your model for cloud tasks

There is currently no way to choose the model for Codex Cloud tasks; Codex Cloud currently uses gpt-5.1-codex.

Legacy models

gpt-5-codex

Version of GPT-5 tuned for long-running, agentic coding tasks. Succeeded by GPT-5.1-Codex.

codex -m gpt-5-codex
gpt-5-codex-mini

Smaller, more cost-effective version of GPT-5-Codex. Succeeded by GPT-5.1-Codex-Mini.

codex -m gpt-5-codex-mini
gpt-5

Reasoning model for coding and agentic tasks across domains. Succeeded by GPT-5.1.

codex -m gpt-5

Other models

Codex works best with the models listed above.

If you’re authenticating Codex with an API key, you can also point Codex at any model and provider that supports either the Chat Completions or Responses APIs to fit your specific use case.
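As an illustration, a custom provider entry in config.toml might look like the following sketch. The provider id, base URL, model name, and environment-variable name are all hypothetical placeholders, and the exact configuration keys may differ between Codex versions:

```toml
# Hypothetical example: point Codex at a Chat Completions-compatible provider.
model = "my-model"            # placeholder model name served by the custom provider
model_provider = "myprovider" # must match the provider id in the table below

[model_providers.myprovider]  # "myprovider" is a placeholder id
name = "My Provider"
base_url = "https://api.myprovider.example/v1"  # placeholder URL
env_key = "MYPROVIDER_API_KEY"                  # env var holding the API key (placeholder)
wire_api = "chat"                               # or "responses", if the provider supports it
```

The top-level model and model_provider keys must appear before the [model_providers.*] table, since TOML assigns bare keys after a table header to that table.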