GPT-5.4 mini is now available in Codex as a fast, efficient model for lighter coding tasks and subagents.
It improves over GPT-5 mini across coding, reasoning, image understanding, and tool use while running more than 2x faster. In Codex, GPT-5.4 mini uses 30% as much of your included limits as GPT-5.4, so comparable tasks can last about 3.3x longer before you hit those limits.
GPT-5.4 mini is available in the Codex app, the CLI, the IDE extension, and Codex on the web, as well as in the API.
Use GPT-5.4 mini for codebase exploration, large-file review, processing supporting documents, and other less reasoning-intensive subagent work. For more complex planning, coordination, and final judgment, start with GPT-5.4.
To switch to GPT-5.4 mini:
- In the CLI, start a new thread with:
```
codex --model gpt-5.4-mini
```
Or use `/model` during a session.
- In the IDE extension, choose GPT-5.4 mini from the model selector in the composer.
- In the Codex app, choose GPT-5.4 mini from the model selector in the composer.
If you don’t see GPT-5.4 mini yet, update the CLI, IDE extension, or Codex app to the latest version.
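If you want GPT-5.4 mini to be your default rather than selecting it per session, the Codex CLI can read the model from its config file. A minimal sketch, assuming the standard `~/.codex/config.toml` location and the `model` key:

```toml
# ~/.codex/config.toml (assumed default config path)
# Make GPT-5.4 mini the default model for new Codex CLI sessions
model = "gpt-5.4-mini"
```

A per-session `codex --model gpt-5.4-mini` or `/model` selection would still override this default.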