Codex should work out of the box for most users, but you can tailor it to your needs with a wide range of configuration options.
Codex configuration file
The configuration file for Codex is located at ~/.codex/config.toml.
To access the configuration file when you are using the Codex IDE extension, click the gear icon in the top right corner of the extension and then click Codex Settings > Open config.toml.
This configuration file is shared between the CLI and the IDE extension and can be used to configure things like the default model, approval policies, sandbox settings or MCP servers that Codex should have access to.
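As a sketch, a shared config.toml combining several of these settings might look like the following. The MCP server name and launch command below are placeholder values for illustration, not a real server:

```toml
# Illustrative config.toml shared by the CLI and the IDE extension
model = "gpt-5"
approval_policy = "on-request"
sandbox_mode = "workspace-write"

# Placeholder MCP server entry; substitute the command for a server you use
[mcp_servers.docs]
command = "npx"
args = ["-y", "some-mcp-server"]
```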
High level configuration options
Codex provides a wide range of configuration options. Some of the most commonly changed settings are:
Default model
Pick which model Codex uses by default in both the CLI and IDE.
Using config.toml:

```toml
model = "gpt-5"
```

Using CLI arguments:

```shell
codex --model gpt-5
```
Model provider
Select the backend provider referenced by the active model. Be sure to define the provider in your config first.
Using config.toml:

```toml
model_provider = "ollama"
```

Using CLI arguments:

```shell
codex --config model_provider="ollama"
```
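A provider referenced this way needs a matching entry under `model_providers`. The snippet below is a sketch for a local Ollama install; the model name and base URL are example values you would adapt to your setup:

```toml
model_provider = "ollama"
model = "llama3"  # example model name served by your local instance

# Provider definition the model_provider key above refers to
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"  # Ollama's default OpenAI-compatible endpoint
```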
Approval prompts
Control when Codex pauses to ask before running generated commands.
Using config.toml:

```toml
approval_policy = "on-request"
```

Using CLI arguments:

```shell
codex --ask-for-approval on-request
```
Sandbox level
Adjust how much filesystem and network access Codex has while executing commands.
Using config.toml:

```toml
sandbox_mode = "workspace-write"
```

Using CLI arguments:

```shell
codex --sandbox workspace-write
```
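When using `workspace-write`, the sandbox can be tuned further via the `sandbox_workspace_write` table (its keys are listed in the configuration options table below). A sketch, with a placeholder path:

```toml
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
writable_roots = ["/path/to/scratch"]  # extra writable directories (placeholder path)
network_access = false                 # keep outbound network access blocked
```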
Reasoning depth
Tune how much reasoning effort the model applies when supported.
Using config.toml:

```toml
model_reasoning_effort = "high"
```

Using CLI arguments:

```shell
codex --config model_reasoning_effort="high"
```
Command environment
Restrict or expand which environment variables are forwarded to spawned commands.
Using config.toml:

```toml
[shell_environment_policy]
include_only = ["PATH", "HOME"]
```

Using CLI arguments:

```shell
codex --config shell_environment_policy.include_only='["PATH","HOME"]'
```
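A slightly fuller sketch of the policy table. Only `include_only` appears in this document; the commented-out keys are assumptions about additional filtering options and should be verified against your Codex version before use:

```toml
[shell_environment_policy]
# Forward only these variables to spawned commands
include_only = ["PATH", "HOME"]

# Assumed additional keys (verify before relying on them):
# exclude = ["*_TOKEN", "*_KEY"]   # drop variables matching these patterns
# set = { CI = "1" }               # force specific values into the environment
```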
Profiles
Profiles bundle a set of configuration values so you can jump between setups without editing config.toml each time. They currently apply to the Codex CLI.
Define profiles under [profiles.<name>] in config.toml and launch the CLI with codex --profile <name>:
```toml
# Root-level defaults, used when no profile overrides them
model = "gpt-5-codex"
approval_policy = "on-request"

[profiles.deep-review]
model = "gpt-5-pro"
model_reasoning_effort = "high"
approval_policy = "never"

[profiles.lightweight]
model = "gpt-4.1"
approval_policy = "untrusted"
```
Running `codex --profile deep-review` will use the `gpt-5-pro` model with high reasoning effort and an approval policy of `never`, meaning Codex will not pause for approval. Running `codex --profile lightweight` will use the `gpt-4.1` model with the `untrusted` approval policy. To make one profile the default, add `profile = "deep-review"` at the top level of config.toml; the CLI will load that profile unless you override it on the command line.
Values resolve in this order: explicit CLI flags (like --model) override everything, profile values come next, then root-level entries in config.toml, and finally the CLI’s built-in defaults. Use that precedence to layer common settings at the top level while letting each profile tweak just the fields that need to change.
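The precedence above can be sketched as layered dictionary merging, where later updates win. The helper below is illustrative of the resolution order, not Codex's actual implementation:

```python
# Sketch of Codex-style config precedence: CLI flags > profile values >
# root-level config.toml entries > built-in defaults.
# resolve_config and the sample values are hypothetical, for illustration only.

def resolve_config(cli_flags, profile, root, defaults):
    """Merge the four layers so higher-precedence sources override lower ones."""
    resolved = dict(defaults)   # lowest precedence: built-in defaults
    resolved.update(root)       # root-level config.toml entries
    resolved.update(profile)    # values from the active profile
    resolved.update(cli_flags)  # explicit CLI flags win over everything
    return resolved

defaults = {"model": "gpt-5", "approval_policy": "untrusted"}
root = {"approval_policy": "on-request"}
profile = {"model": "gpt-5-pro", "model_reasoning_effort": "high"}
cli = {"model": "gpt-5-codex"}

cfg = resolve_config(cli, profile, root, defaults)
# cfg: model from the CLI flag, approval_policy from root config,
# reasoning effort from the profile
```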
Feature flags
Optional and experimental capabilities are toggled via the [features] table in config.toml. If Codex emits a deprecation warning mentioning a legacy key (such as experimental_use_exec_command_tool), move that setting into [features] or launch the CLI with codex --enable <feature>.
```toml
[features]
streamable_shell = true    # enable the streamable exec tool
web_search_request = true  # allow the model to request web searches
# view_image_tool defaults to true; omit to keep defaults
```
Supported features
| Key | Default | Stage | Description |
|---|---|---|---|
| `unified_exec` | `false` | Experimental | Use the unified PTY-backed exec tool |
| `streamable_shell` | `false` | Experimental | Use the streamable exec-command/write-stdin pair |
| `rmcp_client` | `false` | Experimental | Enable OAuth support for streamable HTTP MCP servers |
| `apply_patch_freeform` | `false` | Beta | Include the freeform `apply_patch` tool |
| `view_image_tool` | `true` | Stable | Include the `view_image` tool |
| `web_search_request` | `false` | Stable | Allow the model to issue web searches |
| `experimental_sandbox_command_assessment` | `false` | Experimental | Enable model-based sandbox risk assessment |
| `ghost_commit` | `false` | Experimental | Create a ghost commit each turn |
| `enable_experimental_windows_sandbox` | `false` | Experimental | Use the Windows restricted-token sandbox |
Omit feature keys to keep their defaults.
Legacy booleans such as `experimental_use_exec_command_tool`, `experimental_use_unified_exec_tool`, `include_apply_patch_tool`, and similar `experimental_use_*` entries are deprecated; migrate them to the matching `[features].<key>` flag to avoid repeated warnings.
Personalizing the Codex IDE extension
In addition to configuring the underlying Codex agent through your config.toml file, you can also configure how you use the Codex IDE extension.
To see the list of available configuration options, click the gear icon in the top right corner of the extension and then click IDE settings.
To define your own keyboard shortcuts to trigger Codex or add something to the Codex context, you can click the gear icon in the top right corner of the extension and then click Keyboard shortcuts.
Configuration options
| Key | Type / Values | Details |
|---|---|---|
| `model` | string | Model to use (e.g., `gpt-5-codex`). |
| `model_provider` | string | Provider id from `model_providers` (default: `openai`). |
| `model_context_window` | number | Context window tokens available to the active model. |
| `model_max_output_tokens` | number | Maximum number of tokens Codex may request from the model. |
| `approval_policy` | `untrusted` \| `on-failure` \| `on-request` \| `never` | Controls when Codex pauses for approval before executing commands. |
| `sandbox_mode` | `read-only` \| `workspace-write` \| `danger-full-access` | Sandbox policy for filesystem and network access during command execution. |
| `sandbox_workspace_write.writable_roots` | `array<string>` | Additional writable roots when `sandbox_mode = "workspace-write"`. |
| `sandbox_workspace_write.network_access` | boolean | Allow outbound network access inside the workspace-write sandbox. |
| `sandbox_workspace_write.exclude_tmpdir_env_var` | boolean | Exclude `$TMPDIR` from writable roots in workspace-write mode. |
| `sandbox_workspace_write.exclude_slash_tmp` | boolean | Exclude `/tmp` from writable roots in workspace-write mode. |