Configuring Codex

Learn how to configure your local Codex client

Applies to the Codex CLI and the Codex IDE extension.

Codex should work out of the box for most users, but when you want to tailor its behavior to your needs, a wide range of configuration options is available.

Codex configuration file

The configuration file for Codex is located at ~/.codex/config.toml.

To access the configuration file from the Codex IDE extension, click the gear icon in the top right corner of the extension and then click Codex Settings > Open config.toml.

This configuration file is shared between the CLI and the IDE extension and can be used to configure things like the default model, approval policies, sandbox settings or MCP servers that Codex should have access to.
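For example, a minimal config.toml that sets a default model, an approval policy, and a sandbox level might look like this (the values shown are illustrative, not recommendations):

```toml
# ~/.codex/config.toml
model = "gpt-5"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
```

Each of these keys is covered individually below.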

High-level configuration options

Codex provides a wide range of configuration options. Some of the most commonly changed settings are:

Default model

Pick which model Codex uses by default in both the CLI and IDE.

Using config.toml:

model = "gpt-5"

Using CLI arguments:

codex --model gpt-5

Model provider

Select the backend provider referenced by the active model. Be sure to define the provider in your config first.

Using config.toml:

model_provider = "ollama"

Using CLI arguments:

codex --config model_provider="ollama"

Approval prompts

Control when Codex pauses to ask before running generated commands.

Using config.toml:

approval_policy = "on-request"

Using CLI arguments:

codex --ask-for-approval on-request

Sandbox level

Adjust how much filesystem and network access Codex has while executing commands.

Using config.toml:

sandbox_mode = "workspace-write"

Using CLI arguments:

codex --sandbox workspace-write

Reasoning depth

Tune how much reasoning effort the model applies when supported.

Using config.toml:

model_reasoning_effort = "high"

Using CLI arguments:

codex --config model_reasoning_effort="high"

Command environment

Restrict or expand which environment variables are forwarded to spawned commands.

Using config.toml:

[shell_environment_policy]
include_only = ["PATH", "HOME"]

Using CLI arguments:

codex --config shell_environment_policy.include_only='["PATH","HOME"]'

Profiles

Profiles bundle a set of configuration values so you can jump between setups without editing config.toml each time. They currently apply to the Codex CLI.

Define profiles under [profiles.<name>] in config.toml and launch the CLI with codex --profile <name>:

model = "gpt-5-codex"
approval_policy = "on-request"

[profiles.deep-review]
model = "gpt-5-pro"
model_reasoning_effort = "high"
approval_policy = "never"

[profiles.lightweight]
model = "gpt-4.1"
approval_policy = "untrusted"

Running codex --profile deep-review uses the gpt-5-pro model with high reasoning effort and never pauses for approval. Running codex --profile lightweight uses the gpt-4.1 model with the untrusted approval policy. To make one profile the default, add profile = "deep-review" at the top level of config.toml; the CLI will load that profile unless you override it on the command line.

Values resolve in this order: explicit CLI flags (like --model) override everything, profile values come next, then root-level entries in config.toml, and finally the CLI’s built-in defaults. Use that precedence to layer common settings at the top level while letting each profile tweak just the fields that need to change.
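As a toy illustration of that precedence, the lookup order can be sketched like this (a sketch of the resolution order described above, not Codex's actual implementation):

```python
def resolve(key, cli_flags, profile, root, defaults):
    """Return the first value found in precedence order:
    CLI flag > profile value > root config.toml entry > built-in default."""
    for layer in (cli_flags, profile, root, defaults):
        if key in layer:
            return layer[key]
    raise KeyError(key)

# Root config sets one model, the profile overrides it,
# and an explicit CLI flag overrides both.
root = {"model": "gpt-5-codex", "approval_policy": "on-request"}
profile = {"model": "gpt-5-pro"}
print(resolve("model", {"model": "gpt-5"}, profile, root, {}))  # gpt-5
print(resolve("model", {}, profile, root, {}))                  # gpt-5-pro
print(resolve("approval_policy", {}, profile, root, {}))        # on-request
```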

Feature flags

Optional and experimental capabilities are toggled via the [features] table in config.toml. If Codex emits a deprecation warning mentioning a legacy key (such as experimental_use_exec_command_tool), move that setting into [features] or launch the CLI with codex --enable <feature>.

[features]
streamable_shell = true          # enable the streamable exec tool
web_search_request = true        # allow the model to request web searches
# view_image_tool defaults to true; omit to keep defaults

Supported features

| Key | Default | Stage | Description |
| --- | --- | --- | --- |
| `unified_exec` | false | Experimental | Use the unified PTY-backed exec tool |
| `streamable_shell` | false | Experimental | Use the streamable exec-command/write-stdin pair |
| `rmcp_client` | false | Experimental | Enable OAuth support for streamable HTTP MCP servers |
| `apply_patch_freeform` | false | Beta | Include the freeform apply_patch tool |
| `view_image_tool` | true | Stable | Include the view_image tool |
| `web_search_request` | false | Stable | Allow the model to issue web searches |
| `experimental_sandbox_command_assessment` | false | Experimental | Enable model-based sandbox risk assessment |
| `ghost_commit` | false | Experimental | Create a ghost commit each turn |
| `enable_experimental_windows_sandbox` | false | Experimental | Use the Windows restricted-token sandbox |

Omit feature keys to keep their defaults.
Legacy booleans such as experimental_use_exec_command_tool, experimental_use_unified_exec_tool, include_apply_patch_tool, and similar experimental_use_* entries are deprecated—migrate them to the matching [features].<key> flag to avoid repeated warnings.
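For example, migrating the deprecated exec-command boolean into [features] (a sketch; the mapping follows the deprecation guidance in the reference table):

```toml
# Before (deprecated; triggers a warning on every launch):
# experimental_use_exec_command_tool = true

# After:
[features]
unified_exec = true
```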

Enabling features quickly

  • In config.toml: add feature_name = true under [features].
  • One-off on the CLI: codex --enable feature_name.
  • Multiple flags: codex --enable feature_a --enable feature_b.
  • Disable explicitly by setting the key to false in config.toml.

Advanced configuration

Custom model providers

Define additional providers and point model_provider at them:

model = "gpt-4o"
model_provider = "openai-chat-completions"

[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
query_params = {}

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"

[model_providers.mistral]
name = "Mistral"
base_url = "https://api.mistral.ai/v1"
env_key = "MISTRAL_API_KEY"

Add request headers when needed:

[model_providers.example]
http_headers = { "X-Example-Header" = "example-value" }
env_http_headers = { "X-Example-Features" = "EXAMPLE_FEATURES" }

Azure provider & per-provider tuning

[model_providers.azure]
name = "Azure"
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"

[model_providers.openai]
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000

Model reasoning, verbosity, and limits

model_reasoning_summary = "none"          # disable summaries
model_verbosity = "low"                   # shorten responses on Responses API providers
model_supports_reasoning_summaries = true # force reasoning on custom providers
model_context_window = 128000             # override when Codex doesn't know the window
model_max_output_tokens = 4096            # cap completion length

model_verbosity applies only to providers using the Responses API; Chat Completions providers will ignore the setting.

Approval policies and sandbox modes

Pick approval strictness (affects when Codex pauses) and sandbox level (affects file/network access). See Sandbox & approvals for deeper examples.

approval_policy = "untrusted"   # other options: on-request, on-failure, never
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
exclude_tmpdir_env_var = false  # allow $TMPDIR
exclude_slash_tmp = false       # allow /tmp
writable_roots = ["/Users/YOU/.pyenv/shims"]
network_access = false          # opt in to outbound network

Disable sandboxing entirely (use only if your environment already isolates processes):

sandbox_mode = "danger-full-access"

Shell environment templates

shell_environment_policy controls which environment variables Codex passes to any subprocess it launches (for example, when running a tool-command the model proposes). Start from a clean slate (inherit = "none") or a trimmed set (inherit = "core"), then layer on excludes, includes, and overrides to avoid leaking secrets while still providing the paths, keys, or flags your tasks need.

[shell_environment_policy]
inherit = "none"
set = { PATH = "/usr/bin", MY_FLAG = "1" }
ignore_default_excludes = false
exclude = ["AWS_*", "AZURE_*"]
include_only = ["PATH", "HOME"]

Patterns are case-insensitive globs (*, ?, [A-Z]); ignore_default_excludes = false keeps the automatic KEY/SECRET/TOKEN filter before your includes/excludes run.
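The filtering described above can be sketched roughly as follows. This is a toy model of the policy, not Codex's actual implementation, and the exact default credential-filter patterns are assumed here:

```python
import fnmatch

# Assumed shape of the automatic credential filter described above.
DEFAULT_EXCLUDES = ["*KEY*", "*SECRET*", "*TOKEN*"]

def filter_env(env, exclude=(), include_only=(), set_vars=None,
               ignore_default_excludes=False):
    def matches(name, patterns):
        # Patterns are case-insensitive globs, as in shell_environment_policy.
        return any(fnmatch.fnmatch(name.upper(), p.upper()) for p in patterns)

    out = {}
    for name, value in env.items():
        if not ignore_default_excludes and matches(name, DEFAULT_EXCLUDES):
            continue                      # automatic KEY/SECRET/TOKEN filter
        if matches(name, exclude):
            continue                      # user-specified excludes
        if include_only and not matches(name, include_only):
            continue                      # whitelist, when present
        out[name] = value
    out.update(set_vars or {})            # explicit overrides applied last
    return out
```

With exclude = ["AWS_*"] and include_only = ["PATH", "HOME"], only PATH and HOME survive, and credential-like names are dropped before the user filters even run.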

MCP servers

See the dedicated MCP guide for full server setups and toggle descriptions. Below is a minimal STDIO example using the Context7 MCP server:

[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp"]

Observability and telemetry

Enable OpenTelemetry (Otel) log export to track Codex runs (API requests, SSE/events, prompts, tool approvals/results). Disabled by default; opt in via [otel]:

[otel]
environment = "staging"   # defaults to "dev"
exporter = "none"         # set to otlp-http or otlp-grpc to send events
log_user_prompt = false   # redact user prompts unless explicitly enabled

Choose an exporter:

[otel]
exporter = { otlp-http = {
  endpoint = "https://otel.example.com/v1/logs",
  protocol = "binary",
  headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}
[otel]
exporter = { otlp-grpc = {
  endpoint = "https://otel.example.com:4317",
  headers = { "x-otlp-meta" = "abc123" }
}}

If exporter = "none", Codex records events but sends nothing. Exporters batch asynchronously and flush on shutdown. Event metadata includes service name, CLI version, env tag, conversation id, model, sandbox/approval settings, and per-event fields (see Config reference table below).

Notifications

Use notify to trigger an external program whenever Codex emits supported events (today: agent-turn-complete). This is handy for desktop toasts, chat webhooks, CI updates, or any side-channel alerting that the built-in TUI notifications don’t cover.

notify = ["python3", "/path/to/notify.py"]

Example notify.py (truncated) that reacts to agent-turn-complete:

#!/usr/bin/env python3
import json, subprocess, sys

def main() -> int:
    notification = json.loads(sys.argv[1])
    if notification.get("type") != "agent-turn-complete":
        return 0
    title = f"Codex: {notification.get('last-assistant-message', 'Turn Complete!')}"
    message = " ".join(notification.get("input-messages", []))
    subprocess.check_output([
        "terminal-notifier",
        "-title", title,
        "-message", message,
        "-group", "codex-" + notification.get("thread-id", ""),
        "-activate", "com.googlecode.iterm2",
    ])
    return 0

if __name__ == "__main__":
    sys.exit(main())

Place the script somewhere on disk and point notify to it. For lighter in-terminal alerts, toggle tui.notifications instead.
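For the in-terminal variant, tui.notifications accepts either a boolean or a list of event types; for example (a sketch based on the tui.notifications key in the reference table):

```toml
[tui]
notifications = ["agent-turn-complete"]
```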

Personalizing the Codex IDE Extension

In addition to configuring the underlying Codex agent through your config.toml file, you can also configure the way you use the Codex IDE extension.

To see the list of available configuration options, click the gear icon in the top right corner of the extension and then click IDE settings.

To define your own keyboard shortcuts to trigger Codex or add something to the Codex context, you can click the gear icon in the top right corner of the extension and then click Keyboard shortcuts.

Configuration options

| Key | Type / Values | Details |
| --- | --- | --- |
| `model` | string | Model to use (e.g., `gpt-5-codex`). |
| `model_provider` | string | Provider id from `model_providers` (default: `openai`). |
| `model_context_window` | number | Context window tokens available to the active model. |
| `model_max_output_tokens` | number | Maximum number of tokens Codex may request from the model. |
| `approval_policy` | untrusted \| on-failure \| on-request \| never | Controls when Codex pauses for approval before executing commands. |
| `sandbox_mode` | read-only \| workspace-write \| danger-full-access | Sandbox policy for filesystem and network access during command execution. |
| `sandbox_workspace_write.writable_roots` | array<string> | Additional writable roots when `sandbox_mode = "workspace-write"`. |
| `sandbox_workspace_write.network_access` | boolean | Allow outbound network access inside the workspace-write sandbox. |
| `sandbox_workspace_write.exclude_tmpdir_env_var` | boolean | Exclude `$TMPDIR` from writable roots in workspace-write mode. |
| `sandbox_workspace_write.exclude_slash_tmp` | boolean | Exclude `/tmp` from writable roots in workspace-write mode. |
| `notify` | array<string> | Command invoked for notifications; receives a JSON payload from Codex. |
| `instructions` | string | Reserved for future use; prefer `experimental_instructions_file` or `AGENTS.md`. |
| `mcp_servers.<id>.command` | string | Launcher command for an MCP stdio server. |
| `mcp_servers.<id>.args` | array<string> | Arguments passed to the MCP stdio server command. |
| `mcp_servers.<id>.env` | map<string,string> | Environment variables forwarded to the MCP stdio server. |
| `mcp_servers.<id>.env_vars` | array<string> | Additional environment variables to whitelist for an MCP stdio server. |
| `mcp_servers.<id>.cwd` | string | Working directory for the MCP stdio server process. |
| `mcp_servers.<id>.url` | string | Endpoint for an MCP streamable HTTP server. |
| `mcp_servers.<id>.bearer_token_env_var` | string | Environment variable sourcing the bearer token for an MCP HTTP server. |
| `mcp_servers.<id>.http_headers` | map<string,string> | Static HTTP headers included with each MCP HTTP request. |
| `mcp_servers.<id>.env_http_headers` | map<string,string> | HTTP headers populated from environment variables for an MCP HTTP server. |
| `mcp_servers.<id>.enabled` | boolean | Disable an MCP server without removing its configuration. |
| `mcp_servers.<id>.startup_timeout_sec` | number | Override the default 10s startup timeout for an MCP server. |
| `mcp_servers.<id>.tool_timeout_sec` | number | Override the default 60s per-tool timeout for an MCP server. |
| `mcp_servers.<id>.enabled_tools` | array<string> | Allow list of tool names exposed by the MCP server. |
| `mcp_servers.<id>.disabled_tools` | array<string> | Deny list applied after `enabled_tools` for the MCP server. |
| `features.unified_exec` | boolean | Use the unified PTY-backed exec tool (experimental). |
| `features.streamable_shell` | boolean | Switch to the streamable exec command/write-stdin tool pair (experimental). |
| `features.rmcp_client` | boolean | Enable the Rust MCP client to unlock OAuth for HTTP servers (experimental). |
| `features.apply_patch_freeform` | boolean | Expose the freeform `apply_patch` tool (beta). |
| `features.view_image_tool` | boolean | Allow Codex to attach local images via the `view_image` tool (stable; on by default). |
| `features.web_search_request` | boolean | Allow the model to issue web searches (stable). |
| `features.experimental_sandbox_command_assessment` | boolean | Enable model-based sandbox risk assessment (experimental). |
| `features.ghost_commit` | boolean | Create a ghost commit on each turn (experimental). |
| `features.enable_experimental_windows_sandbox` | boolean | Run the Windows restricted-token sandbox (experimental). |
| `experimental_use_rmcp_client` | boolean | Deprecated; replace with `[features].rmcp_client` or `codex --enable rmcp_client`. |
| `model_providers.<id>.name` | string | Display name for a custom model provider. |
| `model_providers.<id>.base_url` | string | API base URL for the model provider. |
| `model_providers.<id>.env_key` | string | Environment variable supplying the provider API key. |
| `model_providers.<id>.wire_api` | chat \| responses | Protocol used by the provider (defaults to `chat` if omitted). |
| `model_providers.<id>.query_params` | map<string,string> | Extra query parameters appended to provider requests. |
| `model_providers.<id>.http_headers` | map<string,string> | Static HTTP headers added to provider requests. |
| `model_providers.<id>.env_http_headers` | map<string,string> | HTTP headers populated from environment variables when present. |
| `model_providers.<id>.request_max_retries` | number | Retry count for HTTP requests to the provider (default: 4). |
| `model_providers.<id>.stream_max_retries` | number | Retry count for SSE streaming interruptions (default: 5). |
| `model_providers.<id>.stream_idle_timeout_ms` | number | Idle timeout for SSE streams in milliseconds (default: 300000). |
| `model_reasoning_effort` | minimal \| low \| medium \| high | Adjust reasoning effort for supported models (Responses API only). |
| `model_reasoning_summary` | auto \| concise \| detailed \| none | Select reasoning summary detail or disable summaries entirely. |
| `model_verbosity` | low \| medium \| high | Control GPT-5 Responses API verbosity (defaults to `medium`). |
| `model_supports_reasoning_summaries` | boolean | Force Codex to send reasoning metadata even for unknown models. |
| `model_reasoning_summary_format` | none \| experimental | Override the format of reasoning summaries (experimental). |
| `shell_environment_policy.inherit` | all \| core \| none | Baseline environment inheritance when spawning subprocesses. |
| `shell_environment_policy.ignore_default_excludes` | boolean | Keep variables containing KEY/SECRET/TOKEN before other filters run. |
| `shell_environment_policy.exclude` | array<string> | Glob patterns for removing environment variables after the defaults. |
| `shell_environment_policy.include_only` | array<string> | Whitelist of patterns; when set only matching variables are kept. |
| `shell_environment_policy.set` | map<string,string> | Explicit environment overrides injected into every subprocess. |
| `project_doc_max_bytes` | number | Maximum bytes read from `AGENTS.md` when building project instructions. |
| `project_doc_fallback_filenames` | array<string> | Additional filenames to try when `AGENTS.md` is missing. |
| `profile` | string | Default profile applied at startup (equivalent to `--profile`). |
| `profiles.<name>.*` | various | Profile-scoped overrides for any of the supported configuration keys. |
| `history.persistence` | save-all \| none | Control whether Codex saves session transcripts to history.jsonl. |
| `history.max_bytes` | number | Reserved for future use; currently not enforced. |
| `file_opener` | vscode \| vscode-insiders \| windsurf \| cursor \| none | URI scheme used to open citations from Codex output (default: `vscode`). |
| `otel.environment` | string | Environment tag applied to emitted OpenTelemetry events (default: `dev`). |
| `otel.exporter` | none \| otlp-http \| otlp-grpc | Select the OpenTelemetry exporter and provide any endpoint metadata. |
| `otel.log_user_prompt` | boolean | Opt in to exporting raw user prompts with OpenTelemetry logs. |
| `tui` | table | TUI-specific options such as enabling inline desktop notifications. |
| `tui.notifications` | boolean \| array<string> | Enable TUI notifications; optionally restrict to specific event types. |
| `hide_agent_reasoning` | boolean | Suppress reasoning events in both the TUI and `codex exec` output. |
| `show_raw_agent_reasoning` | boolean | Surface raw reasoning content when the active model emits it. |
| `chatgpt_base_url` | string | Override the base URL used during the ChatGPT login flow. |
| `experimental_instructions_file` | string (path) | Experimental replacement for built-in instructions instead of `AGENTS.md`. |
| `experimental_use_exec_command_tool` | boolean | Deprecated; use `[features].unified_exec` or `codex --enable unified_exec`. |
| `projects.<path>.trust_level` | string | Mark a project or worktree as trusted (only `"trusted"` is recognized). |
| `tools.web_search` | boolean | Deprecated; use `[features].web_search_request` or `codex --enable web_search_request`. |
| `tools.view_image` | boolean | Deprecated; use `[features].view_image_tool` or `codex --enable view_image_tool`. |
| `forced_login_method` | chatgpt \| api | Restrict Codex to a specific authentication method. |
| `forced_chatgpt_workspace_id` | string (uuid) | Limit ChatGPT logins to a specific workspace identifier. |