o4-mini-deep-research
Faster, more affordable deep research model
Price: $2.00 per 1M input tokens · $8.00 per 1M output tokens

o4-mini-deep-research is our faster, more affordable deep research model—ideal for tackling complex, multi-step research tasks. It can search and synthesize information from across the internet as well as from your own data, brought in through MCP connectors.

Learn more about how to use this model in our deep research guide.
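As a rough illustration of the workflow, the sketch below calls the model through the Responses API with web search enabled. It assumes the official openai Python SDK and an OPENAI_API_KEY in the environment; the web_search_preview tool type and the prompt are illustrative assumptions, and MCP connector configuration is omitted. See the deep research guide for the authoritative setup.

```python
# Sketch: run a deep research task via the Responses API.
# Assumes the official openai Python SDK; tool choice and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o4-mini-deep-research",
    input="Summarize the current state of research on solid-state battery electrolytes.",
    tools=[{"type": "web_search_preview"}],  # allow the model to search the web
)

print(response.output_text)  # the final synthesized report
```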

200,000 context window
100,000 max output tokens
Jun 01, 2024 knowledge cutoff
Reasoning token support
Pricing
Pricing is based on the number of tokens used, or other metrics based on the model type. For tool-specific models, like search and computer use, there’s a fee per tool call. See details in the pricing page.
Text tokens (per 1M tokens)
Input          $2.00
Cached input   $0.50
Output         $8.00
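To make the rates concrete, here is a small arithmetic sketch that turns token counts into dollars at the listed prices; the token counts are invented example values.

```python
# Sketch: estimate the cost of one request at the listed per-1M-token rates.
INPUT_PER_M = 2.00    # $ per 1M input tokens
CACHED_PER_M = 0.50   # $ per 1M cached input tokens
OUTPUT_PER_M = 8.00   # $ per 1M output tokens

def estimate_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    return (
        input_tokens * INPUT_PER_M / 1_000_000
        + cached_tokens * CACHED_PER_M / 1_000_000
        + output_tokens * OUTPUT_PER_M / 1_000_000
    )

# Example: 120k fresh input, 40k cached input, 30k output tokens.
print(f"${estimate_cost(120_000, 40_000, 30_000):.2f}")  # -> $0.50
```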
Quick comparison (input price per 1M tokens)
o4-mini-deep-research   $2.00
o3                      $2.00
o3-mini                 $1.10
Modalities
Text: Input and output
Image: Input only
Audio: Not supported
Video: Not supported
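Because the model accepts image input (but cannot emit images), a request can attach an image alongside text. The sketch below assumes the Responses API content-part shapes (input_text, input_image) and uses a placeholder image URL.

```python
# Sketch: pass an image (input only) together with a text prompt.
# The image URL is a placeholder; content-part types follow the Responses API.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o4-mini-deep-research",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What does this chart suggest about market share trends?"},
            {"type": "input_image", "image_url": "https://example.com/chart.png"},
        ],
    }],
    tools=[{"type": "web_search_preview"}],
)
print(response.output_text)
```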
Endpoints
Chat Completions: v1/chat/completions
Responses: v1/responses
Realtime: v1/realtime
Assistants: v1/assistants
Batch: v1/batch
Fine-tuning: v1/fine-tuning
Embeddings: v1/embeddings
Image generation: v1/images/generations
Videos: v1/videos
Image edit: v1/images/edits
Speech generation: v1/audio/speech
Transcription: v1/audio/transcriptions
Translation: v1/audio/translations
Moderation: v1/moderations
Completions (legacy): v1/completions
Features
Streaming: Supported
Function calling: Not supported
Structured outputs: Not supported
Fine-tuning: Not supported
Distillation: Not supported
Predicted outputs: Not supported
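Streaming is the one feature listed as supported. The sketch below assumes the Responses API's stream=True flag and the response.output_text.delta event name; treat both as assumptions rather than a verified recipe for this model.

```python
# Sketch: stream output as it is generated instead of waiting for the full report.
# Event type names follow the Responses API streaming events and are assumed here.
from openai import OpenAI

client = OpenAI()

stream = client.responses.create(
    model="o4-mini-deep-research",
    input="Compare recent approaches to retrieval-augmented generation.",
    tools=[{"type": "web_search_preview"}],
    stream=True,
)

for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)  # print text chunks as they arrive
```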
Snapshots
Snapshots let you lock in a specific version of the model so that performance and behavior remain consistent. Below is a list of all available snapshots and aliases for o4-mini-deep-research.
o4-mini-deep-research (alias) → o4-mini-deep-research-2025-06-26
o4-mini-deep-research-2025-06-26 (snapshot)
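A sketch of pinning the dated snapshot instead of the alias, so behavior does not shift if the alias is later repointed; the prompt and tool choice are placeholders.

```python
# Sketch: pin the dated snapshot rather than the moving alias so behavior
# stays fixed until you deliberately upgrade.
from openai import OpenAI

client = OpenAI()

PINNED_MODEL = "o4-mini-deep-research-2025-06-26"  # dated snapshot ID

response = client.responses.create(
    model=PINNED_MODEL,
    input="Survey open problems in federated learning.",
    tools=[{"type": "web_search_preview"}],
)
print(response.output_text)
```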
Rate limits
Rate limits ensure fair and reliable access to the API by placing specific caps on requests or tokens used within a given time period. Your usage tier determines how high these limits are set and automatically increases as you send more requests and spend more on the API.
Tier     RPM      TPM            Batch queue limit
Free     Not supported
Tier 1   1,000    200,000        200,000
Tier 2   2,000    2,000,000      300,000
Tier 3   5,000    4,000,000      500,000
Tier 4   10,000   10,000,000     2,000,000
Tier 5   30,000   150,000,000    10,000,000
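Exceeding your tier's RPM or TPM cap surfaces as a rate-limit error in the client. A common mitigation, sketched below under the assumption that you use the official Python SDK (which exposes RateLimitError), is exponential backoff; the retry schedule here is arbitrary.

```python
# Sketch: retry with exponential backoff when a rate limit (HTTP 429) is hit.
import time

from openai import OpenAI, RateLimitError

client = OpenAI()

def create_with_backoff(max_retries: int = 5, **kwargs):
    """Call responses.create, retrying with exponential backoff on rate limits."""
    for attempt in range(max_retries):
        try:
            return client.responses.create(**kwargs)
        except RateLimitError:
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, 8s, 16s
    raise RuntimeError("still rate limited after retries")

response = create_with_backoff(
    model="o4-mini-deep-research",
    input="Outline the competitive landscape for small modular reactors.",
    tools=[{"type": "web_search_preview"}],
)
print(response.output_text)
```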