| Need | Default options | Why it's needed |
| --- | --- | --- |
| Analysis stack | pandas with matplotlib or seaborn | Good defaults for import, profiling, joins, cleaning, and the first round of charts. |
Turn messy data into clear analysis and visualizations.
Use Codex to clean data, join sources, explore hypotheses, model results, and package the output as a reusable artifact.
At its core, data analysis is about using data to inform decisions. The goal isn’t analysis for its own sake. It’s to produce an artifact that helps someone act: a chart for leadership, an experiment readout for a product team, a model evaluation for researchers, or a dashboard that guides daily operations.
A useful framework, popularized by R for Data Science, is a loop: import and tidy data, then iterate between transform, visualize, and model to build understanding before you communicate results. Programming surrounds that whole cycle.
Codex fits well into this workflow. It helps you move around the loop faster by cleaning data, exploring hypotheses, generating analyses, and producing reproducible artifacts. The target isn’t a one-off notebook. The target is a workflow that other people can review, trust, and rerun.
Choose one concrete question you want to answer with your data.
The more specific the question, the better. It will help Codex understand what you want to achieve and how to help you get there.
As an example, we’ll explore the following question:
To what extent do houses near the highway have lower property values?
Suppose one dataset contains property values or sale prices, and another contains location, parcel, or highway-proximity information. The work isn’t only to run a model. It’s to make the inputs trustworthy, document the joins, pressure-test the result, and end with an artifact that somebody else can use.
When you start a new data analysis project, you need to set up the environment and define the rules of the project.
To learn more about how to install and use skills, see our skills documentation.
Before touching the data, tell Codex how to behave in the repo. Put personal defaults in `~/.codex/AGENTS.md`, and put project rules in the repository `AGENTS.md`.
A small AGENTS.md is often enough:
```markdown
## Data analysis defaults

- Use `uv run` or the project's existing Python environment.
- Keep source data in `data/raw/` and write cleaned data to `data/processed/`.
- Put exploratory notebooks in `analysis/` and final artifacts in `output/`.
- Never overwrite raw files.
- Prefer scripts or checked-in notebooks over unnamed scratch cells.
- Before merging datasets, report candidate keys, null rates, and join coverage.
```
If the repo doesn’t already define a Python environment, ask Codex to create a reproducible setup and explain how to run it. For data analysis work, that step matters more than jumping straight into charts.
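If Codex sets up a fresh environment, the result can be as small as a `pyproject.toml`. The project name and version pins below are illustrative, not prescriptive:

```toml
[project]
name = "highway-analysis"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "pandas>=2.0",
    "matplotlib>=3.8",
]
```

With a file like this in place, `uv run` can execute any analysis script against a reproducible environment.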
Often the fastest way to start is to paste the file path and ask Codex to inspect it. This is where Codex helps you answer basic but important questions: what each column holds, how complete it is, and whether the types make sense. Don’t ask for conclusions yet. Ask for inventory and explanation first.
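That kind of inventory is easy to sanity-check yourself. Here is a minimal sketch in pandas; the toy frame stands in for whatever file you point Codex at, and the column names are invented for illustration:

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Inventory each column: dtype, null rate, distinct count, and one example value."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "null_rate": df.isna().mean().round(3),
        "n_unique": df.nunique(),
        "example": df.apply(lambda s: s.dropna().iloc[0] if s.notna().any() else None),
    })

# Hypothetical stand-in for df = pd.read_csv("data/raw/sales.csv")
df = pd.DataFrame({"price": [250_000, None, 410_000], "zip": ["02139", "02139", None]})
print(profile(df))
```

A table like this surfaces missing-value problems and type surprises before any analysis starts.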
Most real work starts here. You have two or more datasets, the primary key isn’t clear, and a naive merge could lose data or create duplicates.
Ask Codex to profile the merge before performing it.
If you need to derive the best key, such as a normalized address, a parcel identifier built from a few columns, or a location join, make Codex explain the tradeoffs and edge cases before you accept the merge.
Exploratory data analysis is where Codex benefits from clean isolation. One worktree can test address cleanup or feature engineering while another focuses on charts or alternate model directions. That keeps each diff reviewable and prevents one long thread from mixing incompatible ideas.
The Codex app includes built-in worktree support. If you are working in a terminal, plain Git worktrees work well too:
```shell
git worktree add ../analysis-highway-eda -b analysis/highway-eda
git worktree add ../analysis-model-comparison -b analysis/highway-modeling
```
In the running example, this step is where you would compare homes near the highway against homes farther away, examine outliers, inspect missing-value patterns, and decide whether the observed effect looks real or reflects neighborhood composition, home size, or other factors.
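A first cut at that comparison is a grouped summary. The sketch below uses an invented `near_highway` flag and toy prices; the size-adjusted column hints at why a naive gap can reflect composition rather than the highway itself:

```python
import pandas as pd

# Hypothetical cleaned dataset; column names are assumptions for illustration.
homes = pd.DataFrame({
    "near_highway": [True, True, False, False, False],
    "price": [310_000, 295_000, 360_000, 340_000, 400_000],
    "sqft": [1400, 1350, 1500, 1450, 1800],
})

# Raw gap, then a per-square-foot view that partially controls for home size.
summary = homes.groupby("near_highway").agg(
    n=("price", "size"),
    median_price=("price", "median"),
    median_price_per_sqft=("price", lambda s: (s / homes.loc[s.index, "sqft"]).median()),
)
print(summary)
```

If the per-square-foot gap is much smaller than the raw gap, home size is doing much of the work, and the model step needs to control for it explicitly.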
Not every analysis needs a complex model. Start with an interpretable baseline.
For the highway question, a sensible first pass is a regression or other transparent model that estimates the relationship between highway proximity and property value while controlling for relevant factors such as size, age, and location.
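As a sketch of such a baseline, here is an ordinary least squares fit with plain NumPy. The data is synthetic and the effect sizes are invented purely to show the mechanics, not drawn from any real dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for the cleaned, joined dataset.
sqft = rng.uniform(800, 3000, n)
age = rng.uniform(0, 80, n)
near_highway = rng.integers(0, 2, n)

# Invented generating process: proximity costs about $20k, all else equal.
price = 100 * sqft - 500 * age - 20_000 * near_highway + rng.normal(0, 5_000, n)

# OLS with an intercept: every coefficient has a direct interpretation.
X = np.column_stack([np.ones(n), sqft, age, near_highway])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(dict(zip(["intercept", "sqft", "age", "near_highway"], coef.round(1))))
```

Because the model is linear, the `near_highway` coefficient reads directly as the estimated dollar gap holding size and age fixed, which is exactly the kind of statement a stakeholder can act on.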
Ask Codex to be explicit about its modeling assumptions, the controls it includes, and how to interpret each coefficient.
If the first model is weak, that’s still useful. It tells you whether the problem is the model, the features, the join quality, or the question itself.
The analysis is only useful when someone else can consume it. Ask Codex to produce the artifact the audience needs:
- A `.docx` brief using `$doc` when formatting and tables matter.
- A PDF using `$pdf`.
- A shareable URL using `$vercel-deploy`.

This is also where you ask for caveats. If the join quality is imperfect, sampling bias is present, or the model assumptions are fragile, Codex should say that plainly in the deliverable.
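Whatever the final format, the packaging step is mechanical once the numbers exist. A minimal sketch that writes the headline findings and caveats into a markdown brief under `output/`; every figure below is a placeholder, not a real result:

```python
from pathlib import Path

# Placeholder values; in practice these come from the model and join-QA steps.
findings = {
    "estimated_gap": "lower median value within the highway buffer",
    "join_coverage": "share of sale records matched to a parcel",
}
caveats = [
    "Unmatched sales are excluded from the estimate.",
    "The model controls for size and age, not for school district.",
]

out = Path("output")
out.mkdir(exist_ok=True)
lines = ["# Highway proximity and property values", ""]
lines += [f"- **{key}**: {value}" for key, value in findings.items()]
lines += ["", "## Caveats", ""] + [f"- {c}" for c in caveats]
(out / "brief.md").write_text("\n".join(lines) + "\n")
print((out / "brief.md").read_text())
```

Putting the caveats in the artifact itself, rather than in the chat thread, is what makes the deliverable trustworthy on its own.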
The curated skills that fit this workflow especially well are:
- `$spreadsheet` for CSV, TSV, and Excel editing or exports.
- `$jupyter-notebook` when the deliverable should stay notebook-native.
- `$doc` and `$pdf` for stakeholder-facing outputs.
- `$vercel-deploy` when you want to share the result as a URL.

Once the workflow stabilizes, create repo-local skills for the repeated parts, such as `refresh-data`, `merge-and-qa`, or `publish-weekly-report`. That’s a better long-term pattern than pasting the same procedural prompt into every thread.
- Set Up the Analysis Environment
- Load the Dataset and Explain It
- Profile the Merge Before You Join
- Open a Fresh Exploration Worktree
- Build an Interpretable First Model
- Package the Results for Stakeholders
| Need | Default options | Why it's needed |
| --- | --- | --- |
| Modeling | | Start with interpretable baselines before moving to more complex predictive models. |