Evals
Manage and run evals in the OpenAI platform.
List evals
Create eval
Get an eval
Update an eval
Delete an eval
Models
EvalCustomDataSourceConfig = object { schema, type } A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces.
The response schema defines the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
The JSON schema for the run data source items. Learn how to build JSON schemas here.
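As a sketch, a custom data source config with an item schema might be built like this. The field names inside the schema (`question`, `expected_answer`) are hypothetical; each run datum would then supply them under the `item` namespace.

```python
# Hypothetical item schema for a custom eval data source. Each run datum
# must provide these fields under the `item` namespace.
item_schema = {
    "type": "object",
    "properties": {
        "question": {"type": "string"},
        "expected_answer": {"type": "string"},
    },
    "required": ["question", "expected_answer"],
}

# A custom data source config wrapping that schema (assumed request shape).
data_source_config = {
    "type": "custom",
    "item_schema": item_schema,
}
```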
EvalStoredCompletionsDataSourceConfig = object { schema, type, metadata } Deprecated in favor of LogsDataSourceConfig.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
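The documented limits can be checked client-side before sending a request. This helper is a sketch, not part of the API:

```python
def validate_metadata(metadata: dict) -> None:
    """Check the documented metadata limits: at most 16 pairs,
    64-character keys, 512-character values, all strings."""
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid metadata key: {key!r}")
        if not isinstance(value, str) or len(value) > 512:
            raise ValueError(f"invalid metadata value for key {key!r}")

# A small, valid metadata object (illustrative values).
validate_metadata({"project": "qa-evals", "owner": "search-team"})
```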
Evals Runs
Get eval runs
Create eval run
Get an eval run
Cancel eval run
Delete eval run
Models
CreateEvalCompletionsRunDataSource = object { source, type, input_messages, 2 more } A CompletionsRunDataSource object describing a model sampling configuration.
source: object { content, type } or object { id, type } or object { type, created_after, created_before, 3 more } Determines what populates the item namespace in this run's data source.
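For illustration, the three source shapes might be constructed like this. The type names and field values below are assumptions based on the variants listed above, and the IDs are hypothetical:

```python
# Inline file content: rows are provided directly in the request body.
content_source = {
    "type": "file_content",
    "content": [{"item": {"question": "What is 2 + 2?", "expected_answer": "4"}}],
}

# Reference to a previously uploaded file by ID (hypothetical ID).
file_source = {"type": "file_id", "id": "file-abc123"}

# Filters over stored completions (fields per StoredCompletionsRunDataSource;
# timestamps are Unix seconds, values illustrative).
stored_source = {"type": "stored_completions", "created_after": 1700000000}
```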
StoredCompletionsRunDataSource = object { type, created_after, created_before, 3 more } A StoredCompletionsRunDataSource configuration describing a set of filters.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
input_messages: optional object { template, type } or object { item_reference, type } Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.
TemplateInputMessages = object { template, type }
template: array of EasyInputMessage { content, role, phase, type } or object { content, role, type } A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.
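For example, a template input_messages object referencing the item namespace might look like this (the message contents are illustrative):

```python
input_messages = {
    "type": "template",
    "template": [
        {"role": "system", "content": "You are a concise assistant."},
        # {{item.question}} is substituted from each datum's `item` namespace.
        {"role": "user", "content": "Answer briefly: {{item.question}}"},
    ],
}
```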
EasyInputMessage = object { content, role, phase, type } A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response.
Can also contain previous assistant responses.
A list of one or many input items to the model, containing different content
types.
ResponseInputImage = object { detail, type, file_id, image_url } An image input to the model. Learn about image inputs.
role: "user" or "assistant" or "system" or "developer" The role of the message input. One of user, assistant, system, or developer.
phase: optional "commentary" or "final_answer" Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages; dropping it can degrade performance. Not used for user messages.
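A minimal sketch of preserving phase on a follow-up turn (message contents are illustrative, and the message shape is assumed from the fields above):

```python
# Assistant messages returned from a prior turn, with their phase labels.
prior_assistant_messages = [
    {"role": "assistant", "phase": "commentary",
     "content": "Checking the failing test first."},
    {"role": "assistant", "phase": "final_answer",
     "content": "Fixed: the fixture path was wrong."},
]

# When building the follow-up request, resend those messages unchanged;
# stripping `phase` can degrade performance on models that use it.
follow_up_input = prior_assistant_messages + [
    {"role": "user", "content": "Great, now update the changelog."},
]
```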
EvalMessageObject = object { content, role, type } A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
InputImage = object { image_url, type, detail } An image input block used within EvalItem content arrays.
GraderInputs = array of string or ResponseInputText { text, type } or object { text, type } or 2 more A list of inputs, each of which may be either an input text, output text, input image, or input audio object.
InputImage = object { image_url, type, detail } An image input block used within EvalItem content arrays.
sampling_params: optional object { max_completion_tokens, reasoning_effort, response_format, 4 more }
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort. xhigh is supported for all models after gpt-5.1-codex-max.
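As an illustration, a sampling_params object selecting a low reasoning effort could be written as follows; the token limit is an example value, and which efforts a given model accepts is model-dependent as described above:

```python
sampling_params = {
    "max_completion_tokens": 1024,
    # One of: none, minimal, low, medium, high, xhigh (model-dependent).
    "reasoning_effort": "low",
}
```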
response_format: optional ResponseFormatText { type } or ResponseFormatJSONSchema { json_schema, type } or ResponseFormatJSONObject { type } An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables
Structured Outputs which ensures the model will match your supplied JSON
schema. Learn more in the Structured Outputs
guide.
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
ResponseFormatJSONSchema = object { json_schema, type } JSON Schema response format. Used to generate structured JSON responses.
Learn more about Structured Outputs.
json_schema: object { name, description, schema, strict } Structured Outputs configuration options, including a JSON Schema.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the response format is for, used by the model to determine how to respond in the format.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
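Putting the fields above together, a strict JSON Schema response format might look like this; the schema itself (a pass/fail grading verdict) is hypothetical:

```python
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "grade_result",
        "description": "A structured pass/fail grading verdict.",
        "schema": {
            "type": "object",
            "properties": {
                "passed": {"type": "boolean"},
                "reason": {"type": "string"},
            },
            "required": ["passed", "reason"],
            "additionalProperties": False,
        },
        # With strict adherence, the model always follows the exact schema.
        "strict": True,
    },
}
```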
CreateEvalJSONLRunDataSource = object { source, type } A JsonlRunDataSource object that specifies a JSONL file that matches the eval.
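For example, a matching JSONL file could be built like this. The fields are hypothetical and must match the eval's item schema:

```python
import json

rows = [
    {"item": {"question": "What is 2 + 2?", "expected_answer": "4"}},
    {"item": {"question": "Capital of France?", "expected_answer": "Paris"}},
]

# One JSON object per line, each wrapped under the `item` namespace.
jsonl_payload = "\n".join(json.dumps(row) for row in rows)
```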
Evals Runs Output Items