
Runs

Get eval runs
evals.runs.list(eval_id: str, **kwargs: RunListParams) -> SyncCursorPage[RunListResponse]
GET/evals/{eval_id}/runs
Create eval run
evals.runs.create(eval_id: str, **kwargs: RunCreateParams) -> RunCreateResponse
POST/evals/{eval_id}/runs
Get an eval run
evals.runs.retrieve(run_id: str, **kwargs: RunRetrieveParams) -> RunRetrieveResponse
GET/evals/{eval_id}/runs/{run_id}
Cancel eval run
evals.runs.cancel(run_id: str, **kwargs: RunCancelParams) -> RunCancelResponse
POST/evals/{eval_id}/runs/{run_id}
Delete eval run
evals.runs.delete(run_id: str, **kwargs: RunDeleteParams) -> RunDeleteResponse
DELETE/evals/{eval_id}/runs/{run_id}
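
The five endpoints above share the /evals/{eval_id}/runs route family. As a minimal sketch, the hypothetical helper below (not part of the SDK) shows how the path parameters compose into the (method, path) pairs listed above:

```python
def run_routes(eval_id: str, run_id: str) -> dict:
    """Map each run operation to its (method, path) pair, per the routes above."""
    base = f"/evals/{eval_id}/runs"
    return {
        "list":     ("GET",    base),
        "create":   ("POST",   base),
        "retrieve": ("GET",    f"{base}/{run_id}"),
        "cancel":   ("POST",   f"{base}/{run_id}"),
        "delete":   ("DELETE", f"{base}/{run_id}"),
    }

print(run_routes("eval_123", "run_456")["cancel"])
# → ('POST', '/evals/eval_123/runs/run_456')
```

Note that cancel and delete differ only in HTTP method, not path.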
Models
class CreateEvalCompletionsRunDataSource:

A CompletionsRunDataSource object describing a model sampling configuration.

source: Source

Determines what populates the item namespace in this run's data source.

Accepts one of the following:
class SourceFileContent:
content: List[SourceFileContentContent]

The content of the jsonl file.

item: Dict[str, object]
sample: Optional[Dict[str, object]]
type: Literal["file_content"]

The type of jsonl source. Always file_content.

class SourceFileID:
id: str

The identifier of the file.

type: Literal["file_id"]

The type of jsonl source. Always file_id.
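
Both jsonl source shapes can be written as plain dicts. A sketch, with invented placeholder values:

```python
# Inline rows: each entry carries an `item` dict (and optionally a `sample`).
file_content_source = {
    "type": "file_content",
    "content": [
        {"item": {"question": "What is 2 + 2?", "expected": "4"}},
        {"item": {"question": "Capital of France?", "expected": "Paris"}},
    ],
}

# Previously uploaded JSONL file, referenced by its file ID.
file_id_source = {
    "type": "file_id",
    "id": "file-abc123",  # placeholder identifier
}
```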

class SourceStoredCompletions:

A StoredCompletionsRunDataSource configuration describing a set of filters.

type: Literal["stored_completions"]

The type of source. Always stored_completions.

created_after: Optional[int]

An optional Unix timestamp to filter items created after this time.

created_before: Optional[int]

An optional Unix timestamp to filter items created before this time.

limit: Optional[int]

An optional maximum number of items to return.

metadata: Optional[Metadata]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model: Optional[str]

An optional model to filter by (e.g., 'gpt-4o').
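
The filter fields above combine into a single source object; every filter is optional. A hedged sketch with placeholder values (selecting one model's stored completions from the last seven days):

```python
import time

now = int(time.time())
stored_completions_source = {
    "type": "stored_completions",
    "created_after": now - 7 * 24 * 3600,  # Unix seconds
    "created_before": now,
    "model": "gpt-4o",
    "limit": 50,
    "metadata": {"project": "demo"},  # placeholder metadata filter
}
```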

type: Literal["completions"]

The type of run data source. Always completions.

input_messages: Optional[InputMessages]

Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory) or a template with variable references to the item namespace.

Accepts one of the following:
class InputMessagesTemplate:
template: List[InputMessagesTemplateTemplate]

A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.

Accepts one of the following:
class EasyInputMessage:

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: Union[str, ResponseInputMessageContentList]

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
str

A text input to the model.

Accepts one of the following:
class ResponseInputText:

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class ResponseInputImage:

An image input to the model. Learn about image inputs.

detail: Literal["low", "high", "auto"]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: Literal["input_image"]

The type of the input item. Always input_image.

file_id: Optional[str]

The ID of the file to be sent to the model.

image_url: Optional[str]

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

class ResponseInputFile:

A file input to the model.

type: Literal["input_file"]

The type of the input item. Always input_file.

file_data: Optional[str]

The content of the file to be sent to the model.

file_id: Optional[str]

The ID of the file to be sent to the model.

file_url: Optional[str]

The URL of the file to be sent to the model.

filename: Optional[str]

The name of the file to be sent to the model.

role: Literal["user", "assistant", "system", "developer"]

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type: Optional[Literal["message"]]

The type of the message input. Always message.

class InputMessagesTemplateTemplateEvalItem:

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: InputMessagesTemplateTemplateEvalItemContent

Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.

Accepts one of the following:
str

A text input to the model.

class ResponseInputText:

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class InputMessagesTemplateTemplateEvalItemContentOutputText:

A text output from the model.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

class InputMessagesTemplateTemplateEvalItemContentInputImage:

An image input block used within EvalItem content arrays.

image_url: str

The URL of the image input.

type: Literal["input_image"]

The type of the image input. Always input_image.

detail: Optional[str]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio:

An audio input to the model.

input_audio: InputAudio
data: str

Base64-encoded audio data.

format: Literal["mp3", "wav"]

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
"mp3"
"wav"
type: Literal["input_audio"]

The type of the input item. Always input_audio.
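
An input_audio content part wraps base64-encoded audio bytes. A sketch (the bytes here are dummy data, not a real WAV payload):

```python
import base64

raw_bytes = b"\x00\x01fake-wav-bytes"  # placeholder, not real audio
audio_part = {
    "type": "input_audio",
    "input_audio": {
        "data": base64.b64encode(raw_bytes).decode("ascii"),
        "format": "wav",  # "mp3" is the other supported format
    },
}
```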

List[GraderInputItem]
Accepts one of the following:
str

A text input to the model.

class ResponseInputText:

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class GraderInputItemOutputText:

A text output from the model.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

class GraderInputItemInputImage:

An image input block used within EvalItem content arrays.

image_url: str

The URL of the image input.

type: Literal["input_image"]

The type of the image input. Always input_image.

detail: Optional[str]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio:

An audio input to the model.

input_audio: InputAudio
data: str

Base64-encoded audio data.

format: Literal["mp3", "wav"]

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
"mp3"
"wav"
type: Literal["input_audio"]

The type of the input item. Always input_audio.

role: Literal["user", "assistant", "system", "developer"]

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type: Optional[Literal["message"]]

The type of the message input. Always message.

type: Literal["template"]

The type of input messages. Always template.

class InputMessagesItemReference:
item_reference: str

A reference to a variable in the item namespace, e.g. "item.input_trajectory".

type: Literal["item_reference"]

The type of input messages. Always item_reference.
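
The two input_messages shapes, written as plain dicts. The template form mixes a fixed system message with a {{item.*}} variable reference; the item_reference form points at a prebuilt trajectory. Message content strings here are invented examples:

```python
template_messages = {
    "type": "template",
    "template": [
        {"role": "system", "content": "Answer concisely."},
        {"role": "user", "content": "{{item.question}}"},  # filled per row
    ],
}

reference_messages = {
    "type": "item_reference",
    "item_reference": "item.input_trajectory",
}
```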

model: Optional[str]

The name of the model to use for generating completions (e.g. "o3-mini").

sampling_params: Optional[SamplingParams]
max_completion_tokens: Optional[int]

The maximum number of tokens in the generated output.

reasoning_effort: Optional[ReasoningEffort]

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
response_format: Optional[SamplingParamsResponseFormat]

An object specifying the format that the model must output.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
class ResponseFormatText:

Default response format. Used to generate text responses.

type: Literal["text"]

The type of response format being defined. Always text.

class ResponseFormatJSONSchema:

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

json_schema: JSONSchema

Structured Outputs configuration options, including a JSON Schema.

name: str

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: Optional[str]

A description of what the response format is for, used by the model to determine how to respond in the format.

schema: Optional[Dict[str, object]]

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

strict: Optional[bool]

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

type: Literal["json_schema"]

The type of response format being defined. Always json_schema.

class ResponseFormatJSONObject:

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: Literal["json_object"]

The type of response format being defined. Always json_object.

seed: Optional[int]

A seed value to initialize the randomness during sampling.

temperature: Optional[float]

A higher temperature increases randomness in the outputs.

tools: Optional[List[ChatCompletionFunctionTool]]

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

type: Literal["function"]

The type of the tool. Currently, only function is supported.

top_p: Optional[float]

An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
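
Putting the fields above together, a sampling_params object might look like the following sketch (the JSON schema is an invented example, not a required shape):

```python
sampling_params = {
    "max_completion_tokens": 256,
    "seed": 42,           # reproducible sampling
    "temperature": 0.2,   # low randomness
    "top_p": 1.0,         # include all tokens
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "verdict",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"answer": {"type": "string"}},
                "required": ["answer"],
                "additionalProperties": False,
            },
        },
    },
}
```

With strict set to True, Structured Outputs constrains the model to the supplied schema; { "type": "text" } or { "type": "json_object" } could be substituted for the response_format.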

class CreateEvalJSONLRunDataSource:

A JsonlRunDataSource object that specifies a JSONL file matching the eval.

source: Source

Determines what populates the item namespace in the data source.

Accepts one of the following:
class SourceFileContent:
content: List[SourceFileContentContent]

The content of the jsonl file.

item: Dict[str, object]
sample: Optional[Dict[str, object]]
type: Literal["file_content"]

The type of jsonl source. Always file_content.

class SourceFileID:
id: str

The identifier of the file.

type: Literal["file_id"]

The type of jsonl source. Always file_id.

type: Literal["jsonl"]

The type of data source. Always jsonl.
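
A complete jsonl data source, combined with the create endpoint above. The SDK call at the end is only a hedged sketch; the eval and file IDs are placeholders:

```python
data_source = {
    "type": "jsonl",
    "source": {"type": "file_id", "id": "file-abc123"},  # placeholder file ID
}

# With the official openai package, creating a run would look roughly like:
#
#   from openai import OpenAI
#   client = OpenAI()
#   run = client.evals.runs.create(
#       "eval_123",               # placeholder eval ID
#       name="nightly-regression",
#       data_source=data_source,
#   )
```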

class EvalAPIError:

An object representing an error response from the Eval API.

code: str

The error code.

message: str

The error message.

Output Items

Get eval run output items
evals.runs.output_items.list(run_id: str, **kwargs: OutputItemListParams) -> SyncCursorPage[OutputItemListResponse]
GET/evals/{eval_id}/runs/{run_id}/output_items
Get an output item of an eval run
evals.runs.output_items.retrieve(output_item_id: str, **kwargs: OutputItemRetrieveParams) -> OutputItemRetrieveResponse
GET/evals/{eval_id}/runs/{run_id}/output_items/{output_item_id}
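
Output items nest one level deeper than runs. As with the run endpoints, a hypothetical helper (not part of the SDK) makes the path composition explicit:

```python
def output_item_routes(eval_id: str, run_id: str, output_item_id: str) -> dict:
    """(method, path) pairs for the output-item endpoints listed above."""
    base = f"/evals/{eval_id}/runs/{run_id}/output_items"
    return {
        "list":     ("GET", base),
        "retrieve": ("GET", f"{base}/{output_item_id}"),
    }

print(output_item_routes("eval_1", "run_2", "oi_3")["retrieve"])
# → ('GET', '/evals/eval_1/runs/run_2/output_items/oi_3')
```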