
Create eval run

evals.runs.create(eval_id: str, **kwargs: RunCreateParams) -> RunCreateResponse
POST/evals/{eval_id}/runs

Kicks off a new run for a given evaluation, specifying the data source and the model configuration to use for testing. The data source will be validated against the schema specified in the evaluation's config.
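For example, a run can be kicked off with an inline JSONL data source. The sketch below shows the payload shape only; the eval ID and the item fields are hypothetical:

```python
# Minimal "jsonl" data source with inline file_content items.
# Each entry's "item" dict populates the item namespace for that row.
data_source = {
    "type": "jsonl",
    "source": {
        "type": "file_content",
        "content": [
            {"item": {"question": "What is 2 + 2?", "answer": "4"}},
            {"item": {"question": "What is the capital of France?", "answer": "Paris"}},
        ],
    },
}

# With the official SDK this payload would be passed as:
#   client.evals.runs.create("eval_abc123", data_source=data_source)
```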

Parameters
eval_id: str
data_source: DataSource

Details about the run's data source.

Accepts one of the following:
class CreateEvalJSONLRunDataSource: …

A JsonlRunDataSource object that specifies a JSONL file matching the eval.

source: Source

Determines what populates the item namespace in the data source.

Accepts one of the following:
class SourceFileContent: …
content: List[SourceFileContentContent]

The content of the jsonl file.

item: Dict[str, object]
sample: Optional[Dict[str, object]]
type: Literal["file_content"]

The type of jsonl source. Always file_content.

class SourceFileID: …
id: str

The identifier of the file.

type: Literal["file_id"]

The type of jsonl source. Always file_id.

type: Literal["jsonl"]

The type of data source. Always jsonl.

class CreateEvalCompletionsRunDataSource: …

A CompletionsRunDataSource object describing a model sampling configuration.

source: Source

Determines what populates the item namespace in this run's data source.

Accepts one of the following:
class SourceFileContent: …
content: List[SourceFileContentContent]

The content of the jsonl file.

item: Dict[str, object]
sample: Optional[Dict[str, object]]
type: Literal["file_content"]

The type of jsonl source. Always file_content.

class SourceFileID: …
id: str

The identifier of the file.

type: Literal["file_id"]

The type of jsonl source. Always file_id.

class SourceStoredCompletions: …

A StoredCompletionsRunDataSource configuration describing a set of filters.

type: Literal["stored_completions"]

The type of source. Always stored_completions.

created_after: Optional[int]

An optional Unix timestamp to filter items created after this time.

created_before: Optional[int]

An optional Unix timestamp to filter items created before this time.

limit: Optional[int]

An optional maximum number of items to return.

metadata: Optional[Metadata]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model: Optional[str]

An optional model to filter by (e.g., 'gpt-4o').
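Taken together, a stored_completions source is a set of query filters over previously stored chat completions. A sketch with illustrative filter values (the timestamp and metadata key are hypothetical):

```python
# stored_completions source: selects stored chat completions to evaluate.
source = {
    "type": "stored_completions",
    "created_after": 1_700_000_000,   # Unix seconds; only items after this time
    "model": "gpt-4o",                # only completions from this model
    "limit": 100,                     # cap the number of items returned
    "metadata": {"project": "demo"},  # hypothetical metadata filter
}
```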

type: Literal["completions"]

The type of run data source. Always completions.

input_messages: Optional[InputMessages]

Used when sampling from a model. Dictates the structure of the messages passed into the model. Can be either a reference to a prebuilt trajectory (e.g., item.input_trajectory) or a template with variable references to the item namespace.

Accepts one of the following:
class InputMessagesTemplate: …
template: List[InputMessagesTemplateTemplate]

A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.

Accepts one of the following:
class EasyInputMessage: …

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: Union[str, ResponseInputMessageContentList]

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
str

A text input to the model.

Accepts one of the following:
class ResponseInputText: …

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class ResponseInputImage: …

An image input to the model. Learn about image inputs.

detail: Literal["low", "high", "auto"]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: Literal["input_image"]

The type of the input item. Always input_image.

file_id: Optional[str]

The ID of the file to be sent to the model.

image_url: Optional[str]

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

class ResponseInputFile: …

A file input to the model.

type: Literal["input_file"]

The type of the input item. Always input_file.

file_data: Optional[str]

The content of the file to be sent to the model.

file_id: Optional[str]

The ID of the file to be sent to the model.

file_url: Optional[str]

The URL of the file to be sent to the model.

filename: Optional[str]

The name of the file to be sent to the model.

role: Literal["user", "assistant", "system", "developer"]

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type: Optional[Literal["message"]]

The type of the message input. Always message.
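An EasyInputMessage with an array-valued content field can mix the item types above. A sketch with a hypothetical image URL:

```python
# EasyInputMessage whose content combines input_text and input_image items.
message = {
    "role": "user",
    "type": "message",
    "content": [
        {"type": "input_text", "text": "Describe this image."},
        {
            "type": "input_image",
            "image_url": "https://example.com/cat.png",  # hypothetical URL
            "detail": "auto",
        },
    ],
}
```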

class InputMessagesTemplateTemplateEvalItem: …

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: InputMessagesTemplateTemplateEvalItemContent

Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.

Accepts one of the following:
str

A text input to the model.

class ResponseInputText: …

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class InputMessagesTemplateTemplateEvalItemContentOutputText: …

A text output from the model.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

class InputMessagesTemplateTemplateEvalItemContentInputImage: …

An image input block used within EvalItem content arrays.

image_url: str

The URL of the image input.

type: Literal["input_image"]

The type of the image input. Always input_image.

detail: Optional[str]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio: …

An audio input to the model.

input_audio: InputAudio
data: str

Base64-encoded audio data.

format: Literal["mp3", "wav"]

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
"mp3"
"wav"
type: Literal["input_audio"]

The type of the input item. Always input_audio.
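An input_audio content item wraps the base64 payload and format in an input_audio object. A sketch with a placeholder payload:

```python
# input_audio content item; the data string is a placeholder, not real audio.
audio_item = {
    "type": "input_audio",
    "input_audio": {
        "data": "<base64-encoded-bytes>",  # placeholder for real base64 data
        "format": "wav",                   # one of "mp3" or "wav"
    },
}
```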

List[GraderInputItem]
Accepts one of the following:
str

A text input to the model.

class ResponseInputText: …

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class GraderInputItemOutputText: …

A text output from the model.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

class GraderInputItemInputImage: …

An image input block used within EvalItem content arrays.

image_url: str

The URL of the image input.

type: Literal["input_image"]

The type of the image input. Always input_image.

detail: Optional[str]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio: …

An audio input to the model.

input_audio: InputAudio
data: str

Base64-encoded audio data.

format: Literal["mp3", "wav"]

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
"mp3"
"wav"
type: Literal["input_audio"]

The type of the input item. Always input_audio.

role: Literal["user", "assistant", "system", "developer"]

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type: Optional[Literal["message"]]

The type of the message input. Always message.

type: Literal["template"]

The type of input messages. Always template.

class InputMessagesItemReference: …
item_reference: str

A reference to a variable in the item namespace, e.g. "item.input_trajectory".

type: Literal["item_reference"]

The type of input messages. Always item_reference.
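The two input_messages variants above can be sketched as follows; the item fields referenced in the template are hypothetical:

```python
# Variant 1: a template with {{item.*}} variable references.
template_messages = {
    "type": "template",
    "template": [
        {"role": "system", "content": "Answer concisely."},
        {"role": "user", "content": "{{item.question}}"},  # filled per item
    ],
}

# Variant 2: a reference to a prebuilt trajectory stored on each item.
reference_messages = {
    "type": "item_reference",
    "item_reference": "item.input_trajectory",
}
```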

model: Optional[str]

The name of the model to use for generating completions (e.g. "o3-mini").

sampling_params: Optional[SamplingParams]
max_completion_tokens: Optional[int]

The maximum number of tokens in the generated output.

reasoning_effort: Optional[ReasoningEffort]

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
response_format: Optional[SamplingParamsResponseFormat]

An object specifying the format that the model must output.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
class ResponseFormatText: …

Default response format. Used to generate text responses.

type: Literal["text"]

The type of response format being defined. Always text.

class ResponseFormatJSONSchema: …

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

json_schema: JSONSchema

Structured Outputs configuration options, including a JSON Schema.

name: str

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: Optional[str]

A description of what the response format is for, used by the model to determine how to respond in the format.

schema: Optional[Dict[str, object]]

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

strict: Optional[bool]

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

type: Literal["json_schema"]

The type of response format being defined. Always json_schema.

class ResponseFormatJSONObject: …

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: Literal["json_object"]

The type of response format being defined. Always json_object.
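Of the three response formats above, json_schema is the one recommended for structured output. A sketch with an illustrative schema and name:

```python
# json_schema response format enabling Structured Outputs.
# The "grade"/"score" names are hypothetical examples.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "grade",
        "strict": True,  # enforce exact schema adherence
        "schema": {
            "type": "object",
            "properties": {"score": {"type": "integer"}},
            "required": ["score"],
            "additionalProperties": False,
        },
    },
}
```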

seed: Optional[int]

A seed value to initialize the randomness during sampling.

temperature: Optional[float]

A higher temperature increases randomness in the outputs.

tools: Optional[List[ChatCompletionFunctionTool]]

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.

name: str

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: Optional[str]

A description of what the function does, used by the model to choose when and how to call the function.

parameters: Optional[FunctionParameters]

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

strict: Optional[bool]

Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.

type: Literal["function"]

The type of the tool. Currently, only function is supported.
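In Chat Completions, these fields sit under a "function" key next to the top-level "type". A sketch with a hypothetical function name and parameters:

```python
# Chat-completions function tool; name, description, and schema are examples.
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "strict": True,
    },
}
```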

top_p: Optional[float]

An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

class DataSourceCreateEvalResponsesRunDataSource: …

A ResponsesRunDataSource object describing a model sampling configuration.

source: DataSourceCreateEvalResponsesRunDataSourceSource

Determines what populates the item namespace in this run's data source.

Accepts one of the following:
class DataSourceCreateEvalResponsesRunDataSourceSourceFileContent: …
content: Iterable[DataSourceCreateEvalResponsesRunDataSourceSourceFileContentContent]

The content of the jsonl file.

item: Dict[str, object]
sample: Optional[Dict[str, object]]
type: Literal["file_content"]

The type of jsonl source. Always file_content.

class DataSourceCreateEvalResponsesRunDataSourceSourceFileID: …
id: str

The identifier of the file.

type: Literal["file_id"]

The type of jsonl source. Always file_id.

class DataSourceCreateEvalResponsesRunDataSourceSourceResponses: …

An EvalResponsesSource object describing a run data source configuration.

type: Literal["responses"]

The type of run data source. Always responses.

created_after: Optional[int]

Only include items created after this timestamp (inclusive). This is a query parameter used to select responses.

Minimum: 0
created_before: Optional[int]

Only include items created before this timestamp (inclusive). This is a query parameter used to select responses.

Minimum: 0
metadata: Optional[object]

Metadata filter for the responses. This is a query parameter used to select responses.

model: Optional[str]

The name of the model to find responses for. This is a query parameter used to select responses.

reasoning_effort: Optional[ReasoningEffort]

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
temperature: Optional[float]

Sampling temperature. This is a query parameter used to select responses.

tools: Optional[SequenceNotStr[str]]

List of tool names. This is a query parameter used to select responses.

top_p: Optional[float]

Nucleus sampling parameter. This is a query parameter used to select responses.

users: Optional[SequenceNotStr[str]]

List of user identifiers. This is a query parameter used to select responses.

type: Literal["responses"]

The type of run data source. Always responses.
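Like stored_completions, a responses source is a bundle of query filters over previously stored responses. A sketch with illustrative filter values:

```python
# "responses" source: query parameters selecting stored responses to evaluate.
source = {
    "type": "responses",
    "model": "gpt-4o",              # only responses generated by this model
    "created_after": 1_700_000_000, # Unix seconds, inclusive
    "temperature": 0.7,             # only responses sampled at this temperature
}
```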

input_messages: Optional[DataSourceCreateEvalResponsesRunDataSourceInputMessages]

Used when sampling from a model. Dictates the structure of the messages passed into the model. Can be either a reference to a prebuilt trajectory (e.g., item.input_trajectory) or a template with variable references to the item namespace.

Accepts one of the following:
class DataSourceCreateEvalResponsesRunDataSourceInputMessagesTemplate: …
template: Iterable[DataSourceCreateEvalResponsesRunDataSourceInputMessagesTemplateTemplate]

A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.

Accepts one of the following:
class DataSourceCreateEvalResponsesRunDataSourceInputMessagesTemplateTemplateChatMessage: …
content: str

The content of the message.

role: str

The role of the message (e.g. "system", "assistant", "user").

class DataSourceCreateEvalResponsesRunDataSourceInputMessagesTemplateTemplateEvalItem: …

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: DataSourceCreateEvalResponsesRunDataSourceInputMessagesTemplateTemplateEvalItemContent

Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.

Accepts one of the following:
str

A text input to the model.

class ResponseInputText: …

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class DataSourceCreateEvalResponsesRunDataSourceInputMessagesTemplateTemplateEvalItemContentOutputText: …

A text output from the model.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

class DataSourceCreateEvalResponsesRunDataSourceInputMessagesTemplateTemplateEvalItemContentInputImage: …

An image input block used within EvalItem content arrays.

image_url: str

The URL of the image input.

type: Literal["input_image"]

The type of the image input. Always input_image.

detail: Optional[str]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio: …

An audio input to the model.

input_audio: InputAudio
data: str

Base64-encoded audio data.

format: Literal["mp3", "wav"]

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
"mp3"
"wav"
type: Literal["input_audio"]

The type of the input item. Always input_audio.

List[GraderInputItem]

A list of inputs, each of which may be either an input text, output text, input image, or input audio object.

Accepts one of the following:
str

A text input to the model.

class ResponseInputText: …

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class GraderInputItemOutputText: …

A text output from the model.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

class GraderInputItemInputImage: …

An image input block used within EvalItem content arrays.

image_url: str

The URL of the image input.

type: Literal["input_image"]

The type of the image input. Always input_image.

detail: Optional[str]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio: …

An audio input to the model.

input_audio: InputAudio
data: str

Base64-encoded audio data.

format: Literal["mp3", "wav"]

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
"mp3"
"wav"
type: Literal["input_audio"]

The type of the input item. Always input_audio.

role: Literal["user", "assistant", "system", "developer"]

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type: Optional[Literal["message"]]

The type of the message input. Always message.

type: Literal["template"]

The type of input messages. Always template.

class DataSourceCreateEvalResponsesRunDataSourceInputMessagesItemReference: …
item_reference: str

A reference to a variable in the item namespace, e.g. "item.name".

type: Literal["item_reference"]

The type of input messages. Always item_reference.

model: Optional[str]

The name of the model to use for generating completions (e.g. "o3-mini").

sampling_params: Optional[DataSourceCreateEvalResponsesRunDataSourceSamplingParams]
max_completion_tokens: Optional[int]

The maximum number of tokens in the generated output.

reasoning_effort: Optional[ReasoningEffort]

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
seed: Optional[int]

A seed value to initialize the randomness during sampling.

temperature: Optional[float]

A higher temperature increases randomness in the outputs.

text: Optional[DataSourceCreateEvalResponsesRunDataSourceSamplingParamsText]

Configuration options for a text response from the model. Can be plain text or structured JSON data.

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
class ResponseFormatText: …

Default response format. Used to generate text responses.

type: Literal["text"]

The type of response format being defined. Always text.

class ResponseFormatTextJSONSchemaConfig: …

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

name: str

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Dict[str, object]

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

type: Literal["json_schema"]

The type of response format being defined. Always json_schema.

description: Optional[str]

A description of what the response format is for, used by the model to determine how to respond in the format.

strict: Optional[bool]

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

class ResponseFormatJSONObject: …

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: Literal["json_object"]

The type of response format being defined. Always json_object.
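In the Responses API the format object above sits under a "format" key of the text parameter; assuming the same nesting applies here, a sketch with an illustrative schema:

```python
# text configuration enabling Structured Outputs (Responses-style nesting).
# The "verdict"/"pass" names are hypothetical examples.
text = {
    "format": {
        "type": "json_schema",
        "name": "verdict",
        "schema": {
            "type": "object",
            "properties": {"pass": {"type": "boolean"}},
            "required": ["pass"],
            "additionalProperties": False,
        },
        "strict": True,
    }
}
```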

tools: Optional[Iterable[ToolParam]]

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code. Learn more about function calling.
Accepts one of the following:
class FunctionTool: …

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: str

The name of the function to call.

parameters: Optional[Dict[str, object]]

A JSON schema object describing the parameters of the function.

strict: Optional[bool]

Whether to enforce strict parameter validation. Default true.

type: Literal["function"]

The type of the function tool. Always function.

description: Optional[str]

A description of the function. Used by the model to determine whether or not to call the function.

class FileSearchTool: …

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: Literal["file_search"]

The type of the file search tool. Always file_search.

vector_store_ids: List[str]

The IDs of the vector stores to search.

filters: Optional[Filters]

A filter to apply.

Accepts one of the following:
class ComparisonFilter: …

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: str

The key to compare against the value.

type: Literal["eq", "ne", "gt", 3 more]

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: Union[str, float, bool, List[Union[str, float]]]

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
str
float
bool
List[Union[str, float]]
Accepts one of the following:
str
float
class CompoundFilter: …

Combine multiple filters using and or or.

filters: List[Filter]

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

Accepts one of the following:
class ComparisonFilter: …

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: str

The key to compare against the value.

type: Literal["eq", "ne", "gt", 3 more]

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: Union[str, float, bool, List[Union[str, float]]]

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
str
float
bool
List[Union[str, float]]
Accepts one of the following:
str
float
object
type: Literal["and", "or"]

Type of operation: and or or.

Accepts one of the following:
"and"
"or"
max_num_results: Optional[int]

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options: Optional[RankingOptions]

Ranking options for search.

ranker: Optional[Literal["auto", "default-2024-11-15"]]

The ranker to use for the file search.

Accepts one of the following:
"auto"
"default-2024-11-15"
score_threshold: Optional[float]

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.
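Putting the file_search fields together; the vector store ID below is a hypothetical placeholder:

```python
# file_search tool with result cap and ranking options.
tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_example123"],  # hypothetical vector store ID
    "max_num_results": 10,                  # between 1 and 50 inclusive
    "ranking_options": {"ranker": "auto", "score_threshold": 0.5},
}
```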

class ComputerTool: …

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: int

The height of the computer display.

display_width: int

The width of the computer display.

environment: Literal["windows", "mac", "linux", 2 more]

The type of computer environment to control.

Accepts one of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: Literal["computer_use_preview"]

The type of the computer use tool. Always computer_use_preview.

class WebSearchTool: …

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: Literal["web_search", "web_search_2025_08_26"]

The type of the web search tool. One of web_search or web_search_2025_08_26.

Accepts one of the following:
"web_search"
"web_search_2025_08_26"
filters: Optional[Filters]

Filters for the search.

allowed_domains: Optional[List[str]]

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size: Optional[Literal["low", "medium", "high"]]

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location: Optional[UserLocation]

The approximate location of the user.

city: Optional[str]

Free text input for the city of the user, e.g. San Francisco.

country: Optional[str]

The two-letter ISO country code of the user, e.g. US.

region: Optional[str]

Free text input for the region of the user, e.g. California.

timezone: Optional[str]

The IANA timezone of the user, e.g. America/Los_Angeles.

type: Optional[Literal["approximate"]]

The type of location approximation. Always approximate.
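A web_search tool combining the fields above; the domain filter reuses the example from the docs:

```python
# web_search tool restricted to one domain, with default context size.
tool = {
    "type": "web_search",
    "filters": {"allowed_domains": ["pubmed.ncbi.nlm.nih.gov"]},
    "search_context_size": "medium",
    "user_location": {
        "type": "approximate",
        "city": "San Francisco",   # illustrative location
        "country": "US",
    },
}
```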

class Mcp: …

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: str

A label for this MCP server, used to identify it in tool calls.

type: Literal["mcp"]

The type of the MCP tool. Always mcp.

allowed_tools: Optional[McpAllowedTools]

List of allowed tool names or a filter object.

Accepts one of the following:
List[str]

A string array of allowed tool names

class McpAllowedToolsMcpToolFilter: …

A filter object to specify which tools are allowed.

read_only: Optional[bool]

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: Optional[List[str]]

List of allowed tool names.

authorization: Optional[str]

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id: Optional[Literal["connector_dropbox", "connector_gmail", "connector_googlecalendar", 5 more]]

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
headers: Optional[Dict[str, str]]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval: Optional[McpRequireApproval]

Specify which of the MCP server's tools require approval.

Accepts one of the following:
class McpRequireApprovalMcpToolApprovalFilter: …

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always: Optional[McpRequireApprovalMcpToolApprovalFilterAlways]

A filter object specifying the tools that always require approval.

read_only: Optional[bool]

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: Optional[List[str]]

List of allowed tool names.

never: Optional[McpRequireApprovalMcpToolApprovalFilterNever]

A filter object specifying the tools that never require approval.

read_only: Optional[bool]

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: Optional[List[str]]

List of allowed tool names.

Literal["always", "never"]

Specify a single approval policy for all tools. One of always or never. When set to always, all tools will require approval. When set to never, all tools will not require approval.

Accepts one of the following:
"always"
"never"
server_description: Optional[str]

Optional description of the MCP server, used to provide more context.

server_url: Optional[str]

The URL for the MCP server. One of server_url or connector_id must be provided.
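Putting the fields above together, a minimal MCP tool definition might look like the following sketch. The server label, URL, and tool name are placeholders, not a real server:

```python
# Sketch of an MCP tool payload using the fields documented above.
# server_label, server_url, and the tool name are placeholders.
mcp_tool = {
    "type": "mcp",
    "server_label": "docs_search",
    "server_url": "https://example.com/mcp",  # one of server_url or connector_id is required
    "allowed_tools": {
        "tool_names": ["search_docs"],  # restrict the model to specific tools
        "read_only": True,              # only tools annotated with readOnlyHint match
    },
    "require_approval": "never",        # or "always", or a per-tool filter object
}
```

Using a `connector_id` (e.g. `"connector_dropbox"`) in place of `server_url` would instead target a service connector, with the OAuth token supplied via `authorization`.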

class CodeInterpreter: …

A tool that runs Python code to help generate a response to a prompt.

container: CodeInterpreterContainer

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

Accepts one of the following:
str

The container ID.

class CodeInterpreterContainerCodeInterpreterToolAuto: …

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: Literal["auto"]

Always auto.

file_ids: Optional[List[str]]

An optional list of uploaded files to make available to your code.

memory_limit: Optional[Literal["1g", "4g", "16g", "64g"]]

The memory limit for the code interpreter container.

Accepts one of the following:
"1g"
"4g"
"16g"
"64g"
type: Literal["code_interpreter"]

The type of the code interpreter tool. Always code_interpreter.
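As documented above, `container` accepts either a plain container ID string or an auto-configuration object. A sketch of both forms, with placeholder IDs:

```python
# Form 1: reference an existing container by ID (placeholder value).
container_by_id = "cntr_abc123"

# Form 2: auto-configured container with optional file IDs and a memory limit.
code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],  # placeholder uploaded-file ID
        "memory_limit": "4g",         # one of "1g", "4g", "16g", "64g"
    },
}
```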

class ImageGeneration: …

A tool that generates images using the GPT image models.

type: Literal["image_generation"]

The type of the image generation tool. Always image_generation.

action: Optional[Literal["generate", "edit", "auto"]]

Whether to generate a new image or edit an existing image. Default: auto.

Accepts one of the following:
"generate"
"edit"
"auto"
background: Optional[Literal["transparent", "opaque", "auto"]]

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

Accepts one of the following:
"transparent"
"opaque"
"auto"
input_fidelity: Optional[Literal["high", "low"]]

Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.

Accepts one of the following:
"high"
"low"
input_image_mask: Optional[ImageGenerationInputImageMask]

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id: Optional[str]

File ID for the mask image.

image_url: Optional[str]

Base64-encoded mask image.

model: Optional[Union[str, Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"], null]]

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
str
Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"]

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation: Optional[Literal["auto", "low"]]

Moderation level for the generated image. Default: auto.

Accepts one of the following:
"auto"
"low"
output_compression: Optional[int]

Compression level for the output image. Default: 100.

minimum: 0
maximum: 100
output_format: Optional[Literal["png", "webp", "jpeg"]]

The output format of the generated image. One of png, webp, or jpeg. Default: png.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_images: Optional[int]

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum: 0
maximum: 3
quality: Optional[Literal["low", "medium", "high", "auto"]]

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size: Optional[Literal["1024x1024", "1024x1536", "1536x1024", "auto"]]

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
class LocalShell: …

A tool that allows the model to execute shell commands in a local environment.

type: Literal["local_shell"]

The type of the local shell tool. Always local_shell.

class FunctionShellTool: …

A tool that allows the model to execute shell commands.

type: Literal["shell"]

The type of the shell tool. Always shell.

class CustomTool: …

A custom tool that processes input using a specified format. Learn more about custom tools.

name: str

The name of the custom tool, used to identify it in tool calls.

type: Literal["custom"]

The type of the custom tool. Always custom.

description: Optional[str]

Optional description of the custom tool, used to provide more context.

format: Optional[CustomToolInputFormat]

The input format for the custom tool. Default is unconstrained text.

Accepts one of the following:
class Text: …

Unconstrained free-form text.

type: Literal["text"]

Unconstrained text format. Always text.

class Grammar: …

A grammar defined by the user.

definition: str

The grammar definition.

syntax: Literal["lark", "regex"]

The syntax of the grammar definition. One of lark or regex.

Accepts one of the following:
"lark"
"regex"
type: Literal["grammar"]

Grammar format. Always grammar.
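For example, a custom tool can constrain its input with a regex grammar. The tool name and pattern below are illustrative, not part of the API:

```python
# Sketch of a custom tool whose input is constrained by a regex grammar.
# The name, description, and pattern are illustrative placeholders.
custom_tool = {
    "type": "custom",
    "name": "set_priority",
    "description": "Set a ticket priority from 1 to 5.",
    "format": {
        "type": "grammar",
        "syntax": "regex",        # "lark" or "regex"
        "definition": r"[1-5]",   # the model's tool input must match this pattern
    },
}
```

Omitting `format` leaves the tool input as unconstrained free-form text (the `text` format).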

class WebSearchPreviewTool: …

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: Literal["web_search_preview", "web_search_preview_2025_03_11"]

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

Accepts one of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_context_size: Optional[Literal["low", "medium", "high"]]

High-level guidance for the amount of context window space to use for the search. One of low, medium, or high; medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location: Optional[UserLocation]

The user's location.

type: Literal["approximate"]

The type of location approximation. Always approximate.

city: Optional[str]

Free text input for the city of the user, e.g. San Francisco.

country: Optional[str]

The two-letter ISO country code of the user, e.g. US.

region: Optional[str]

Free text input for the region of the user, e.g. California.

timezone: Optional[str]

The IANA timezone of the user, e.g. America/Los_Angeles.
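A web search tool with an approximate user location, assembled from the fields above (the location values are the reference's own examples):

```python
# Sketch of a web_search_preview tool payload with an approximate user location.
web_search_tool = {
    "type": "web_search_preview",
    "search_context_size": "medium",   # low | medium | high (default medium)
    "user_location": {
        "type": "approximate",         # always "approximate"
        "city": "San Francisco",
        "country": "US",               # two-letter ISO country code
        "region": "California",
        "timezone": "America/Los_Angeles",  # IANA timezone
    },
}
```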

class ApplyPatchTool: …

Allows the assistant to create, delete, or update files using unified diffs.

type: Literal["apply_patch"]

The type of the tool. Always apply_patch.

top_p: Optional[float]

An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

metadata: Optional[Metadata]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

name: Optional[str]

The name of the run.

ReturnsExpand Collapse
class RunCreateResponse: …

A schema representing an evaluation run.

id: str

Unique identifier for the evaluation run.

created_at: int

Unix timestamp (in seconds) when the evaluation run was created.

data_source: DataSource

Information about the run's data source.

Accepts one of the following:
class CreateEvalJSONLRunDataSource: …

A JsonlRunDataSource object that specifies a JSONL file matching the eval's schema.

source: Source

Determines what populates the item namespace in the data source.

Accepts one of the following:
class SourceFileContent: …
content: List[SourceFileContentContent]

The content of the jsonl file.

item: Dict[str, object]
sample: Optional[Dict[str, object]]
type: Literal["file_content"]

The type of jsonl source. Always file_content.

class SourceFileID: …
id: str

The identifier of the file.

type: Literal["file_id"]

The type of jsonl source. Always file_id.

type: Literal["jsonl"]

The type of data source. Always jsonl.
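A JSONL data source can carry its rows inline via `file_content`, as in this sketch; each entry's `item` dict populates the item namespace for that row (the questions are placeholders):

```python
# Sketch of a "jsonl" data source with inline file content.
# Each entry's `item` dict (and optional `sample`) populates the item namespace.
data_source = {
    "type": "jsonl",
    "source": {
        "type": "file_content",
        "content": [
            {"item": {"question": "What is 2 + 2?", "answer": "4"}},
            {"item": {"question": "Capital of France?", "answer": "Paris"}},
        ],
    },
}

# Alternatively, reference an uploaded file by ID (placeholder value):
data_source_by_file = {
    "type": "jsonl",
    "source": {"type": "file_id", "id": "file-abc123"},
}
```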

class CreateEvalCompletionsRunDataSource: …

A CompletionsRunDataSource object describing a model sampling configuration.

source: Source

Determines what populates the item namespace in this run's data source.

Accepts one of the following:
class SourceFileContent: …
content: List[SourceFileContentContent]

The content of the jsonl file.

item: Dict[str, object]
sample: Optional[Dict[str, object]]
type: Literal["file_content"]

The type of jsonl source. Always file_content.

class SourceFileID: …
id: str

The identifier of the file.

type: Literal["file_id"]

The type of jsonl source. Always file_id.

class SourceStoredCompletions: …

A StoredCompletionsRunDataSource configuration describing a set of filters.

type: Literal["stored_completions"]

The type of source. Always stored_completions.

created_after: Optional[int]

An optional Unix timestamp to filter items created after this time.

created_before: Optional[int]

An optional Unix timestamp to filter items created before this time.

limit: Optional[int]

An optional maximum number of items to return.

metadata: Optional[Metadata]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model: Optional[str]

An optional model to filter by (e.g., 'gpt-4o').

type: Literal["completions"]

The type of run data source. Always completions.

input_messages: Optional[InputMessages]

Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.

Accepts one of the following:
class InputMessagesTemplate: …
template: List[InputMessagesTemplateTemplate]

A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.

Accepts one of the following:
class EasyInputMessage: …

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: Union[str, ResponseInputMessageContentList]

Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.

Accepts one of the following:
str

A text input to the model.

Accepts one of the following:
class ResponseInputText: …

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class ResponseInputImage: …

An image input to the model. Learn about image inputs.

detail: Literal["low", "high", "auto"]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

Accepts one of the following:
"low"
"high"
"auto"
type: Literal["input_image"]

The type of the input item. Always input_image.

file_id: Optional[str]

The ID of the file to be sent to the model.

image_url: Optional[str]

The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.

class ResponseInputFile: …

A file input to the model.

type: Literal["input_file"]

The type of the input item. Always input_file.

file_data: Optional[str]

The content of the file to be sent to the model.

file_id: Optional[str]

The ID of the file to be sent to the model.

file_url: Optional[str]

The URL of the file to be sent to the model.

filename: Optional[str]

The name of the file to be sent to the model.

role: Literal["user", "assistant", "system", "developer"]

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type: Optional[Literal["message"]]

The type of the message input. Always message.

class InputMessagesTemplateTemplateEvalItem: …

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: InputMessagesTemplateTemplateEvalItemContent

Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.

Accepts one of the following:
str

A text input to the model.

class ResponseInputText: …

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class InputMessagesTemplateTemplateEvalItemContentOutputText: …

A text output from the model.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

class InputMessagesTemplateTemplateEvalItemContentInputImage: …

An image input block used within EvalItem content arrays.

image_url: str

The URL of the image input.

type: Literal["input_image"]

The type of the image input. Always input_image.

detail: Optional[str]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio: …

An audio input to the model.

input_audio: InputAudio
data: str

Base64-encoded audio data.

format: Literal["mp3", "wav"]

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
"mp3"
"wav"
type: Literal["input_audio"]

The type of the input item. Always input_audio.

List[GraderInputItem]
Accepts one of the following:
str

A text input to the model.

class ResponseInputText: …

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class GraderInputItemOutputText: …

A text output from the model.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

class GraderInputItemInputImage: …

An image input block used within EvalItem content arrays.

image_url: str

The URL of the image input.

type: Literal["input_image"]

The type of the image input. Always input_image.

detail: Optional[str]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio: …

An audio input to the model.

input_audio: InputAudio
data: str

Base64-encoded audio data.

format: Literal["mp3", "wav"]

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
"mp3"
"wav"
type: Literal["input_audio"]

The type of the input item. Always input_audio.

role: Literal["user", "assistant", "system", "developer"]

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type: Optional[Literal["message"]]

The type of the message input. Always message.

type: Literal["template"]

The type of input messages. Always template.

class InputMessagesItemReference: …
item_reference: str

A reference to a variable in the item namespace, e.g. "item.input_trajectory".

type: Literal["item_reference"]

The type of input messages. Always item_reference.
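The two input_messages forms above can be sketched as follows; the system prompt and the `{{item.question}}` reference assume a data source whose items carry a `question` field:

```python
# Template form: chat messages with {{item.*}} references, resolved per data row.
input_messages_template = {
    "type": "template",
    "template": [
        {"role": "system", "content": "Answer concisely."},
        {"role": "user", "content": "{{item.question}}"},  # filled from each item
    ],
}

# Reference form: reuse a prebuilt trajectory stored on each item.
input_messages_reference = {
    "type": "item_reference",
    "item_reference": "item.input_trajectory",
}
```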

model: Optional[str]

The name of the model to use for generating completions (e.g. "o3-mini").

sampling_params: Optional[SamplingParams]
max_completion_tokens: Optional[int]

The maximum number of tokens in the generated output.

reasoning_effort: Optional[ReasoningEffort]

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
response_format: Optional[SamplingParamsResponseFormat]

An object specifying the format that the model must output.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
class ResponseFormatText: …

Default response format. Used to generate text responses.

type: Literal["text"]

The type of response format being defined. Always text.

class ResponseFormatJSONSchema: …

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

json_schema: JSONSchema

Structured Outputs configuration options, including a JSON Schema.

name: str

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: Optional[str]

A description of what the response format is for, used by the model to determine how to respond in the format.

schema: Optional[Dict[str, object]]

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

strict: Optional[bool]

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

type: Literal["json_schema"]

The type of response format being defined. Always json_schema.

class ResponseFormatJSONObject: …

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: Literal["json_object"]

The type of response format being defined. Always json_object.
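A `json_schema` response format built from the fields above; the schema name and shape are illustrative:

```python
# Sketch of a json_schema response format enabling Structured Outputs.
# The name and schema are illustrative placeholders.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "answer",   # a-z, A-Z, 0-9, underscores, dashes; max length 64
        "strict": True,     # model must follow the schema exactly
        "schema": {
            "type": "object",
            "properties": {"answer": {"type": "string"}},
            "required": ["answer"],
            "additionalProperties": False,
        },
    },
}
```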

seed: Optional[int]

A seed value to initialize randomness during sampling.

temperature: Optional[float]

A higher temperature increases randomness in the outputs.

tools: Optional[List[ChatCompletionFunctionTool]]

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

name: str

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: Optional[str]

A description of what the function does, used by the model to choose when and how to call the function.

parameters: Optional[FunctionParameters]

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

strict: Optional[bool]

Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.

type: Literal["function"]

The type of the tool. Currently, only function is supported.

top_p: Optional[float]

An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

class DataSourceResponses: …

A ResponsesRunDataSource object describing a model sampling configuration.

source: DataSourceResponsesSource

Determines what populates the item namespace in this run's data source.

Accepts one of the following:
class DataSourceResponsesSourceFileContent: …
content: List[DataSourceResponsesSourceFileContentContent]

The content of the jsonl file.

item: Dict[str, object]
sample: Optional[Dict[str, object]]
type: Literal["file_content"]

The type of jsonl source. Always file_content.

class DataSourceResponsesSourceFileID: …
id: str

The identifier of the file.

type: Literal["file_id"]

The type of jsonl source. Always file_id.

class DataSourceResponsesSourceResponses: …

An EvalResponsesSource object describing a run data source configuration.

type: Literal["responses"]

The type of run data source. Always responses.

created_after: Optional[int]

Only include items created after this timestamp (inclusive). This is a query parameter used to select responses.

minimum: 0
created_before: Optional[int]

Only include items created before this timestamp (inclusive). This is a query parameter used to select responses.

minimum: 0
metadata: Optional[object]

Metadata filter for the responses. This is a query parameter used to select responses.

model: Optional[str]

The name of the model to find responses for. This is a query parameter used to select responses.

reasoning_effort: Optional[ReasoningEffort]

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
temperature: Optional[float]

Sampling temperature. This is a query parameter used to select responses.

tools: Optional[List[str]]

List of tool names. This is a query parameter used to select responses.

top_p: Optional[float]

Nucleus sampling parameter. This is a query parameter used to select responses.

users: Optional[List[str]]

List of user identifiers. This is a query parameter used to select responses.
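Every field on this source besides `type` is a query filter used to select previously stored responses. A sketch, with placeholder values:

```python
# Sketch of a "responses" source: fields are query filters over stored responses.
# The model name, timestamp, and user ID are placeholders.
responses_source = {
    "type": "responses",
    "model": "gpt-4o",             # only responses generated by this model
    "created_after": 1735689600,   # inclusive Unix timestamp (placeholder)
    "temperature": 0.7,            # only responses sampled at this temperature
    "users": ["user_123"],         # placeholder user identifier
}
```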

type: Literal["responses"]

The type of run data source. Always responses.

input_messages: Optional[DataSourceResponsesInputMessages]

Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.

Accepts one of the following:
class DataSourceResponsesInputMessagesTemplate: …
template: List[DataSourceResponsesInputMessagesTemplateTemplate]

A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.

Accepts one of the following:
class DataSourceResponsesInputMessagesTemplateTemplateChatMessage: …
content: str

The content of the message.

role: str

The role of the message (e.g. "system", "assistant", "user").

class DataSourceResponsesInputMessagesTemplateTemplateEvalItem: …

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

content: DataSourceResponsesInputMessagesTemplateTemplateEvalItemContent

Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.

Accepts one of the following:
str

A text input to the model.

class ResponseInputText: …

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentOutputText: …

A text output from the model.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

class DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentInputImage: …

An image input block used within EvalItem content arrays.

image_url: str

The URL of the image input.

type: Literal["input_image"]

The type of the image input. Always input_image.

detail: Optional[str]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio: …

An audio input to the model.

input_audio: InputAudio
data: str

Base64-encoded audio data.

format: Literal["mp3", "wav"]

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
"mp3"
"wav"
type: Literal["input_audio"]

The type of the input item. Always input_audio.

List[GraderInputItem]
Accepts one of the following:
str

A text input to the model.

class ResponseInputText: …

A text input to the model.

text: str

The text input to the model.

type: Literal["input_text"]

The type of the input item. Always input_text.

class GraderInputItemOutputText: …

A text output from the model.

text: str

The text output from the model.

type: Literal["output_text"]

The type of the output text. Always output_text.

class GraderInputItemInputImage: …

An image input block used within EvalItem content arrays.

image_url: str

The URL of the image input.

type: Literal["input_image"]

The type of the image input. Always input_image.

detail: Optional[str]

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio: …

An audio input to the model.

input_audio: InputAudio
data: str

Base64-encoded audio data.

format: Literal["mp3", "wav"]

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
"mp3"
"wav"
type: Literal["input_audio"]

The type of the input item. Always input_audio.

role: Literal["user", "assistant", "system", "developer"]

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
"user"
"assistant"
"system"
"developer"
type: Optional[Literal["message"]]

The type of the message input. Always message.

type: Literal["template"]

The type of input messages. Always template.

class DataSourceResponsesInputMessagesItemReference: …
item_reference: str

A reference to a variable in the item namespace, e.g. "item.name".

type: Literal["item_reference"]

The type of input messages. Always item_reference.

model: Optional[str]

The name of the model to use for generating completions (e.g. "o3-mini").

sampling_params: Optional[DataSourceResponsesSamplingParams]
max_completion_tokens: Optional[int]

The maximum number of tokens in the generated output.

reasoning_effort: Optional[ReasoningEffort]

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
seed: Optional[int]

A seed value to initialize randomness during sampling.

temperature: Optional[float]

A higher temperature increases randomness in the outputs.

text: Optional[DataSourceResponsesSamplingParamsText]

Configuration options for a text response from the model. Can be plain text or structured JSON data.

format: Optional[ResponseFormatTextConfig]

An object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Accepts one of the following:
class ResponseFormatText: …

Default response format. Used to generate text responses.

type: Literal["text"]

The type of response format being defined. Always text.

class ResponseFormatTextJSONSchemaConfig: …

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

name: str

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

schema: Dict[str, object]

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

type: Literal["json_schema"]

The type of response format being defined. Always json_schema.

description: Optional[str]

A description of what the response format is for, used by the model to determine how to respond in the format.

strict: Optional[bool]

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

class ResponseFormatJSONObject: …

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: Literal["json_object"]

The type of response format being defined. Always json_object.
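Putting the fields above together, a `json_schema` text format might look like the following sketch. The format name and the schema itself are made up for illustration:

```python
# A sketch of a Structured Outputs text format config, using the
# documented fields (type, name, description, strict, schema).
# The "sentiment_label" name and schema are hypothetical examples.
text_format = {
    "type": "json_schema",
    "name": "sentiment_label",  # a-z, A-Z, 0-9, underscores/dashes; max 64 chars
    "description": "Classify the sentiment of the input text.",
    "strict": True,  # enforce exact adherence to the schema below
    "schema": {
        "type": "object",
        "properties": {
            "sentiment": {
                "type": "string",
                "enum": ["positive", "negative", "neutral"],
            },
        },
        "required": ["sentiment"],
        "additionalProperties": False,
    },
}
```

With `strict` enabled, only the subset of JSON Schema supported by Structured Outputs is allowed, so keep the schema simple.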

tools: Optional[List[Tool]]

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code. Learn more about function calling.
Accepts one of the following:
class FunctionTool: …

Defines a function in your own code the model can choose to call. Learn more about function calling.

name: str

The name of the function to call.

parameters: Optional[Dict[str, object]]

A JSON schema object describing the parameters of the function.

strict: Optional[bool]

Whether to enforce strict parameter validation. Default true.

type: Literal["function"]

The type of the function tool. Always function.

description: Optional[str]

A description of the function. Used by the model to determine whether or not to call the function.
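As a sketch, a FunctionTool entry in the `tools` array could look like this; the `get_weather` function and its parameters are hypothetical:

```python
# Sketch of a FunctionTool definition. The function name, description,
# and parameter schema are illustrative, not a real API function.
get_weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "strict": True,  # strict parameter validation (the documented default)
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
}
```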

class FileSearchTool: …

A tool that searches for relevant content from uploaded files. Learn more about the file search tool.

type: Literal["file_search"]

The type of the file search tool. Always file_search.

vector_store_ids: List[str]

The IDs of the vector stores to search.

filters: Optional[Filters]

A filter to apply.

Accepts one of the following:
class ComparisonFilter: …

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: str

The key to compare against the value.

type: Literal["eq", "ne", "gt", "gte", "lt", "lte"]

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: Union[str, float, bool, List[Union[str, float]]]

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
str
float
bool
List[Union[str, float]]
Accepts one of the following:
str
float
class CompoundFilter: …

Combine multiple filters using and or or.

filters: List[Filter]

Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.

Accepts one of the following:
class ComparisonFilter: …

A filter used to compare a specified attribute key to a given value using a defined comparison operation.

key: str

The key to compare against the value.

type: Literal["eq", "ne", "gt", "gte", "lt", "lte"]

Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.

  • eq: equals
  • ne: not equal
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • in: in
  • nin: not in
Accepts one of the following:
"eq"
"ne"
"gt"
"gte"
"lt"
"lte"
value: Union[str, float, bool, List[Union[str, float]]]

The value to compare against the attribute key; supports string, number, or boolean types.

Accepts one of the following:
str
float
bool
List[Union[str, float]]
Accepts one of the following:
str
float
object
type: Literal["and", "or"]

Type of operation: and or or.

Accepts one of the following:
"and"
"or"
max_num_results: Optional[int]

The maximum number of results to return. This number should be between 1 and 50 inclusive.

ranking_options: Optional[RankingOptions]

Ranking options for search.

ranker: Optional[Literal["auto", "default-2024-11-15"]]

The ranker to use for the file search.

Accepts one of the following:
"auto"
"default-2024-11-15"
score_threshold: Optional[float]

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.
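Combining the fields above, a `file_search` tool that applies a compound filter might look like this sketch. The vector store ID and the attribute keys (`region`, `year`) are placeholders:

```python
# Sketch of a file_search tool with ranking options and a compound
# "and" filter over two comparisons. The vector store ID and the
# attribute keys are hypothetical.
file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_abc123"],  # placeholder vector store ID
    "max_num_results": 10,              # must be between 1 and 50 inclusive
    "ranking_options": {
        "ranker": "auto",
        "score_threshold": 0.5,  # 0..1; higher returns fewer, more relevant hits
    },
    "filters": {
        "type": "and",  # CompoundFilter combining two ComparisonFilters
        "filters": [
            {"type": "eq", "key": "region", "value": "us"},
            {"type": "gte", "key": "year", "value": 2023},
        ],
    },
}
```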

class ComputerTool: …

A tool that controls a virtual computer. Learn more about the computer tool.

display_height: int

The height of the computer display.

display_width: int

The width of the computer display.

environment: Literal["windows", "mac", "linux", "ubuntu", "browser"]

The type of computer environment to control.

Accepts one of the following:
"windows"
"mac"
"linux"
"ubuntu"
"browser"
type: Literal["computer_use_preview"]

The type of the computer use tool. Always computer_use_preview.
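A minimal ComputerTool entry, with illustrative display dimensions:

```python
# Sketch of a ComputerTool configuration; the dimensions are arbitrary
# example values.
computer_tool = {
    "type": "computer_use_preview",
    "display_width": 1280,
    "display_height": 800,
    "environment": "browser",  # one of: windows, mac, linux, ubuntu, browser
}
```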

class WebSearchTool: …

Search the Internet for sources related to the prompt. Learn more about the web search tool.

type: Literal["web_search", "web_search_2025_08_26"]

The type of the web search tool. One of web_search or web_search_2025_08_26.

Accepts one of the following:
"web_search"
"web_search_2025_08_26"
filters: Optional[Filters]

Filters for the search.

allowed_domains: Optional[List[str]]

Allowed domains for the search. If not provided, all domains are allowed. Subdomains of the provided domains are allowed as well.

Example: ["pubmed.ncbi.nlm.nih.gov"]

search_context_size: Optional[Literal["low", "medium", "high"]]

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location: Optional[UserLocation]

The approximate location of the user.

city: Optional[str]

Free text input for the city of the user, e.g. San Francisco.

country: Optional[str]

The two-letter ISO country code of the user, e.g. US.

region: Optional[str]

Free text input for the region of the user, e.g. California.

timezone: Optional[str]

The IANA timezone of the user, e.g. America/Los_Angeles.

type: Optional[Literal["approximate"]]

The type of location approximation. Always approximate.
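The web search fields above can be combined as in this sketch; the allowed domain and location values are examples:

```python
# Sketch of a web_search tool restricted to a single domain, with an
# approximate user location. Domain and location values are examples.
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",  # the documented default
    "filters": {
        # subdomains of listed domains are allowed as well
        "allowed_domains": ["pubmed.ncbi.nlm.nih.gov"],
    },
    "user_location": {
        "type": "approximate",
        "city": "San Francisco",
        "region": "California",
        "country": "US",  # two-letter ISO country code
        "timezone": "America/Los_Angeles",  # IANA timezone
    },
}
```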

class Mcp: …

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.

server_label: str

A label for this MCP server, used to identify it in tool calls.

type: Literal["mcp"]

The type of the MCP tool. Always mcp.

allowed_tools: Optional[McpAllowedTools]

List of allowed tool names or a filter object.

Accepts one of the following:
List[str]

A string array of allowed tool names

class McpAllowedToolsMcpToolFilter: …

A filter object to specify which tools are allowed.

read_only: Optional[bool]

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: Optional[List[str]]

List of allowed tool names.

authorization: Optional[str]

An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.

connector_id: Optional[Literal["connector_dropbox", "connector_gmail", "connector_googlecalendar", "connector_googledrive", "connector_microsoftteams", "connector_outlookcalendar", "connector_outlookemail", "connector_sharepoint"]]

Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.

Currently supported connector_id values are:

  • Dropbox: connector_dropbox
  • Gmail: connector_gmail
  • Google Calendar: connector_googlecalendar
  • Google Drive: connector_googledrive
  • Microsoft Teams: connector_microsoftteams
  • Outlook Calendar: connector_outlookcalendar
  • Outlook Email: connector_outlookemail
  • SharePoint: connector_sharepoint
Accepts one of the following:
"connector_dropbox"
"connector_gmail"
"connector_googlecalendar"
"connector_googledrive"
"connector_microsoftteams"
"connector_outlookcalendar"
"connector_outlookemail"
"connector_sharepoint"
headers: Optional[Dict[str, str]]

Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.

require_approval: Optional[McpRequireApproval]

Specify which of the MCP server's tools require approval.

Accepts one of the following:
class McpRequireApprovalMcpToolApprovalFilter: …

Specify which of the MCP server's tools require approval. Can be always, never, or a filter object associated with tools that require approval.

always: Optional[McpRequireApprovalMcpToolApprovalFilterAlways]

A filter object to specify which tools are allowed.

read_only: Optional[bool]

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: Optional[List[str]]

List of allowed tool names.

never: Optional[McpRequireApprovalMcpToolApprovalFilterNever]

A filter object to specify which tools are allowed.

read_only: Optional[bool]

Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter.

tool_names: Optional[List[str]]

List of allowed tool names.

Literal["always", "never"]

Specify a single approval policy for all tools. One of always or never. When set to always, all tools will require approval. When set to never, all tools will not require approval.

Accepts one of the following:
"always"
"never"
server_description: Optional[str]

Optional description of the MCP server, used to provide more context.

server_url: Optional[str]

The URL for the MCP server. One of server_url or connector_id must be provided.
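Tying the MCP fields together, one sketch of a server-URL-based configuration (the label and URL are hypothetical):

```python
# Sketch of an MCP tool entry pointing at a hypothetical server URL.
# Exactly one of server_url or connector_id must be provided; this
# example uses server_url.
mcp_tool = {
    "type": "mcp",
    "server_label": "docs-server",            # label used to identify it in tool calls
    "server_url": "https://example.com/mcp",  # hypothetical MCP server URL
    "allowed_tools": {"read_only": True},     # filter-object form: read-only tools only
    "require_approval": "never",              # or "always", or a per-tool filter object
}
```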

class CodeInterpreter: …

A tool that runs Python code to help generate a response to a prompt.

container: CodeInterpreterContainer

The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.

Accepts one of the following:
str

The container ID.

class CodeInterpreterContainerCodeInterpreterToolAuto: …

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.

type: Literal["auto"]

Always auto.

file_ids: Optional[List[str]]

An optional list of uploaded files to make available to your code.

memory_limit: Optional[Literal["1g", "4g", "16g", "64g"]]

The memory limit for the code interpreter container.

Accepts one of the following:
"1g"
"4g"
"16g"
"64g"
type: Literal["code_interpreter"]

The type of the code interpreter tool. Always code_interpreter.
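The container can be given as a bare ID string or as the `auto` object described above. A sketch of the object form, with a placeholder file ID:

```python
# Sketch of a code_interpreter tool using the "auto" container form.
# The file ID is a placeholder for a previously uploaded file.
code_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],  # placeholder uploaded file ID
        "memory_limit": "4g",         # one of: 1g, 4g, 16g, 64g
    },
}
```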

class ImageGeneration: …

A tool that generates images using the GPT image models.

type: Literal["image_generation"]

The type of the image generation tool. Always image_generation.

action: Optional[Literal["generate", "edit", "auto"]]

Whether to generate a new image or edit an existing image. Default: auto.

Accepts one of the following:
"generate"
"edit"
"auto"
background: Optional[Literal["transparent", "opaque", "auto"]]

Background type for the generated image. One of transparent, opaque, or auto. Default: auto.

Accepts one of the following:
"transparent"
"opaque"
"auto"
input_fidelity: Optional[Literal["high", "low"]]

Control how much effort the model will exert to match the style and features, especially facial features, of input images. Supported for gpt-image-1 and for gpt-image-1.5 and later models; not supported for gpt-image-1-mini. Supports high and low. Defaults to low.

Accepts one of the following:
"high"
"low"
input_image_mask: Optional[ImageGenerationInputImageMask]

Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).

file_id: Optional[str]

File ID for the mask image.

image_url: Optional[str]

Base64-encoded mask image.

model: Optional[Union[str, Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"]]]

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
str
Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"]

The image generation model to use. Default: gpt-image-1.

Accepts one of the following:
"gpt-image-1"
"gpt-image-1-mini"
"gpt-image-1.5"
moderation: Optional[Literal["auto", "low"]]

Moderation level for the generated image. Default: auto.

Accepts one of the following:
"auto"
"low"
output_compression: Optional[int]

Compression level for the output image. Default: 100.

minimum: 0
maximum: 100
output_format: Optional[Literal["png", "webp", "jpeg"]]

The output format of the generated image. One of png, webp, or jpeg. Default: png.

Accepts one of the following:
"png"
"webp"
"jpeg"
partial_images: Optional[int]

Number of partial images to generate in streaming mode, from 0 (default value) to 3.

minimum: 0
maximum: 3
quality: Optional[Literal["low", "medium", "high", "auto"]]

The quality of the generated image. One of low, medium, high, or auto. Default: auto.

Accepts one of the following:
"low"
"medium"
"high"
"auto"
size: Optional[Literal["1024x1024", "1024x1536", "1536x1024", "auto"]]

The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto.

Accepts one of the following:
"1024x1024"
"1024x1536"
"1536x1024"
"auto"
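An image_generation tool with the options above spelled out; each value is one of the documented choices, picked arbitrarily for illustration:

```python
# Sketch of an image_generation tool configuration. Every value shown
# is one of the documented options, chosen only as an example.
image_tool = {
    "type": "image_generation",
    "model": "gpt-image-1",     # the documented default
    "action": "generate",       # generate | edit | auto
    "quality": "high",          # low | medium | high | auto
    "size": "1024x1024",        # 1024x1024 | 1024x1536 | 1536x1024 | auto
    "output_format": "png",     # png | webp | jpeg
    "output_compression": 100,  # 0..100
    "partial_images": 0,        # 0 (default) to 3; streaming mode only
}
```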
class LocalShell: …

A tool that allows the model to execute shell commands in a local environment.

type: Literal["local_shell"]

The type of the local shell tool. Always local_shell.

class FunctionShellTool: …

A tool that allows the model to execute shell commands.

type: Literal["shell"]

The type of the shell tool. Always shell.

class CustomTool: …

A custom tool that processes input using a specified format. Learn more about custom tools.

name: str

The name of the custom tool, used to identify it in tool calls.

type: Literal["custom"]

The type of the custom tool. Always custom.

description: Optional[str]

Optional description of the custom tool, used to provide more context.

format: Optional[CustomToolInputFormat]

The input format for the custom tool. Default is unconstrained text.

Accepts one of the following:
class Text: …

Unconstrained free-form text.

type: Literal["text"]

Unconstrained text format. Always text.

class Grammar: …

A grammar defined by the user.

definition: str

The grammar definition.

syntax: Literal["lark", "regex"]

The syntax of the grammar definition. One of lark or regex.

Accepts one of the following:
"lark"
"regex"
type: Literal["grammar"]

Grammar format. Always grammar.
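A custom tool whose input is constrained by a grammar might look like this sketch; the tool name and the regex pattern are hypothetical:

```python
# Sketch of a custom tool constrained by a regex grammar. The tool
# name, description, and pattern are illustrative examples.
custom_tool = {
    "type": "custom",
    "name": "extract_date",
    "description": "Return a single ISO date found in the input.",
    "format": {
        "type": "grammar",
        "syntax": "regex",                  # or "lark" for a Lark grammar
        "definition": r"\d{4}-\d{2}-\d{2}",
    },
}
```

Omitting `format` (or using `{"type": "text"}`) leaves the tool's input as unconstrained free-form text.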

class WebSearchPreviewTool: …

This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

type: Literal["web_search_preview", "web_search_preview_2025_03_11"]

The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.

Accepts one of the following:
"web_search_preview"
"web_search_preview_2025_03_11"
search_context_size: Optional[Literal["low", "medium", "high"]]

High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

Accepts one of the following:
"low"
"medium"
"high"
user_location: Optional[UserLocation]

The user's location.

type: Literal["approximate"]

The type of location approximation. Always approximate.

city: Optional[str]

Free text input for the city of the user, e.g. San Francisco.

country: Optional[str]

The two-letter ISO country code of the user, e.g. US.

region: Optional[str]

Free text input for the region of the user, e.g. California.

timezone: Optional[str]

The IANA timezone of the user, e.g. America/Los_Angeles.

class ApplyPatchTool: …

Allows the assistant to create, delete, or update files using unified diffs.

type: Literal["apply_patch"]

The type of the tool. Always apply_patch.

top_p: Optional[float]

An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. A value of 1.0 includes all tokens.

error: EvalAPIError

An object representing an error response from the Eval API.

code: str

The error code.

message: str

The error message.

eval_id: str

The identifier of the associated evaluation.

metadata: Optional[Metadata]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model: str

The model that is evaluated, if applicable.

name: str

The name of the evaluation run.

object: Literal["eval.run"]

The type of the object. Always "eval.run".

per_model_usage: List[PerModelUsage]

Usage statistics for each model during the evaluation run.

cached_tokens: int

The number of tokens retrieved from cache.

completion_tokens: int

The number of completion tokens generated.

invocation_count: int

The number of invocations.

model_name: str

The name of the model.

prompt_tokens: int

The number of prompt tokens used.

total_tokens: int

The total number of tokens used.

per_testing_criteria_results: List[PerTestingCriteriaResult]

Results per testing criteria applied during the evaluation run.

failed: int

Number of tests failed for this criterion.

passed: int

Number of tests passed for this criterion.

testing_criteria: str

A description of the testing criteria.

report_url: str

The URL to the rendered evaluation run report on the UI dashboard.

result_counts: ResultCounts

Counters summarizing the outcomes of the evaluation run.

errored: int

Number of output items that resulted in an error.

failed: int

Number of output items that failed to pass the evaluation.

passed: int

Number of output items that passed the evaluation.

total: int

Total number of executed output items.

status: str

The status of the evaluation run.

Create eval run

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
)
run = client.evals.runs.create(
    eval_id="eval_id",
    data_source={
        "source": {
            "content": [{
                "item": {
                    "foo": "bar"
                }
            }],
            "type": "file_content",
        },
        "type": "jsonl",
    },
)
print(run.id)
{
  "id": "id",
  "created_at": 0,
  "data_source": {
    "source": {
      "content": [
        {
          "item": {
            "foo": "bar"
          },
          "sample": {
            "foo": "bar"
          }
        }
      ],
      "type": "file_content"
    },
    "type": "jsonl"
  },
  "error": {
    "code": "code",
    "message": "message"
  },
  "eval_id": "eval_id",
  "metadata": {
    "foo": "string"
  },
  "model": "model",
  "name": "name",
  "object": "eval.run",
  "per_model_usage": [
    {
      "cached_tokens": 0,
      "completion_tokens": 0,
      "invocation_count": 0,
      "model_name": "model_name",
      "prompt_tokens": 0,
      "total_tokens": 0
    }
  ],
  "per_testing_criteria_results": [
    {
      "failed": 0,
      "passed": 0,
      "testing_criteria": "testing_criteria"
    }
  ],
  "report_url": "report_url",
  "result_counts": {
    "errored": 0,
    "failed": 0,
    "passed": 0,
    "total": 0
  },
  "status": "status"
}