Evals
Manage and run evals in the OpenAI platform.
Update an eval
Models
class EvalCustomDataSourceConfig: …A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces.
The response schema defines the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
The JSON schema for the run data source items. Learn how to build JSON schemas here.
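As an illustration, the pieces above can be assembled into a request-side payload. The field names `type`, `item_schema`, and `include_sample_schema` follow the public Evals API, but treat this as a hedged sketch rather than a verbatim request body:

```python
# Hedged sketch of a "custom" data source config. item_schema is ordinary
# JSON Schema describing one test item; include_sample_schema additionally
# exposes the sample namespace (model output) to graders.
data_source_config = {
    "type": "custom",
    "item_schema": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "expected_answer": {"type": "string"},
        },
        "required": ["question", "expected_answer"],
    },
    "include_sample_schema": True,
}
```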
class EvalStoredCompletionsDataSourceConfig: …Deprecated in favor of LogsDataSourceConfig.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
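Those limits (16 pairs, 64-character keys, 512-character values) can be checked client-side before a request is sent; `validate_metadata` below is a hypothetical helper, not part of any SDK:

```python
def validate_metadata(metadata: dict) -> None:
    """Raise ValueError if metadata exceeds the documented limits."""
    if len(metadata) > 16:
        raise ValueError("metadata allows at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"key longer than 64 characters: {key!r}")
        if len(value) > 512:
            raise ValueError(f"value longer than 512 characters for key {key!r}")

validate_metadata({"usecase": "chatbot", "prompt-version": "v2"})  # within limits
```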
class EvalListResponse: …An Eval object with a data source config and testing criteria.
An Eval represents a task to be done for your LLM integration.
Like:
- Improve the quality of my chatbot
- See how well my chatbot handles customer support
- Check if o4-mini is better at my use case than gpt-4o
data_source_config: DataSourceConfig
Configuration of data sources used in runs of the evaluation.
class EvalCustomDataSourceConfig: …A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces.
The response schema defines the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
The JSON schema for the run data source items. Learn how to build JSON schemas here.
class DataSourceConfigLogs: …A LogsDataSourceConfig which specifies the metadata property of your logs query.
This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
The schema returned by this data source config is used to define what variables are available in your evals.
item and sample are both defined when using this data source config.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
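A minimal sketch of such a config, filtering stored logs by the metadata mentioned above; the filter keys and values are illustrative, and the exact payload shape should be checked against the API reference:

```python
# Hedged sketch of a "logs" data source config; the metadata filter narrows
# which stored logs feed the eval.
data_source_config = {
    "type": "logs",
    "metadata": {
        "usecase": "chatbot",
        "prompt-version": "v2",
    },
}
```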
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
class EvalStoredCompletionsDataSourceConfig: …Deprecated in favor of LogsDataSourceConfig.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
testing_criteria: List[TestingCriterion]
A list of testing criteria.
class LabelModelGrader: …A LabelModelGrader object which uses a model to assign labels to each item
in the evaluation.
input: List[Input]
content: InputContent
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
List[GraderInputItem]
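A label_model testing criterion can be sketched as a plain payload; the field names below follow the public Evals API, while the grader name, grading model, and labels are made-up examples:

```python
# Hedged sketch of a label_model testing criterion. The grader model reads
# each item and assigns one of the allowed labels.
grader = {
    "type": "label_model",
    "name": "helpfulness_label",  # illustrative name
    "model": "gpt-4o-mini",       # the grading model, not the model under test
    "input": [
        {"role": "developer", "content": "Label the reply as helpful or unhelpful."},
        {"role": "user", "content": "{{sample.output_text}}"},
    ],
    "labels": ["helpful", "unhelpful"],
    "passing_labels": ["helpful"],
}
```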
class StringCheckGrader: …A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
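For intuition, the string-check operations (eq, ne, like, ilike) can be re-implemented locally. This assumes like/ilike mean case-sensitive and case-insensitive containment, which should be verified against the API reference; the hosted grader itself is configured declaratively rather than run client-side:

```python
# Local re-implementation of the string_check operations, for intuition only.
# Assumption: "like" is case-sensitive containment, "ilike" case-insensitive.
def string_check(input_text: str, reference: str, operation: str) -> bool:
    if operation == "eq":
        return input_text == reference
    if operation == "ne":
        return input_text != reference
    if operation == "like":
        return reference in input_text
    if operation == "ilike":
        return reference.lower() in input_text.lower()
    raise ValueError(f"unknown operation: {operation!r}")

# The hosted grader would instead receive a declarative config, e.g.:
grader = {
    "type": "string_check",
    "name": "exact_answer",  # illustrative name
    "input": "{{sample.output_text}}",
    "reference": "{{item.expected_answer}}",
    "operation": "eq",
}
```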
class TestingCriterionEvalGraderTextSimilarity: …A TextSimilarityGrader object which grades text based on similarity metrics.
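A text-similarity criterion compares sampled output against a reference using a named metric. The sketch below is illustrative; confirm the set of supported evaluation_metric values (for example "fuzzy_match" or "bleu") against the current API reference:

```python
# Hedged sketch of a text_similarity testing criterion.
grader = {
    "type": "text_similarity",
    "name": "answer_similarity",  # illustrative name
    "input": "{{sample.output_text}}",
    "reference": "{{item.expected_answer}}",
    "evaluation_metric": "fuzzy_match",  # assumed metric name; verify
    "pass_threshold": 0.8,
}
```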
class EvalCreateResponse: …An Eval object with a data source config and testing criteria.
An Eval represents a task to be done for your LLM integration.
Like:
- Improve the quality of my chatbot
- See how well my chatbot handles customer support
- Check if o4-mini is better at my use case than gpt-4o
data_source_config: DataSourceConfig
Configuration of data sources used in runs of the evaluation.
class EvalCustomDataSourceConfig: …A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces.
The response schema defines the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
The JSON schema for the run data source items. Learn how to build JSON schemas here.
class DataSourceConfigLogs: …A LogsDataSourceConfig which specifies the metadata property of your logs query.
This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
The schema returned by this data source config is used to define what variables are available in your evals.
item and sample are both defined when using this data source config.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
class EvalStoredCompletionsDataSourceConfig: …Deprecated in favor of LogsDataSourceConfig.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
testing_criteria: List[TestingCriterion]
A list of testing criteria.
class LabelModelGrader: …A LabelModelGrader object which uses a model to assign labels to each item
in the evaluation.
input: List[Input]
content: InputContent
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
List[GraderInputItem]
class StringCheckGrader: …A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
class TestingCriterionEvalGraderTextSimilarity: …A TextSimilarityGrader object which grades text based on similarity metrics.
class EvalRetrieveResponse: …An Eval object with a data source config and testing criteria.
An Eval represents a task to be done for your LLM integration.
Like:
- Improve the quality of my chatbot
- See how well my chatbot handles customer support
- Check if o4-mini is better at my use case than gpt-4o
data_source_config: DataSourceConfig
Configuration of data sources used in runs of the evaluation.
class EvalCustomDataSourceConfig: …A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces.
The response schema defines the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
The JSON schema for the run data source items. Learn how to build JSON schemas here.
class DataSourceConfigLogs: …A LogsDataSourceConfig which specifies the metadata property of your logs query.
This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
The schema returned by this data source config is used to define what variables are available in your evals.
item and sample are both defined when using this data source config.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
class EvalStoredCompletionsDataSourceConfig: …Deprecated in favor of LogsDataSourceConfig.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
testing_criteria: List[TestingCriterion]
A list of testing criteria.
class LabelModelGrader: …A LabelModelGrader object which uses a model to assign labels to each item
in the evaluation.
input: List[Input]
content: InputContent
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
List[GraderInputItem]
class StringCheckGrader: …A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
class TestingCriterionEvalGraderTextSimilarity: …A TextSimilarityGrader object which grades text based on similarity metrics.
class EvalUpdateResponse: …An Eval object with a data source config and testing criteria.
An Eval represents a task to be done for your LLM integration.
Like:
- Improve the quality of my chatbot
- See how well my chatbot handles customer support
- Check if o4-mini is better at my use case than gpt-4o
data_source_config: DataSourceConfig
Configuration of data sources used in runs of the evaluation.
class EvalCustomDataSourceConfig: …A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces.
The response schema defines the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
The JSON schema for the run data source items. Learn how to build JSON schemas here.
class DataSourceConfigLogs: …A LogsDataSourceConfig which specifies the metadata property of your logs query.
This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
The schema returned by this data source config is used to define what variables are available in your evals.
item and sample are both defined when using this data source config.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
class EvalStoredCompletionsDataSourceConfig: …Deprecated in favor of LogsDataSourceConfig.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
testing_criteria: List[TestingCriterion]
A list of testing criteria.
class LabelModelGrader: …A LabelModelGrader object which uses a model to assign labels to each item
in the evaluation.
input: List[Input]
content: InputContent
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
List[GraderInputItem]
class StringCheckGrader: …A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
class TestingCriterionEvalGraderTextSimilarity: …A TextSimilarityGrader object which grades text based on similarity metrics.
Evals Runs
Manage and run evals in the OpenAI platform.
Get eval runs
Create eval run
Get an eval run
Cancel eval run
Delete eval run
Models
class CreateEvalCompletionsRunDataSource: …A CompletionsRunDataSource object describing a model sampling configuration.
source: Source
Determines what populates the item namespace in this run’s data source.
class SourceStoredCompletions: …A StoredCompletionsRunDataSource configuration describing a set of filters
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
input_messages: Optional[InputMessages]
Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.
class InputMessagesTemplate: …
template: List[InputMessagesTemplateTemplate]
A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.
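To make the {{item.name}} convention concrete, here is a hypothetical local renderer for item-namespace references; the platform performs this substitution server-side, so this is purely illustrative:

```python
import re

def render(template: str, item: dict) -> str:
    """Substitute {{item.<key>}} references with values from `item`."""
    return re.sub(
        r"\{\{item\.(\w+)\}\}",
        lambda m: str(item[m.group(1)]),
        template,
    )

message = {"role": "user", "content": "Answer the question: {{item.question}}"}
rendered = render(message["content"], {"question": "What is 2+2?"})
# rendered == "Answer the question: What is 2+2?"
```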
class EasyInputMessage: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response.
Can also contain previous assistant responses.
List[ResponseInputContent]
class ResponseInputImage: …An image input to the model. Learn about image inputs.
role: Literal["user", "assistant", "system", "developer"]
The role of the message input. One of user, assistant, system, or developer.
phase: Optional[Literal["commentary", "final_answer"]]
Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages; dropping it can degrade performance. Not used for user messages.
class InputMessagesTemplateTemplateEvalItem: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
content: InputMessagesTemplateTemplateEvalItemContent
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class InputMessagesTemplateTemplateEvalItemContentInputImage: …An image input block used within EvalItem content arrays.
List[GraderInputItem]
sampling_params: Optional[SamplingParams]
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort. xhigh is supported for all models after gpt-5.1-codex-max.
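Putting the above together, a sampling_params payload might look like the sketch below; the reasoning-effort field name is inferred from the description above, so verify it against the current reference:

```python
# Illustrative sampling_params for a completions run data source.
# Assumption: the reasoning knob is named "reasoning_effort" (inferred
# from the field description, not confirmed here).
sampling_params = {
    "temperature": 0.0,            # keep grading runs near-deterministic
    "max_completion_tokens": 512,
    "seed": 42,
    "reasoning_effort": "low",
}
```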
response_format: Optional[SamplingParamsResponseFormat]
An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.
class ResponseFormatJSONSchema: …JSON Schema response format. Used to generate structured JSON responses.
Learn more about Structured Outputs.
json_schema: JSONSchema
Structured Outputs configuration options, including a JSON Schema.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the response format is for, used by the model to determine how to respond in the format.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
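The constraints above can be combined into a complete json_schema response format; the schema contents below are illustrative:

```python
import re

# Hedged sketch of a strict json_schema response format following the
# rules described above.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "grading_result",  # a-z, A-Z, 0-9, underscores, dashes; <= 64 chars
        "description": "Structured verdict emitted by the sampled model.",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "verdict": {"type": "string"},
                "score": {"type": "number"},
            },
            "required": ["verdict", "score"],
            "additionalProperties": False,  # strict mode allows only listed keys
        },
    },
}

# The name obeys the documented character-set and length rule.
assert re.fullmatch(r"[A-Za-z0-9_-]{1,64}", response_format["json_schema"]["name"])
```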
class RunListResponse: …A schema representing an evaluation run.
data_source: DataSource
Information about the run’s data source.
class CreateEvalJSONLRunDataSource: …A JsonlRunDataSource object that specifies a JSONL file that matches the eval
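A JSONL source can be built locally with the standard library; wrapping each row under an item key mirrors the item namespace described elsewhere on this page, though the exact file format should be checked against the Evals guide:

```python
import json

# Two illustrative items, each wrapped under an "item" key (assumed shape).
items = [
    {"item": {"question": "What is 2+2?", "expected_answer": "4"}},
    {"item": {"question": "Capital of France?", "expected_answer": "Paris"}},
]
jsonl_content = "\n".join(json.dumps(row) for row in items)

# Round-trip: every line parses back into one item record.
parsed = [json.loads(line) for line in jsonl_content.splitlines()]
```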
class CreateEvalCompletionsRunDataSource: …A CompletionsRunDataSource object describing a model sampling configuration.
source: Source
Determines what populates the item namespace in this run’s data source.
class SourceStoredCompletions: …A StoredCompletionsRunDataSource configuration describing a set of filters
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
input_messages: Optional[InputMessages]
Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.
class InputMessagesTemplate: …
template: List[InputMessagesTemplateTemplate]
A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.
class EasyInputMessage: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response.
Can also contain previous assistant responses.
List[ResponseInputContent]
class ResponseInputImage: …An image input to the model. Learn about image inputs.
role: Literal["user", "assistant", "system", "developer"]
The role of the message input. One of user, assistant, system, or developer.
phase: Optional[Literal["commentary", "final_answer"]]
Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages; dropping it can degrade performance. Not used for user messages.
class InputMessagesTemplateTemplateEvalItem: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
content: InputMessagesTemplateTemplateEvalItemContent
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class InputMessagesTemplateTemplateEvalItemContentInputImage: …An image input block used within EvalItem content arrays.
List[GraderInputItem]
sampling_params: Optional[SamplingParams]
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
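The per-model rules above can be applied programmatically when building sampling parameters. A minimal sketch, assuming a `reasoning_effort` field on the sampling params payload (the exact field name and defaults here are assumptions, not confirmed by this reference):

```python
def make_sampling_params(model: str) -> dict:
    """Pick a reasoning effort consistent with the per-model rules above."""
    if model == "gpt-5-pro":
        effort = "high"      # gpt-5-pro only supports high
    elif model.startswith("gpt-5.1"):
        effort = "none"      # gpt-5.1 defaults to none (no reasoning)
    else:
        effort = "medium"    # models before gpt-5.1 default to medium
    return {"temperature": 1.0, "reasoning_effort": effort}
```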
response_format: Optional[SamplingParamsResponseFormat]An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables
Structured Outputs which ensures the model will match your supplied JSON
schema. Learn more in the Structured Outputs
guide.
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
class ResponseFormatJSONSchema: …JSON Schema response format. Used to generate structured JSON responses.
Learn more about Structured Outputs.
json_schema: JSONSchemaStructured Outputs configuration options, including a JSON Schema.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the response format is for, used by the model to determine how to respond in the format.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
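Putting the fields above together, a minimal `response_format` payload enabling Structured Outputs might look like the following (the schema name and properties are illustrative):

```python
# Illustrative json_schema response format: name, optional description,
# a JSON Schema object, and strict adherence enabled.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "sentiment_label",  # a-z, A-Z, 0-9, underscores, dashes; max 64 chars
        "description": "Classify the sentiment of the input text.",
        "schema": {
            "type": "object",
            "properties": {
                "sentiment": {
                    "type": "string",
                    "enum": ["positive", "negative", "neutral"],
                }
            },
            "required": ["sentiment"],
            "additionalProperties": False,
        },
        # Only a subset of JSON Schema is supported when strict is true.
        "strict": True,
    },
}
```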
class DataSourceResponses: …A ResponsesRunDataSource object describing a model sampling configuration.
source: DataSourceResponsesSourceDetermines what populates the item namespace in this run’s data source.
class DataSourceResponsesSourceResponses: …An EvalResponsesSource object describing a run data source configuration.
Only include items created after this timestamp (inclusive). This is a query parameter used to select responses.
Only include items created before this timestamp (inclusive). This is a query parameter used to select responses.
Optional string to search the ‘instructions’ field. This is a query parameter used to select responses.
Metadata filter for the responses. This is a query parameter used to select responses.
The name of the model to find responses for. This is a query parameter used to select responses.
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
Sampling temperature. This is a query parameter used to select responses.
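The query-parameter fields above combine into a source payload that selects stored responses. A sketch, assuming snake_case field names matching the descriptions (the `instructions_search` name and metadata keys are assumptions):

```python
import time

# Select responses from the last week for one model; timestamps are inclusive.
one_week = 7 * 24 * 60 * 60
source = {
    "type": "responses",
    "model": "gpt-4o",                          # only responses from this model
    "created_after": int(time.time()) - one_week,
    "instructions_search": "customer support",  # substring match on 'instructions'
    "metadata": {"env": "prod"},                # metadata filter
    "temperature": 0.7,                         # sampling temperature filter
}
```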
input_messages: Optional[DataSourceResponsesInputMessages]Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (ie, item.input_trajectory), or a template with variable references to the item namespace.
class DataSourceResponsesInputMessagesTemplate: …
template: List[DataSourceResponsesInputMessagesTemplateTemplate]A list of chat messages forming the prompt or context. May include variable references to the item namespace, ie {{item.name}}.
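For example, a two-message template where `{{item.question}}` is resolved from the item namespace for each row at run time (the `question` field name is illustrative):

```python
# Minimal input_messages template with one variable reference.
input_messages = {
    "type": "template",
    "template": [
        {"role": "system", "content": "Answer concisely."},
        {"role": "user", "content": "Question: {{item.question}}"},
    ],
}
```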
class DataSourceResponsesInputMessagesTemplateTemplateEvalItem: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
content: DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentInputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentOutputText: …A text output from the model.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentInputImage: …An image input block used within EvalItem content arrays.
List[GraderInputItem]
sampling_params: Optional[DataSourceResponsesSamplingParams]
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
text: Optional[DataSourceResponsesSamplingParamsText]Configuration options for a text response from the model. Can be plain
text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
The two categories of tools you can provide the model are:
- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search
or file search. Learn more about
built-in tools.
- Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code. Learn more about
function calling.
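The two categories can be mixed in a single `tools` array. A sketch with one built-in tool and one function tool (the function name and parameters are hypothetical):

```python
# A tools array mixing a built-in web search tool with a custom function.
tools = [
    {"type": "web_search"},
    {
        "type": "function",
        "name": "get_order_status",  # hypothetical function defined by you
        "description": "Look up the status of an order by ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
]
```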
class FunctionTool: …Defines a function in your own code the model can choose to call. Learn more about function calling.
class FileSearchTool: …A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
filters: Optional[Filters]A filter to apply.
class ComparisonFilter: …A filter used to compare a specified attribute key to a given value using a defined comparison operation.
class CompoundFilter: …Combine multiple filters using and or or.
filters: List[Filter]Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
class ComparisonFilter: …A filter used to compare a specified attribute key to a given value using a defined comparison operation.
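A CompoundFilter nests ComparisonFilters under a boolean operator. A sketch, assuming the `{type, key, value}` comparison shape described above (the attribute keys are illustrative):

```python
# Match files whose "category" equals "finance" AND whose "year" is >= 2023.
file_filter = {
    "type": "and",
    "filters": [
        {"type": "eq", "key": "category", "value": "finance"},
        {"type": "gte", "key": "year", "value": 2023},
    ],
}
```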
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: Optional[RankingOptions]Ranking options for search.
class ComputerTool: …A tool that controls a virtual computer. Learn more about the computer tool.
class ComputerUsePreviewTool: …A tool that controls a virtual computer. Learn more about the computer tool.
class WebSearchTool: …Search the Internet for sources related to the prompt. Learn more about the
web search tool.
type: Literal["web_search", "web_search_2025_08_26"]The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: Optional[Literal["low", "medium", "high"]]High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: Optional[UserLocation]The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
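Assembled from the fields above, a web search tool configuration might look like this (the location values are illustrative, and the `"type": "approximate"` discriminator on the location object is an assumption):

```python
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",  # low | medium | high (medium is the default)
    "user_location": {
        "type": "approximate",
        "country": "US",                    # two-letter ISO country code
        "timezone": "America/Los_Angeles",  # IANA timezone
    },
}
```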
class Mcp: …Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
allowed_tools: Optional[McpAllowedTools]List of allowed tool names or a filter object.
class McpAllowedToolsMcpToolFilter: …A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: Optional[Literal["connector_dropbox", "connector_gmail", "connector_googlecalendar", 5 more]]Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: Optional[McpRequireApproval]Specify which of the MCP server’s tools require approval.
class McpRequireApprovalMcpToolApprovalFilter: …Specify which of the MCP server’s tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: Optional[McpRequireApprovalMcpToolApprovalFilterAlways]A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
never: Optional[McpRequireApprovalMcpToolApprovalFilterNever]A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
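A sketch of an MCP tool entry that uses a service connector and auto-approves read-only tools. The token value is a placeholder, and the `server_label` field and the exact approval-filter shape are assumptions based on the descriptions above:

```python
mcp_tool = {
    "type": "mcp",
    "server_label": "gmail",                  # assumed label field
    "connector_id": "connector_gmail",        # or provide server_url instead
    "authorization": "<oauth-access-token>",  # placeholder; your app handles OAuth
    "require_approval": {
        # Tools the MCP server annotates with readOnlyHint skip approval.
        "never": {"read_only": True},
    },
}
```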
class CodeInterpreter: …A tool that runs Python code to help generate a response to a prompt.
container: CodeInterpreterContainerThe code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
class CodeInterpreterContainerCodeInterpreterToolAuto: …Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
memory_limit: Optional[Literal["1g", "4g", "16g", "64g"]]The memory limit for the code interpreter container.
network_policy: Optional[CodeInterpreterContainerCodeInterpreterToolAutoNetworkPolicy]Network access policy for the container.
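For example, a code interpreter tool with an auto container that makes one uploaded file available and sets a memory limit (the file ID is a placeholder):

```python
code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],  # placeholder uploaded file ID
        "memory_limit": "4g",         # one of 1g, 4g, 16g, 64g
    },
}
```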
class ImageGeneration: …A tool that generates images using the GPT image models.
action: Optional[Literal["generate", "edit", "auto"]]Whether to generate a new image or edit an existing image. Default: auto.
background: Optional[Literal["transparent", "opaque", "auto"]]Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
input_fidelity: Optional[Literal["high", "low"]]Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: Optional[ImageGenerationInputImageMask]Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
model: Optional[Union[str, Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"], null]]The image generation model to use. Default: gpt-image-1.
moderation: Optional[Literal["auto", "low"]]Moderation level for the generated image. Default: auto.
Compression level for the output image. Default: 100.
output_format: Optional[Literal["png", "webp", "jpeg"]]The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
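An image generation tool configuration assembled from the fields above. The `output_compression` and `partial_images` field names are assumptions, since this reference describes those fields without naming them:

```python
image_tool = {
    "type": "image_generation",
    "model": "gpt-image-1.5",
    "action": "generate",       # generate | edit | auto (default: auto)
    "background": "transparent",
    "input_fidelity": "high",   # unsupported for gpt-image-1-mini
    "output_format": "png",
    "output_compression": 100,  # compression level (default: 100)
    "partial_images": 2,        # 0 (default) to 3, streaming mode only
}
```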
class FunctionShellTool: …A tool that allows the model to execute shell commands.
environment: Optional[Environment]
class ContainerAuto: …
network_policy: Optional[NetworkPolicy]Network access policy for the container.
class CustomTool: …A custom tool that processes input using a specified format. Learn more about custom tools
class NamespaceTool: …Groups function/custom tools under a shared namespace.
tools: List[Tool]The function/custom tools available inside this namespace.
class CustomTool: …A custom tool that processes input using a specified format. Learn more about custom tools
class ToolSearchTool: …Hosted or BYOT tool search configuration for deferred tools.
class WebSearchPreviewTool: …This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: Literal["web_search_preview", "web_search_preview_2025_03_11"]The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: Optional[Literal["low", "medium", "high"]]High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: Optional[UserLocation]The user’s location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
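The metadata limits above are easy to check client-side before sending a request. A small validator sketch (the helper name is illustrative, not part of the SDK):

```python
def validate_metadata(metadata: dict) -> None:
    """Raise ValueError if metadata exceeds the documented limits:
    at most 16 pairs, keys <= 64 chars, values <= 512 chars."""
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"metadata key too long: {key!r}")
        if len(str(value)) > 512:
            raise ValueError(f"metadata value too long for key {key!r}")

validate_metadata({"project": "chatbot-evals", "owner": "ml-team"})  # ok
```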
per_testing_criteria_results: List[PerTestingCriteriaResult]Results per testing criteria applied during the evaluation run.
class RunCreateResponse: …A schema representing an evaluation run.
data_source: DataSourceInformation about the run’s data source.
class CreateEvalJSONLRunDataSource: …A JsonlRunDataSource object that specifies a JSONL file that matches the eval
class CreateEvalCompletionsRunDataSource: …A CompletionsRunDataSource object describing a model sampling configuration.
source: SourceDetermines what populates the item namespace in this run’s data source.
class SourceStoredCompletions: …A StoredCompletionsRunDataSource configuration describing a set of filters
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
input_messages: Optional[InputMessages]Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (ie, item.input_trajectory), or a template with variable references to the item namespace.
class InputMessagesTemplate: …
template: List[InputMessagesTemplateTemplate]A list of chat messages forming the prompt or context. May include variable references to the item namespace, ie {{item.name}}.
class EasyInputMessage: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response.
Can also contain previous assistant responses.
List[ResponseInputContent]
class ResponseInputImage: …An image input to the model. Learn about image inputs.
role: Literal["user", "assistant", "system", "developer"]The role of the message input. One of user, assistant, system, or
developer.
phase: Optional[Literal["commentary", "final_answer"]]Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
class InputMessagesTemplateTemplateEvalItem: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
content: InputMessagesTemplateTemplateEvalItemContentInputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class InputMessagesTemplateTemplateEvalItemContentInputImage: …An image input block used within EvalItem content arrays.
List[GraderInputItem]
sampling_params: Optional[SamplingParams]
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
response_format: Optional[SamplingParamsResponseFormat]An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables
Structured Outputs which ensures the model will match your supplied JSON
schema. Learn more in the Structured Outputs
guide.
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
class ResponseFormatJSONSchema: …JSON Schema response format. Used to generate structured JSON responses.
Learn more about Structured Outputs.
json_schema: JSONSchemaStructured Outputs configuration options, including a JSON Schema.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the response format is for, used by the model to determine how to respond in the format.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
class DataSourceResponses: …A ResponsesRunDataSource object describing a model sampling configuration.
source: DataSourceResponsesSourceDetermines what populates the item namespace in this run’s data source.
class DataSourceResponsesSourceResponses: …An EvalResponsesSource object describing a run data source configuration.
Only include items created after this timestamp (inclusive). This is a query parameter used to select responses.
Only include items created before this timestamp (inclusive). This is a query parameter used to select responses.
Optional string to search the ‘instructions’ field. This is a query parameter used to select responses.
Metadata filter for the responses. This is a query parameter used to select responses.
The name of the model to find responses for. This is a query parameter used to select responses.
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
Sampling temperature. This is a query parameter used to select responses.
input_messages: Optional[DataSourceResponsesInputMessages]Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (ie, item.input_trajectory), or a template with variable references to the item namespace.
class DataSourceResponsesInputMessagesTemplate: …
template: List[DataSourceResponsesInputMessagesTemplateTemplate]A list of chat messages forming the prompt or context. May include variable references to the item namespace, ie {{item.name}}.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItem: …
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
content: DataSourceResponsesInputMessagesTemplateTemplateEvalItemContent
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentOutputText: …
A text output from the model.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentInputImage: …
An image input block used within EvalItem content arrays.
List[GraderInputItem]
sampling_params: Optional[DataSourceResponsesSamplingParams]
Constrains effort on reasoning for reasoning models.
Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
text: Optional[DataSourceResponsesSamplingParamsText]
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
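A sketch of a text.format object that enables Structured Outputs as described above; the schema contents (name, properties) are invented for illustration.

```python
# Sketch of text.format with { "type": "json_schema" } enabling
# Structured Outputs. Schema fields are illustrative.
text = {
    "format": {
        "type": "json_schema",
        "name": "grade",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {"passed": {"type": "boolean"}},
            "required": ["passed"],
            "additionalProperties": False,
        },
    }
}
```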
An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.
The two categories of tools you can provide the model are:
- Built-in tools: Tools that are provided by OpenAI that extend the model’s capabilities, like web search or file search. Learn more about built-in tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code. Learn more about function calling.
class FunctionTool: …
Defines a function in your own code the model can choose to call. Learn more about function calling.
class FileSearchTool: …
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
filters: Optional[Filters]
A filter to apply.
class ComparisonFilter: …
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
class CompoundFilter: …
Combine multiple filters using and or or.
filters: List[Filter]
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
class ComparisonFilter: …
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
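A sketch of a CompoundFilter combining two ComparisonFilters; the attribute keys and values are invented for illustration.

```python
# Sketch of a CompoundFilter: both comparisons must hold ("and").
file_search_filters = {
    "type": "and",
    "filters": [
        {"type": "eq", "key": "region", "value": "us"},
        {"type": "gte", "key": "year", "value": 2023},
    ],
}
```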
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: Optional[RankingOptions]
Ranking options for search.
class ComputerTool: …
A tool that controls a virtual computer. Learn more about the computer tool.
class ComputerUsePreviewTool: …
A tool that controls a virtual computer. Learn more about the computer tool.
class WebSearchTool: …
Search the Internet for sources related to the prompt. Learn more about the web search tool.
type: Literal["web_search", "web_search_2025_08_26"]
The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: Optional[Literal["low", "medium", "high"]]
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: Optional[UserLocation]
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
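A sketch of a web search tool entry with an approximate user location, using the fields listed above; the "approximate" location type is an assumption from the broader Responses API.

```python
# Sketch of a web_search tool with a user_location hint.
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",  # default
    "user_location": {
        "type": "approximate",
        "country": "US",                      # two-letter ISO country code
        "timezone": "America/Los_Angeles",    # IANA timezone
    },
}
```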
class Mcp: …
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
allowed_tools: Optional[McpAllowedTools]
List of allowed tool names or a filter object.
class McpAllowedToolsMcpToolFilter: …
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: Optional[Literal["connector_dropbox", "connector_gmail", "connector_googlecalendar", 5 more]]
Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
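A sketch of an MCP tool entry that uses a service connector rather than a server_url; the server_label field is an assumption from the broader API, and the token is a placeholder.

```python
# Sketch of an MCP tool using a service connector. When connector_id is
# set, server_url is omitted; the OAuth token is supplied by your app.
mcp_tool = {
    "type": "mcp",
    "server_label": "gmail",
    "connector_id": "connector_gmail",
    "authorization": "<oauth-access-token>",  # placeholder, not a real token
    "require_approval": "never",
}
```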
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: Optional[McpRequireApproval]
Specify which of the MCP server’s tools require approval.
class McpRequireApprovalMcpToolApprovalFilter: …
Specify which of the MCP server’s tools require approval. Can be always, never, or a filter object associated with tools that require approval.
always: Optional[McpRequireApprovalMcpToolApprovalFilterAlways]
A filter object specifying which tools always require approval.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
never: Optional[McpRequireApprovalMcpToolApprovalFilterNever]
A filter object specifying which tools never require approval.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
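A sketch of a require_approval filter object; the tool_names field and the tool names themselves are illustrative assumptions.

```python
# Sketch of require_approval: tools named under "never" skip approval,
# all other tools fall back to requiring it.
require_approval = {
    "never": {"tool_names": ["search_messages", "fetch_message"]},
}
```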
class CodeInterpreter: …
A tool that runs Python code to help generate a response to a prompt.
container: CodeInterpreterContainer
The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting.
class CodeInterpreterContainerCodeInterpreterToolAuto: …
Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
memory_limit: Optional[Literal["1g", "4g", "16g", "64g"]]
The memory limit for the code interpreter container.
network_policy: Optional[CodeInterpreterContainerCodeInterpreterToolAutoNetworkPolicy]
Network access policy for the container.
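A sketch of a code_interpreter tool using the auto container form described above; the file_ids field name is an assumption from the broader API and the file ID is a placeholder.

```python
# Sketch of a code_interpreter tool with an auto container, uploaded
# file IDs, and a memory limit from the documented set.
code_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],  # placeholder file ID
        "memory_limit": "4g",
    },
}
```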
class ImageGeneration: …
A tool that generates images using the GPT image models.
action: Optional[Literal["generate", "edit", "auto"]]
Whether to generate a new image or edit an existing image. Default: auto.
background: Optional[Literal["transparent", "opaque", "auto"]]
Background type for the generated image. One of transparent, opaque, or auto. Default: auto.
input_fidelity: Optional[Literal["high", "low"]]
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, and is unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: Optional[ImageGenerationInputImageMask]
Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional).
model: Optional[Union[str, Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"], null]]
The image generation model to use. Default: gpt-image-1.
moderation: Optional[Literal["auto", "low"]]
Moderation level for the generated image. Default: auto.
Compression level for the output image. Default: 100.
output_format: Optional[Literal["png", "webp", "jpeg"]]
The output format of the generated image. One of png, webp, or jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
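A sketch of an image_generation tool entry spelling out the defaults listed above; the output_compression field name is an assumption from the broader API.

```python
# Sketch of an image_generation tool using the documented defaults.
image_tool = {
    "type": "image_generation",
    "model": "gpt-image-1",       # default model
    "action": "auto",             # generate or edit, chosen automatically
    "background": "auto",
    "moderation": "auto",
    "output_format": "png",       # one of png, webp, jpeg
    "output_compression": 100,    # default compression level
}
```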
class FunctionShellTool: …
A tool that allows the model to execute shell commands.
environment: Optional[Environment]
class ContainerAuto: …
network_policy: Optional[NetworkPolicy]
Network access policy for the container.
class CustomTool: …
A custom tool that processes input using a specified format. Learn more about custom tools
class NamespaceTool: …
Groups function/custom tools under a shared namespace.
tools: List[Tool]
The function/custom tools available inside this namespace.
class CustomTool: …
A custom tool that processes input using a specified format. Learn more about custom tools
class ToolSearchTool: …
Hosted or BYOT tool search configuration for deferred tools.
class WebSearchPreviewTool: …
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: Literal["web_search_preview", "web_search_preview_2025_03_11"]
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: Optional[Literal["low", "medium", "high"]]
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: Optional[UserLocation]
The user’s location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
per_testing_criteria_results: List[PerTestingCriteriaResult]
Results per testing criteria applied during the evaluation run.
class RunRetrieveResponse: …
A schema representing an evaluation run.
data_source: DataSource
Information about the run’s data source.
class CreateEvalJSONLRunDataSource: …
A JsonlRunDataSource object that specifies a JSONL file that matches the eval
class CreateEvalCompletionsRunDataSource: …
A CompletionsRunDataSource object describing a model sampling configuration.
source: Source
Determines what populates the item namespace in this run’s data source.
class SourceStoredCompletions: …
A StoredCompletionsRunDataSource configuration describing a set of filters
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
input_messages: Optional[InputMessages]
Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.
class InputMessagesTemplate: …
template: List[InputMessagesTemplateTemplate]
A list of chat messages forming the prompt or context. May include variable references to the item namespace, i.e. {{item.name}}.
class EasyInputMessage: …
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
List[ResponseInputContent]
class ResponseInputImage: …
An image input to the model. Learn about image inputs.
role: Literal["user", "assistant", "system", "developer"]
The role of the message input. One of user, assistant, system, or developer.
phase: Optional[Literal["commentary", "final_answer"]]
Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer). For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend phase on all assistant messages; dropping it can degrade performance. Not used for user messages.
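A sketch of a completions-style input_messages template built from EasyInputMessage-shaped entries; the roles follow the reference, while the content and item fields are illustrative.

```python
# Sketch of an input_messages template. A developer message sets
# instructions; the user message pulls from the item namespace.
input_messages = {
    "type": "template",
    "template": [
        {"role": "developer", "content": "Answer in one sentence."},
        {"role": "user", "content": "Summarize: {{item.document}}"},
    ],
}
```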
class InputMessagesTemplateTemplateEvalItem: …
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
content: InputMessagesTemplateTemplateEvalItemContent
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class InputMessagesTemplateTemplateEvalItemContentInputImage: …
An image input block used within EvalItem content arrays.
List[GraderInputItem]
sampling_params: Optional[SamplingParams]
Constrains effort on reasoning for reasoning models.
Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
response_format: Optional[SamplingParamsResponseFormat]
An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.
class ResponseFormatJSONSchema: …
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
json_schema: JSONSchema
Structured Outputs configuration options, including a JSON Schema.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the response format is for, used by the model to determine how to respond in the format.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.
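A sketch of a response_format value using the json_schema shape and strict mode described above; the schema name and properties are invented for illustration.

```python
# Sketch of a strict json_schema response_format.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "eval_verdict",
        "description": "Grader output",
        "strict": True,  # model must match the schema exactly
        "schema": {
            "type": "object",
            "properties": {
                "score": {"type": "number"},
                "reasoning": {"type": "string"},
            },
            "required": ["score", "reasoning"],
            "additionalProperties": False,
        },
    },
}
```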
class DataSourceResponses: …
A ResponsesRunDataSource object describing a model sampling configuration.
source: DataSourceResponsesSource
Determines what populates the item namespace in this run’s data source.
class DataSourceResponsesSourceResponses: …
An EvalResponsesSource object describing a run data source configuration.
Only include items created after this timestamp (inclusive). This is a query parameter used to select responses.
Only include items created before this timestamp (inclusive). This is a query parameter used to select responses.
Optional string to search the ‘instructions’ field. This is a query parameter used to select responses.
Metadata filter for the responses. This is a query parameter used to select responses.
The name of the model to find responses for. This is a query parameter used to select responses.
Constrains effort on reasoning for reasoning models.
Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
Sampling temperature. This is a query parameter used to select responses.
input_messages: Optional[DataSourceResponsesInputMessages]
Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.
class DataSourceResponsesInputMessagesTemplate: …
template: List[DataSourceResponsesInputMessagesTemplateTemplate]
A list of chat messages forming the prompt or context. May include variable references to the item namespace, i.e. {{item.name}}.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItem: …
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
content: DataSourceResponsesInputMessagesTemplateTemplateEvalItemContent
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentOutputText: …
A text output from the model.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentInputImage: …
An image input block used within EvalItem content arrays.
List[GraderInputItem]
sampling_params: Optional[DataSourceResponsesSamplingParams]
Constrains effort on reasoning for reasoning models.
Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
text: Optional[DataSourceResponsesSamplingParamsText]
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.
The two categories of tools you can provide the model are:
- Built-in tools: Tools that are provided by OpenAI that extend the model’s capabilities, like web search or file search. Learn more about built-in tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code. Learn more about function calling.
class FunctionTool: …
Defines a function in your own code the model can choose to call. Learn more about function calling.
class FileSearchTool: …
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
filters: Optional[Filters]
A filter to apply.
class ComparisonFilter: …
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
class CompoundFilter: …
Combine multiple filters using and or or.
filters: List[Filter]
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
class ComparisonFilter: …
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: Optional[RankingOptions]
Ranking options for search.
class ComputerTool: …
A tool that controls a virtual computer. Learn more about the computer tool.
class ComputerUsePreviewTool: …
A tool that controls a virtual computer. Learn more about the computer tool.
class WebSearchTool: …
Search the Internet for sources related to the prompt. Learn more about the web search tool.
type: Literal["web_search", "web_search_2025_08_26"]
The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: Optional[Literal["low", "medium", "high"]]
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: Optional[UserLocation]
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
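Putting the web search fields together, a hedged configuration sketch; the user_location sub-field names (country, timezone) are inferred from the descriptions above and should be treated as assumptions:

```python
# Hypothetical web_search tool entry built only from the fields
# documented above; the country/timezone values are examples.
web_search_tool = {
    "type": "web_search",             # or "web_search_2025_08_26"
    "search_context_size": "medium",  # "low" | "medium" | "high"; medium is the default
    "user_location": {
        "country": "US",                    # two-letter ISO country code
        "timezone": "America/Los_Angeles",  # IANA timezone
    },
}
```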
class Mcp: …Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
allowed_tools: Optional[McpAllowedTools]List of allowed tool names or a filter object.
List of allowed tool names or a filter object.
class McpAllowedToolsMcpToolFilter: …A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: Optional[Literal["connector_dropbox", "connector_gmail", "connector_googlecalendar", 5 more]]Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: Optional[McpRequireApproval]Specify which of the MCP server’s tools require approval.
Specify which of the MCP server’s tools require approval.
class McpRequireApprovalMcpToolApprovalFilter: …Specify which of the MCP server’s tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
Specify which of the MCP server’s tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: Optional[McpRequireApprovalMcpToolApprovalFilterAlways]A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
never: Optional[McpRequireApprovalMcpToolApprovalFilterNever]A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
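Combining allowed_tools and require_approval, a hedged sketch of an MCP tool entry; the server URL is a placeholder, and the read_only filter field name is an assumption based on the read-only-hint description above:

```python
# Hypothetical MCP tool configuration. allowed_tools takes a list of
# tool names or a filter object; require_approval takes "always",
# "never", or a filter object per approval mode.
mcp_tool = {
    "type": "mcp",
    "server_url": "https://example.com/mcp",  # or provide a connector_id instead
    "allowed_tools": ["search_docs"],         # restrict which server tools are exposed
    "require_approval": {
        "never": {"read_only": True},         # auto-approve tools annotated read-only
    },
}
```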
class CodeInterpreter: …A tool that runs Python code to help generate a response to a prompt.
A tool that runs Python code to help generate a response to a prompt.
container: CodeInterpreterContainerThe code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
class CodeInterpreterContainerCodeInterpreterToolAuto: …Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
memory_limit: Optional[Literal["1g", "4g", "16g", "64g"]]The memory limit for the code interpreter container.
The memory limit for the code interpreter container.
network_policy: Optional[CodeInterpreterContainerCodeInterpreterToolAutoNetworkPolicy]Network access policy for the container.
Network access policy for the container.
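The container field accepts either form described above: an existing container ID, or an auto-configuration object. A sketch under the assumption that the auto object uses type: "auto" with a file_ids list (both names inferred from the class name, not confirmed by this page):

```python
# Option 1: reference an existing container by ID (placeholder value).
by_id = {"type": "code_interpreter", "container": "cntr_abc123"}

# Option 2: hypothetical auto container with files and a memory limit.
auto = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-1", "file-2"],  # files to make available to the code
        "memory_limit": "4g",              # one of "1g", "4g", "16g", "64g"
    },
}
```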
class ImageGeneration: …A tool that generates images using the GPT image models.
A tool that generates images using the GPT image models.
action: Optional[Literal["generate", "edit", "auto"]]Whether to generate a new image or edit an existing image. Default: auto.
Whether to generate a new image or edit an existing image. Default: auto.
background: Optional[Literal["transparent", "opaque", "auto"]]Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
input_fidelity: Optional[Literal["high", "low"]]Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: Optional[ImageGenerationInputImageMask]Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
model: Optional[Union[str, Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"], null]]The image generation model to use. Default: gpt-image-1.
The image generation model to use. Default: gpt-image-1.
moderation: Optional[Literal["auto", "low"]]Moderation level for the generated image. Default: auto.
Moderation level for the generated image. Default: auto.
Compression level for the output image. Default: 100.
output_format: Optional[Literal["png", "webp", "jpeg"]]The output format of the generated image. One of png, webp, or
jpeg. Default: png.
The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
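A hedged image_generation tool sketch built only from the literals and defaults documented above; the field name for streaming partial images is an assumption:

```python
# Hypothetical image-generation tool entry; every value shown is one
# of the literals documented above.
image_tool = {
    "type": "image_generation",
    "action": "auto",             # "generate" | "edit" | "auto" (default)
    "background": "transparent",  # "transparent" | "opaque" | "auto" (default)
    "model": "gpt-image-1",       # the default model
    "input_fidelity": "low",      # "high" | "low"; low is the default
    "output_format": "png",       # "png" | "webp" | "jpeg"; png is the default
    "partial_images": 0,          # assumed name: streaming partials, 0 (default) to 3
}
```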
class FunctionShellTool: …A tool that allows the model to execute shell commands.
A tool that allows the model to execute shell commands.
environment: Optional[Environment]
class ContainerAuto: …
network_policy: Optional[NetworkPolicy]Network access policy for the container.
Network access policy for the container.
class CustomTool: …A custom tool that processes input using a specified format. Learn more about custom tools
A custom tool that processes input using a specified format. Learn more about custom tools
class NamespaceTool: …Groups function/custom tools under a shared namespace.
Groups function/custom tools under a shared namespace.
tools: List[Tool]The function/custom tools available inside this namespace.
The function/custom tools available inside this namespace.
class CustomTool: …A custom tool that processes input using a specified format. Learn more about custom tools
A custom tool that processes input using a specified format. Learn more about custom tools
class ToolSearchTool: …Hosted or BYOT tool search configuration for deferred tools.
Hosted or BYOT tool search configuration for deferred tools.
class WebSearchPreviewTool: …This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: Literal["web_search_preview", "web_search_preview_2025_03_11"]The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: Optional[Literal["low", "medium", "high"]]High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: Optional[UserLocation]The user’s location.
The user’s location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
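The metadata constraints above (at most 16 pairs, keys up to 64 characters, values up to 512 characters) can be checked client-side before sending a request; this helper is illustrative, not part of the SDK:

```python
# Validate a metadata dict against the documented limits.
def validate_metadata(metadata: dict) -> None:
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"metadata key too long: {key!r}")
        if len(str(value)) > 512:
            raise ValueError(f"metadata value too long for key: {key!r}")

validate_metadata({"team": "support-evals", "run": "nightly"})  # passes
```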
per_testing_criteria_results: List[PerTestingCriteriaResult]Results per testing criteria applied during the evaluation run.
Results per testing criteria applied during the evaluation run.
class RunCancelResponse: …A schema representing an evaluation run.
A schema representing an evaluation run.
data_source: DataSourceInformation about the run’s data source.
Information about the run’s data source.
class CreateEvalJSONLRunDataSource: …A JsonlRunDataSource object that specifies a JSONL file matching the eval
A JsonlRunDataSource object that specifies a JSONL file matching the eval
class CreateEvalCompletionsRunDataSource: …A CompletionsRunDataSource object describing a model sampling configuration.
A CompletionsRunDataSource object describing a model sampling configuration.
source: SourceDetermines what populates the item namespace in this run’s data source.
Determines what populates the item namespace in this run’s data source.
class SourceStoredCompletions: …A StoredCompletionsRunDataSource configuration describing a set of filters
A StoredCompletionsRunDataSource configuration describing a set of filters
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
input_messages: Optional[InputMessages]Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.
Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.
class InputMessagesTemplate: …
template: List[InputMessagesTemplateTemplate]A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.
A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.
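A minimal template sketch using {{item.*}} references as described above; the type: "template" discriminator and the item field names (ticket_id, body) are assumptions for illustration:

```python
# Hypothetical input_messages template. Each {{item.*}} reference is
# filled from the item namespace of the run's data source.
input_messages = {
    "type": "template",
    "template": [
        {"role": "system", "content": "Answer concisely."},
        {"role": "user", "content": "Summarize ticket {{item.ticket_id}}: {{item.body}}"},
    ],
}
```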
class EasyInputMessage: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response.
Can also contain previous assistant responses.
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
List[ResponseInputContent]
class ResponseInputImage: …An image input to the model. Learn about image inputs.
An image input to the model. Learn about image inputs.
role: Literal["user", "assistant", "system", "developer"]The role of the message input. One of user, assistant, system, or
developer.
The role of the message input. One of user, assistant, system, or
developer.
phase: Optional[Literal["commentary", "final_answer"]]Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages — dropping it can degrade performance. Not used for user messages.
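A sketch of preserving phase across a follow-up request, per the guidance above:

```python
# Assistant history from a previous turn, with phase labels intact.
previous_turn = [
    {"role": "assistant", "phase": "commentary", "content": "Checking the repo layout first."},
    {"role": "assistant", "phase": "final_answer", "content": "The bug is in the parser."},
]

# On the follow-up request, resend the assistant messages unchanged
# (phase preserved) and append the new user message, which carries no phase.
follow_up = previous_turn + [
    {"role": "user", "content": "Can you propose a fix?"}
]
```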
class InputMessagesTemplateTemplateEvalItem: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
content: InputMessagesTemplateTemplateEvalItemContentInputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class InputMessagesTemplateTemplateEvalItemContentInputImage: …An image input block used within EvalItem content arrays.
An image input block used within EvalItem content arrays.
List[GraderInputItem]
sampling_params: Optional[SamplingParams]
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
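The per-model defaults above can be summarized as a small helper; this is an illustration of the documented defaults, not an SDK function:

```python
# Illustrative mapping of the documented reasoning-effort defaults.
def default_reasoning_effort(model: str) -> str:
    if model == "gpt-5-pro":
        return "high"    # gpt-5-pro only supports high
    if model.startswith("gpt-5.1"):
        return "none"    # gpt-5.1 defaults to none (no reasoning)
    return "medium"      # models before gpt-5.1 default to medium
```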
response_format: Optional[SamplingParamsResponseFormat]An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables
Structured Outputs which ensures the model will match your supplied JSON
schema. Learn more in the Structured Outputs
guide.
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables
Structured Outputs which ensures the model will match your supplied JSON
schema. Learn more in the Structured Outputs
guide.
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
class ResponseFormatJSONSchema: …JSON Schema response format. Used to generate structured JSON responses.
Learn more about Structured Outputs.
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
json_schema: JSONSchemaStructured Outputs configuration options, including a JSON Schema.
Structured Outputs configuration options, including a JSON Schema.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the response format is for, used by the model to determine how to respond in the format.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
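Assembled from the json_schema fields documented above (name, description, schema, strict), a Structured Outputs response_format might look like the following; the schema contents are illustrative:

```python
# Structured Outputs response_format: the model's output must match
# the supplied JSON Schema exactly when strict is true.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "eval_verdict",  # a-z, A-Z, 0-9, underscores, dashes; max 64 chars
        "description": "Grader verdict for one eval item.",
        "schema": {
            "type": "object",
            "properties": {
                "passed": {"type": "boolean"},
                "reason": {"type": "string"},
            },
            "required": ["passed", "reason"],
            "additionalProperties": False,
        },
        "strict": True,  # enforce exact schema adherence
    },
}
```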
class DataSourceResponses: …A ResponsesRunDataSource object describing a model sampling configuration.
A ResponsesRunDataSource object describing a model sampling configuration.
source: DataSourceResponsesSourceDetermines what populates the item namespace in this run’s data source.
Determines what populates the item namespace in this run’s data source.
class DataSourceResponsesSourceResponses: …An EvalResponsesSource object describing a run data source configuration.
An EvalResponsesSource object describing a run data source configuration.
Only include items created after this timestamp (inclusive). This is a query parameter used to select responses.
Only include items created before this timestamp (inclusive). This is a query parameter used to select responses.
Optional string to search the ‘instructions’ field. This is a query parameter used to select responses.
Metadata filter for the responses. This is a query parameter used to select responses.
The name of the model to find responses for. This is a query parameter used to select responses.
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
Sampling temperature. This is a query parameter used to select responses.
input_messages: Optional[DataSourceResponsesInputMessages]Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.
Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.
class DataSourceResponsesInputMessagesTemplate: …
template: List[DataSourceResponsesInputMessagesTemplateTemplate]A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.
A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItem: …A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
content: DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentInputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentOutputText: …A text output from the model.
A text output from the model.
class DataSourceResponsesInputMessagesTemplateTemplateEvalItemContentInputImage: …An image input block used within EvalItem content arrays.
An image input block used within EvalItem content arrays.
List[GraderInputItem]
sampling_params: Optional[DataSourceResponsesSamplingParams]
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
text: Optional[DataSourceResponsesSamplingParamsText]Configuration options for a text response from the model. Can be plain
text or structured JSON data. Learn more:
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
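The three text formats described above, side by side; the placement of name and schema directly on the format object is an assumption about this configuration's shape:

```python
# Default: plain text output.
text_plain = {"format": {"type": "text"}}

# Older JSON mode: valid JSON, but no schema enforcement.
# Not recommended for gpt-4o and newer models.
text_json_mode = {"format": {"type": "json_object"}}

# Structured Outputs: the model must match the supplied JSON Schema.
text_structured = {
    "format": {
        "type": "json_schema",
        "name": "answer",  # assumed placement of name/schema on the format object
        "schema": {
            "type": "object",
            "properties": {"answer": {"type": "string"}},
            "required": ["answer"],
        },
    },
}
```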
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
The two categories of tools you can provide the model are:
- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search
or file search. Learn more about
built-in tools.
- Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code. Learn more about
function calling.
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
The two categories of tools you can provide the model are:
- Built-in tools: Tools that are provided by OpenAI that extend the model’s capabilities, like web search or file search. Learn more about built-in tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code. Learn more about function calling.
class FunctionTool: …Defines a function in your own code the model can choose to call. Learn more about function calling.
Defines a function in your own code the model can choose to call. Learn more about function calling.
class FileSearchTool: …A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
filters: Optional[Filters]A filter to apply.
A filter to apply.
class ComparisonFilter: …A filter used to compare a specified attribute key to a given value using a defined comparison operation.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
class CompoundFilter: …Combine multiple filters using and or or.
Combine multiple filters using and or or.
filters: List[Filter]Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
class ComparisonFilter: …A filter used to compare a specified attribute key to a given value using a defined comparison operation.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
The maximum number of results to return. This number should be between 1 and 50 inclusive.
ranking_options: Optional[RankingOptions]Ranking options for search.
Ranking options for search.
class ComputerTool: …A tool that controls a virtual computer. Learn more about the computer tool.
A tool that controls a virtual computer. Learn more about the computer tool.
class ComputerUsePreviewTool: …A tool that controls a virtual computer. Learn more about the computer tool.
A tool that controls a virtual computer. Learn more about the computer tool.
class WebSearchTool: …Search the Internet for sources related to the prompt. Learn more about the
web search tool.
Search the Internet for sources related to the prompt. Learn more about the web search tool.
type: Literal["web_search", "web_search_2025_08_26"]The type of the web search tool. One of web_search or web_search_2025_08_26.
The type of the web search tool. One of web_search or web_search_2025_08_26.
search_context_size: Optional[Literal["low", "medium", "high"]]High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: Optional[UserLocation]The approximate location of the user.
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
class Mcp: …Give the model access to additional tools via remote Model Context Protocol
(MCP) servers. Learn more about MCP.
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
allowed_tools: Optional[McpAllowedTools]List of allowed tool names or a filter object.
List of allowed tool names or a filter object.
class McpAllowedToolsMcpToolFilter: …A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
connector_id: Optional[Literal["connector_dropbox", "connector_gmail", "connector_googlecalendar", 5 more]]Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
require_approval: Optional[McpRequireApproval]Specify which of the MCP server’s tools require approval.
Specify which of the MCP server’s tools require approval.
class McpRequireApprovalMcpToolApprovalFilter: …Specify which of the MCP server’s tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
Specify which of the MCP server’s tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
always: Optional[McpRequireApprovalMcpToolApprovalFilterAlways]A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
never: Optional[McpRequireApprovalMcpToolApprovalFilterNever]A filter object to specify which tools are allowed.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
class CodeInterpreter: …A tool that runs Python code to help generate a response to a prompt.
A tool that runs Python code to help generate a response to a prompt.
container: CodeInterpreterContainerThe code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
class CodeInterpreterContainerCodeInterpreterToolAuto: …Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
memory_limit: Optional[Literal["1g", "4g", "16g", "64g"]]The memory limit for the code interpreter container.
The memory limit for the code interpreter container.
network_policy: Optional[CodeInterpreterContainerCodeInterpreterToolAutoNetworkPolicy]Network access policy for the container.
Network access policy for the container.
class ImageGeneration: …A tool that generates images using the GPT image models.
A tool that generates images using the GPT image models.
action: Optional[Literal["generate", "edit", "auto"]]Whether to generate a new image or edit an existing image. Default: auto.
background: Optional[Literal["transparent", "opaque", "auto"]]Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
input_fidelity: Optional[Literal["high", "low"]]Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is supported for gpt-image-1, gpt-image-1.5, and later models; it is not supported for gpt-image-1-mini. Supports high and low. Defaults to low.
input_image_mask: Optional[ImageGenerationInputImageMask]Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
model: Optional[Union[str, Literal["gpt-image-1", "gpt-image-1-mini", "gpt-image-1.5"]]]The image generation model to use. Default: gpt-image-1.
moderation: Optional[Literal["auto", "low"]]Moderation level for the generated image. Default: auto.
Compression level for the output image. Default: 100.
output_format: Optional[Literal["png", "webp", "jpeg"]]The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
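The image generation fields above can be combined into a single tool payload. A minimal sketch using only the documented fields and their defaults where noted:

```python
# Sketch of an image_generation tool configuration using the
# documented fields. All values are valid literals from the docs above.
image_tool = {
    "type": "image_generation",
    "action": "generate",         # "generate", "edit", or "auto" (default)
    "background": "transparent",  # "transparent", "opaque", or "auto"
    "input_fidelity": "low",      # "high" or "low"; defaults to low
    "model": "gpt-image-1",       # default model
    "moderation": "auto",         # "auto" or "low"
    "output_format": "png",       # "png", "webp", or "jpeg"
    "partial_images": 2,          # streaming partial images, 0 (default) to 3
}
```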
class FunctionShellTool: …A tool that allows the model to execute shell commands.
environment: Optional[Environment]
class ContainerAuto: …
network_policy: Optional[NetworkPolicy]Network access policy for the container.
class CustomTool: …A custom tool that processes input using a specified format. Learn more about custom tools
class NamespaceTool: …Groups function/custom tools under a shared namespace.
tools: List[Tool]The function/custom tools available inside this namespace.
class CustomTool: …A custom tool that processes input using a specified format. Learn more about custom tools
class ToolSearchTool: …Hosted or BYOT tool search configuration for deferred tools.
class WebSearchPreviewTool: …This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
type: Literal["web_search_preview", "web_search_preview_2025_03_11"]The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
search_context_size: Optional[Literal["low", "medium", "high"]]High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
user_location: Optional[UserLocation]The user’s location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
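A web search tool payload combining the fields above can be sketched as follows. The `"type": "approximate"` discriminator on the location object is an assumption not stated in this section:

```python
# Sketch of a web_search_preview tool with an approximate user location.
# The location "type" discriminator is assumed for illustration.
web_search_tool = {
    "type": "web_search_preview",
    "search_context_size": "medium",  # "low", "medium" (default), or "high"
    "user_location": {
        "type": "approximate",            # assumed discriminator value
        "country": "US",                  # two-letter ISO country code
        "timezone": "America/Los_Angeles",  # IANA timezone
    },
}
```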
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
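The metadata limits above (16 pairs, 64-character keys, 512-character values) can be enforced client-side before sending a request. A minimal sketch:

```python
def validate_metadata(metadata: dict) -> None:
    """Check the documented metadata limits: at most 16 pairs,
    keys up to 64 characters, values up to 512 characters."""
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"metadata key exceeds 64 characters: {key!r}")
        if len(value) > 512:
            raise ValueError(f"metadata value for {key!r} exceeds 512 characters")
```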
per_testing_criteria_results: List[PerTestingCriteriaResult]Results per testing criteria applied during the evaluation run.
Evals > Runs > Output Items
Manage and run evals in the OpenAI platform.
Get eval run output items
Get an output item of an eval run
Models
class OutputItemListResponse: …A schema representing an evaluation run output item.
class OutputItemRetrieveResponse: …A schema representing an evaluation run output item.