Models
DpoHyperparameters { batch_size, beta, learning_rate_multiplier, n_epochs }

The hyperparameters used for the DPO fine-tuning job.

batch_size?: "auto" | number

Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.

One of the following:
"auto"
"auto"
number
beta?: "auto" | number

The beta value for the DPO method. A higher beta value will increase the weight of the penalty between the policy and reference model.

One of the following:
"auto"
"auto"
number
learning_rate_multiplier?: "auto" | number

Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.

One of the following:
"auto"
"auto"
number
n_epochs?: "auto" | number

The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.

One of the following:
"auto"
"auto"
number
DpoMethod { hyperparameters }

Configuration for the DPO fine-tuning method.

hyperparameters?: DpoHyperparameters { batch_size, beta, learning_rate_multiplier, n_epochs }

The hyperparameters used for the DPO fine-tuning job.
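As a sketch, a DpoMethod-shaped configuration can be assembled as a plain dict from the fields above; every hyperparameter accepts the literal "auto" or a number, and the values chosen here are placeholders:

```python
# A DpoMethod-shaped dict per the schema above; only `hyperparameters` is defined.
dpo_method = {
    "hyperparameters": {
        "batch_size": "auto",               # "auto" | number
        "beta": 0.1,                        # higher beta = stronger penalty vs. the reference model
        "learning_rate_multiplier": "auto",
        "n_epochs": 3,
    }
}

# Sanity check: every field is either the literal "auto" or numeric.
assert all(v == "auto" or isinstance(v, (int, float))
           for v in dpo_method["hyperparameters"].values())
```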

ReinforcementHyperparameters { batch_size, compute_multiplier, eval_interval, 4 more }

The hyperparameters used for the reinforcement fine-tuning job.

batch_size?: "auto" | number

Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.

One of the following:
"auto"
"auto"
number
compute_multiplier?: "auto" | number

Multiplier on the amount of compute used to explore the search space during training.

One of the following:
"auto"
"auto"
number
eval_interval?: "auto" | number

The number of training steps between evaluation runs.

One of the following:
"auto"
"auto"
number
eval_samples?: "auto" | number

Number of evaluation samples to generate per training step.

One of the following:
"auto"
"auto"
number
learning_rate_multiplier?: "auto" | number

Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.

One of the following:
"auto"
"auto"
number
n_epochs?: "auto" | number

The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.

One of the following:
"auto"
"auto"
number
reasoning_effort?: "default" | "low" | "medium" | "high"

Level of reasoning effort.

One of the following:
"default"
"low"
"medium"
"high"
ReinforcementMethod { grader, hyperparameters }

Configuration for the reinforcement fine-tuning method.

grader: StringCheckGrader { input, name, operation, 2 more } | TextSimilarityGrader { evaluation_metric, input, name, 2 more } | PythonGrader { name, source, type, image_tag } | 2 more

The grader used for the fine-tuning job.

One of the following:
StringCheckGrader { input, name, operation, 2 more }

A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.

input: string

The input text. This may include template strings.

name: string

The name of the grader.

operation: "eq" | "ne" | "like" | "ilike"

The string check operation to perform. One of eq, ne, like, or ilike.

One of the following:
"eq"
"ne"
"like"
"ilike"
reference: string

The reference text. This may include template strings.

type: "string_check"

The object type, which is always string_check.
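For example, a string_check grader comparing a model output against a reference answer could be built as follows; the `{{ sample.output_text }}` and `{{ item.answer }}` template variables are illustrative:

```python
# A string_check grader dict per the schema above. Template variable names are
# placeholders; input and reference may both contain template strings.
string_grader = {
    "type": "string_check",
    "name": "exact-match",
    "input": "{{ sample.output_text }}",
    "reference": "{{ item.answer }}",
    "operation": "eq",  # "eq" | "ne" | "like" | "ilike"
}
```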

TextSimilarityGrader { evaluation_metric, input, name, 2 more }

A TextSimilarityGrader object that grades text based on similarity metrics.

evaluation_metric: "cosine" | "fuzzy_match" | "bleu" | 8 more

The evaluation metric to use. One of cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.

One of the following:
"cosine"
"fuzzy_match"
"bleu"
"gleu"
"meteor"
"rouge_1"
"rouge_2"
"rouge_3"
"rouge_4"
"rouge_5"
"rouge_l"
input: string

The text being graded.

name: string

The name of the grader.

reference: string

The text being graded against.

type: "text_similarity"

The object type, which is always text_similarity.
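A text_similarity grader follows the same pattern; here a fuzzy match is chosen from the metrics listed above, and the template variable names are again placeholders:

```python
# A text_similarity grader dict per the schema above.
text_sim_grader = {
    "type": "text_similarity",
    "name": "fuzzy-answer-match",
    "input": "{{ sample.output_text }}",        # the text being graded
    "reference": "{{ item.reference_answer }}", # the text being graded against
    "evaluation_metric": "fuzzy_match",         # one of the 11 listed metrics
}
```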

PythonGrader { name, source, type, image_tag }

A PythonGrader object that runs a Python script on the input.

name: string

The name of the grader.

source: string

The source code of the Python script.

type: "python"

The object type, which is always python.

image_tag?: string

The image tag to use for the Python script.
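A python grader ships its script as a string in `source`. The entry-point contract the platform expects is not spelled out in this section; a `grade(sample, item)` function returning a float is assumed here for illustration:

```python
# A python grader dict per the schema above. The grade(sample, item) entry point
# returning a float score is an assumed contract, shown for illustration only.
python_grader = {
    "type": "python",
    "name": "length-ratio",
    "source": (
        "def grade(sample, item):\n"
        "    out = sample.get('output_text', '')\n"
        "    ref = item.get('reference', '')\n"
        "    if not ref:\n"
        "        return 0.0\n"
        "    return min(len(out) / len(ref), 1.0)\n"
    ),
    # Optionally pin an execution image via the "image_tag" field.
}
```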

ScoreModelGrader { input, model, name, 3 more }

A ScoreModelGrader object that uses a model to assign a score to the input.

input: Array<Input>

The input messages evaluated by the grader. Supports text, output text, input image, and input audio content blocks, and may include template strings.

content: string | ResponseInputText { text, type } | OutputText { text, type } | 3 more

Inputs to the model, which may contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.

One of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

OutputText { text, type }

A text output from the model.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

InputImage { image_url, type, detail }

An image input block used within EvalItem content arrays.

image_url: string

The URL of the image input.

type: "input_image"

The type of the image input. Always input_image.

detail?: string

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

ResponseInputAudio { input_audio, type }

An audio input to the model.

input_audio: InputAudio { data, format }
data: string

Base64-encoded audio data.

format: "mp3" | "wav"

The format of the audio data. Currently supported formats are mp3 and wav.

One of the following:
"mp3"
"wav"
type: "input_audio"

The type of the input item. Always input_audio.
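Since the audio payload is base64-encoded, a ResponseInputAudio block is typically built from raw file bytes like this (the WAV bytes below are a stand-in for real file contents):

```python
import base64

# Build an input_audio content block from raw WAV bytes.
wav_bytes = b"RIFF....WAVEfmt "  # placeholder; read real file contents in practice
audio_block = {
    "type": "input_audio",
    "input_audio": {
        "data": base64.b64encode(wav_bytes).decode("ascii"),
        "format": "wav",  # "mp3" | "wav"
    },
}
```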

GraderInputs = Array<string | ResponseInputText { text, type } | OutputText { text, type } | 2 more>

A list of inputs, each of which may be an input text, output text, input image, or input audio object.

One of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

OutputText { text, type }

A text output from the model.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

InputImage { image_url, type, detail }

An image input block used within EvalItem content arrays.

image_url: string

The URL of the image input.

type: "input_image"

The type of the image input. Always input_image.

detail?: string

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

ResponseInputAudio { input_audio, type }

An audio input to the model.

input_audio: InputAudio { data, format }
data: string

Base64-encoded audio data.

format: "mp3" | "wav"

The format of the audio data. Currently supported formats are mp3 and wav.

One of the following:
"mp3"
"wav"
type: "input_audio"

The type of the input item. Always input_audio.

role: "user" | "assistant" | "system" | "developer"

The role of the message input. One of user, assistant, system, or developer.

One of the following:
"user"
"assistant"
"system"
"developer"
type?: "message"

The type of the message input. Always message.

model: string

The model to use for the evaluation.

name: string

The name of the grader.

type: "score_model"

The object type, which is always score_model.

range?: Array<number>

The range of the score. Defaults to [0, 1].

sampling_params?: SamplingParams { max_completions_tokens, reasoning_effort, seed, 2 more }

The sampling parameters for the model.

max_completions_tokens?: number | null

The maximum number of tokens the grader model may generate in its response.

Minimum: 1
reasoning_effort?: ReasoningEffort | null

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
seed?: number | null

A seed value to initialize randomness during sampling.

temperature?: number | null

A higher temperature increases randomness in the outputs.

top_p?: number | null

An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
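A complete score_model grader combines templated messages, a grader model, a score range, and sampling parameters. The model name and template variables below are placeholders:

```python
# A score_model grader dict per the schema above: a grader model reads the
# templated messages and emits a score within `range`.
score_grader = {
    "type": "score_model",
    "name": "helpfulness-judge",
    "model": "gpt-4.1",  # placeholder grader model
    "input": [
        {"role": "system", "type": "message",
         "content": "Rate the answer's helpfulness from 0 to 1."},
        {"role": "user",
         "content": "Question: {{ item.question }}\nAnswer: {{ sample.output_text }}"},
    ],
    "range": [0, 1],  # defaults to [0, 1]
    "sampling_params": {"temperature": 0, "max_completions_tokens": 256},
}
```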

MultiGrader { calculate_output, graders, name, type }

A MultiGrader object combines the output of multiple graders to produce a single score.

calculate_output: string

A formula to calculate the output based on grader results.

graders: StringCheckGrader { input, name, operation, 2 more } | TextSimilarityGrader { evaluation_metric, input, name, 2 more } | PythonGrader { name, source, type, image_tag } | 2 more

The graders to combine. Each grader is one of the grader types listed below.

One of the following:
StringCheckGrader { input, name, operation, 2 more }

A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.

input: string

The input text. This may include template strings.

name: string

The name of the grader.

operation: "eq" | "ne" | "like" | "ilike"

The string check operation to perform. One of eq, ne, like, or ilike.

One of the following:
"eq"
"ne"
"like"
"ilike"
reference: string

The reference text. This may include template strings.

type: "string_check"

The object type, which is always string_check.

TextSimilarityGrader { evaluation_metric, input, name, 2 more }

A TextSimilarityGrader object that grades text based on similarity metrics.

evaluation_metric: "cosine" | "fuzzy_match" | "bleu" | 8 more

The evaluation metric to use. One of cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.

One of the following:
"cosine"
"fuzzy_match"
"bleu"
"gleu"
"meteor"
"rouge_1"
"rouge_2"
"rouge_3"
"rouge_4"
"rouge_5"
"rouge_l"
input: string

The text being graded.

name: string

The name of the grader.

reference: string

The text being graded against.

type: "text_similarity"

The object type, which is always text_similarity.

PythonGrader { name, source, type, image_tag }

A PythonGrader object that runs a Python script on the input.

name: string

The name of the grader.

source: string

The source code of the Python script.

type: "python"

The object type, which is always python.

image_tag?: string

The image tag to use for the Python script.

ScoreModelGrader { input, model, name, 3 more }

A ScoreModelGrader object that uses a model to assign a score to the input.

input: Array<Input>

The input messages evaluated by the grader. Supports text, output text, input image, and input audio content blocks, and may include template strings.

content: string | ResponseInputText { text, type } | OutputText { text, type } | 3 more

Inputs to the model, which may contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.

One of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

OutputText { text, type }

A text output from the model.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

InputImage { image_url, type, detail }

An image input block used within EvalItem content arrays.

image_url: string

The URL of the image input.

type: "input_image"

The type of the image input. Always input_image.

detail?: string

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

ResponseInputAudio { input_audio, type }

An audio input to the model.

input_audio: InputAudio { data, format }
data: string

Base64-encoded audio data.

format: "mp3" | "wav"

The format of the audio data. Currently supported formats are mp3 and wav.

One of the following:
"mp3"
"wav"
type: "input_audio"

The type of the input item. Always input_audio.

GraderInputs = Array<string | ResponseInputText { text, type } | OutputText { text, type } | 2 more>

A list of inputs, each of which may be an input text, output text, input image, or input audio object.

One of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

OutputText { text, type }

A text output from the model.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

InputImage { image_url, type, detail }

An image input block used within EvalItem content arrays.

image_url: string

The URL of the image input.

type: "input_image"

The type of the image input. Always input_image.

detail?: string

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

ResponseInputAudio { input_audio, type }

An audio input to the model.

input_audio: InputAudio { data, format }
data: string

Base64-encoded audio data.

format: "mp3" | "wav"

The format of the audio data. Currently supported formats are mp3 and wav.

One of the following:
"mp3"
"wav"
type: "input_audio"

The type of the input item. Always input_audio.

role: "user" | "assistant" | "system" | "developer"

The role of the message input. One of user, assistant, system, or developer.

One of the following:
"user"
"assistant"
"system"
"developer"
type?: "message"

The type of the message input. Always message.

model: string

The model to use for the evaluation.

name: string

The name of the grader.

type: "score_model"

The object type, which is always score_model.

range?: Array<number>

The range of the score. Defaults to [0, 1].

sampling_params?: SamplingParams { max_completions_tokens, reasoning_effort, seed, 2 more }

The sampling parameters for the model.

max_completions_tokens?: number | null

The maximum number of tokens the grader model may generate in its response.

Minimum: 1
reasoning_effort?: ReasoningEffort | null

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
seed?: number | null

A seed value to initialize randomness during sampling.

temperature?: number | null

A higher temperature increases randomness in the outputs.

top_p?: number | null

An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

LabelModelGrader { input, labels, model, 3 more }

A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.

input: Array<Input>
content: string | ResponseInputText { text, type } | OutputText { text, type } | 3 more

Inputs to the model, which may contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.

One of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

OutputText { text, type }

A text output from the model.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

InputImage { image_url, type, detail }

An image input block used within EvalItem content arrays.

image_url: string

The URL of the image input.

type: "input_image"

The type of the image input. Always input_image.

detail?: string

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

ResponseInputAudio { input_audio, type }

An audio input to the model.

input_audio: InputAudio { data, format }
data: string

Base64-encoded audio data.

format: "mp3" | "wav"

The format of the audio data. Currently supported formats are mp3 and wav.

One of the following:
"mp3"
"wav"
type: "input_audio"

The type of the input item. Always input_audio.

GraderInputs = Array<string | ResponseInputText { text, type } | OutputText { text, type } | 2 more>

A list of inputs, each of which may be an input text, output text, input image, or input audio object.

One of the following:
string
ResponseInputText { text, type }

A text input to the model.

text: string

The text input to the model.

type: "input_text"

The type of the input item. Always input_text.

OutputText { text, type }

A text output from the model.

text: string

The text output from the model.

type: "output_text"

The type of the output text. Always output_text.

InputImage { image_url, type, detail }

An image input block used within EvalItem content arrays.

image_url: string

The URL of the image input.

type: "input_image"

The type of the image input. Always input_image.

detail?: string

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

ResponseInputAudio { input_audio, type }

An audio input to the model.

input_audio: InputAudio { data, format }
data: string

Base64-encoded audio data.

format: "mp3" | "wav"

The format of the audio data. Currently supported formats are mp3 and wav.

One of the following:
"mp3"
"wav"
type: "input_audio"

The type of the input item. Always input_audio.

role: "user" | "assistant" | "system" | "developer"

The role of the message input. One of user, assistant, system, or developer.

One of the following:
"user"
"assistant"
"system"
"developer"
type?: "message"

The type of the message input. Always message.

labels: Array<string>

The labels to assign to each item in the evaluation.

model: string

The model to use for the evaluation. Must support structured outputs.

name: string

The name of the grader.

passing_labels: Array<string>

The labels that indicate a passing result. Must be a subset of labels.

type: "label_model"

The object type, which is always label_model.
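A label_model grader is built the same way as a score_model grader, but returns one of a fixed set of labels; `passing_labels` must be a subset of `labels`. The model name, labels, and templates below are placeholders:

```python
# A label_model grader dict per the schema above.
label_grader = {
    "type": "label_model",
    "name": "safety-label",
    "model": "gpt-4.1",  # placeholder; must support structured outputs
    "input": [
        {"role": "developer",
         "content": "Label the reply as 'safe' or 'unsafe'."},
        {"role": "user", "content": "{{ sample.output_text }}"},
    ],
    "labels": ["safe", "unsafe"],
    "passing_labels": ["safe"],  # must be a subset of labels
}

assert set(label_grader["passing_labels"]) <= set(label_grader["labels"])
```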

name: string

The name of the grader.

type: "multi"

The object type, which is always multi.
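A multi grader blends sub-grader scores via the `calculate_output` formula. The exact container shape of `graders` is not pinned down in this section; a name-to-grader mapping whose keys appear in the formula is assumed here:

```python
# A multi grader sketch. Whether `graders` is a single grader, a list, or a
# name->grader mapping should be checked against the schema; a mapping is
# assumed here so calculate_output can reference sub-graders by name.
multi_grader = {
    "type": "multi",
    "name": "blended-score",
    "graders": {
        "exact": {
            "type": "string_check", "name": "exact",
            "input": "{{ sample.output_text }}",
            "reference": "{{ item.answer }}", "operation": "eq",
        },
        "similar": {
            "type": "text_similarity", "name": "similar",
            "input": "{{ sample.output_text }}",
            "reference": "{{ item.answer }}",
            "evaluation_metric": "fuzzy_match",
        },
    },
    "calculate_output": "0.5 * exact + 0.5 * similar",
}
```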

hyperparameters?: ReinforcementHyperparameters { batch_size, compute_multiplier, eval_interval, 4 more }

The hyperparameters used for the reinforcement fine-tuning job.
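Combining the pieces above, a ReinforcementMethod-shaped configuration pairs a required grader with optional hyperparameters; the template variables are placeholders:

```python
# A ReinforcementMethod-shaped dict: a grader (required) plus hyperparameters.
reinforcement_method = {
    "grader": {
        "type": "string_check",
        "name": "exact-match",
        "input": "{{ sample.output_text }}",
        "reference": "{{ item.answer }}",
        "operation": "eq",
    },
    "hyperparameters": {
        "n_epochs": "auto",
        "reasoning_effort": "low",
    },
}
```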

SupervisedHyperparameters { batch_size, learning_rate_multiplier, n_epochs }

The hyperparameters used for the fine-tuning job.

batch_size?: "auto" | number

Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.

One of the following:
"auto"
"auto"
number
learning_rate_multiplier?: "auto" | number

Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.

One of the following:
"auto"
"auto"
number
n_epochs?: "auto" | number

The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.

One of the following:
"auto"
"auto"
number
SupervisedMethod { hyperparameters }

Configuration for the supervised fine-tuning method.

hyperparameters?: SupervisedHyperparameters { batch_size, learning_rate_multiplier, n_epochs }

The hyperparameters used for the fine-tuning job.
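As with the DPO and reinforcement methods, a SupervisedMethod-shaped configuration is just the hyperparameters object; the numeric values here are placeholders:

```python
# A SupervisedMethod-shaped dict; each hyperparameter accepts "auto" or a number.
supervised_method = {
    "hyperparameters": {
        "batch_size": "auto",
        "learning_rate_multiplier": 0.5,  # smaller values can help avoid overfitting
        "n_epochs": 4,
    }
}
```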