Methods
Models
class DpoHyperparameters
The hyperparameters used for the DPO fine-tuning job.
Optional<BatchSize> batchSize
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
Optional<Beta> beta
The beta value for the DPO method. A higher beta value will increase the weight of the penalty between the policy and reference model.
Optional<LearningRateMultiplier> learningRateMultiplier
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
Optional<NEpochs> nEpochs
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
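To illustrate how beta behaves, the sketch below computes the standard DPO loss term, in which beta scales the margin between the policy and reference log-ratios before a sigmoid is applied. This is a hedged illustration of the published DPO objective, not the service's internal implementation:

```java
// Sketch of the standard DPO loss term. beta weights the penalty between
// the policy and reference model; the training internals of the actual
// fine-tuning service are assumptions, not documented here.
public class DpoBetaSketch {
    // loss = -log(sigmoid(beta * (chosenLogRatio - rejectedLogRatio)))
    public static double dpoLoss(double beta, double chosenLogRatio, double rejectedLogRatio) {
        double margin = beta * (chosenLogRatio - rejectedLogRatio);
        return -Math.log(1.0 / (1.0 + Math.exp(-margin)));
    }
}
```

For a fixed positive log-ratio gap, a larger beta sharpens the loss curve, which is what "increase the weight of the penalty" refers to above.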
class DpoMethod
Configuration for the DPO fine-tuning method.
The hyperparameters used for the DPO fine-tuning job.
class ReinforcementHyperparameters
The hyperparameters used for the reinforcement fine-tuning job.
Optional<BatchSize> batchSize
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
Optional<ComputeMultiplier> computeMultiplier
Multiplier on amount of compute used for exploring search space during training.
Optional<EvalInterval> evalInterval
The number of training steps between evaluation runs.
Optional<EvalSamples> evalSamples
Number of evaluation samples to generate per training step.
Optional<LearningRateMultiplier> learningRateMultiplier
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
Optional<NEpochs> nEpochs
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
Optional<ReasoningEffort> reasoningEffort
Level of reasoning effort.
class ReinforcementMethod
Configuration for the reinforcement fine-tuning method.
Grader grader
The grader used for the fine-tuning job.
class StringCheckGrader
A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
The input text. This may include template strings.
The name of the grader.
Operation operation
The string check operation to perform. One of eq, ne, like, or ilike.
The reference text. This may include template strings.
The object type, which is always string_check.
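The four operations can be sketched as follows. eq and ne are exact (in)equality; like and ilike are modeled here as case-sensitive and case-insensitive substring matches, which is an assumption about their semantics, not something this reference specifies:

```java
// Illustrative string_check semantics. eq/ne are exact comparisons;
// like/ilike are ASSUMED here to be substring matches (case-sensitive
// and case-insensitive respectively) - verify against the API docs.
public class StringCheckSketch {
    public static boolean check(String op, String input, String reference) {
        switch (op) {
            case "eq":    return input.equals(reference);
            case "ne":    return !input.equals(reference);
            case "like":  return input.contains(reference);
            case "ilike": return input.toLowerCase().contains(reference.toLowerCase());
            default: throw new IllegalArgumentException("unknown op: " + op);
        }
    }
}
```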
class TextSimilarityGrader
A TextSimilarityGrader object which grades text based on similarity metrics.
EvaluationMetric evaluationMetric
The evaluation metric to use. One of cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
The text being graded.
The name of the grader.
The text being graded against.
The type of grader.
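As one concrete example of these metrics, a bag-of-words cosine similarity between the graded text and the reference might look like the sketch below. The grader's actual tokenization and metric implementations are not documented here; this only illustrates the general idea:

```java
import java.util.HashMap;
import java.util.Map;

// Bag-of-words cosine similarity between two texts (illustrative only;
// the service's tokenization and weighting are assumptions).
public class CosineSketch {
    static Map<String, Integer> counts(String text) {
        Map<String, Integer> c = new HashMap<>();
        for (String w : text.toLowerCase().split("\\s+")) c.merge(w, 1, Integer::sum);
        return c;
    }

    public static double cosine(String a, String b) {
        Map<String, Integer> ca = counts(a), cb = counts(b);
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : ca.entrySet()) {
            dot += e.getValue() * cb.getOrDefault(e.getKey(), 0);
            na += e.getValue() * e.getValue();
        }
        for (int v : cb.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```

Identical texts score 1.0 and texts with no shared words score 0.0, matching the usual [0, 1] reading of a similarity grade.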
class PythonGrader
A PythonGrader object that runs a python script on the input.
The name of the grader.
The source code of the python script.
The object type, which is always python.
The image tag to use for the python script.
class ScoreModelGrader
A ScoreModelGrader object that uses a model to assign a score to the input.
List<Input> input
The input messages evaluated by the grader. Supports text, output text, input image, and input audio content blocks, and may include template strings.
Content content
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class ResponseInputText
A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
class OutputText
A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
class InputImage
An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
class ResponseInputAudio
An audio input to the model.
InputAudio inputAudio
Base64-encoded audio data.
Format format
The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
List<EvalContentItem>
class ResponseInputText
A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
OutputText
The text output from the model.
The type of the output text. Always output_text.
InputImage
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
class ResponseInputAudio
An audio input to the model.
InputAudio inputAudio
Base64-encoded audio data.
Format format
The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
Role role
The role of the message input. One of user, assistant, system, or developer.
The type of the message input. Always message.
The model to use for the evaluation.
The name of the grader.
The object type, which is always score_model.
The range of the score. Defaults to [0, 1].
Optional<SamplingParams> samplingParams
The sampling parameters for the model.
The maximum number of tokens the grader model may generate in its response.
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
A seed value to initialize the randomness during sampling.
A higher temperature increases randomness in the outputs.
An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
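The Base64-encoded audio data expected by the ResponseInputAudio blocks above can be produced with the JDK's standard encoder. The byte payload below is a placeholder, not real wav data:

```java
import java.util.Base64;

// Encodes raw audio bytes to the Base64 string carried in an input_audio
// block, and decodes it back (placeholder bytes, not an actual file).
public class AudioEncodeSketch {
    public static String encode(byte[] audioBytes) {
        return Base64.getEncoder().encodeToString(audioBytes);
    }

    public static byte[] decode(String data) {
        return Base64.getDecoder().decode(data);
    }
}
```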
class MultiGrader
A MultiGrader object combines the output of multiple graders to produce a single score.
A formula to calculate the output based on grader results.
Graders graders
class StringCheckGrader
A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
The input text. This may include template strings.
The name of the grader.
Operation operation
The string check operation to perform. One of eq, ne, like, or ilike.
The reference text. This may include template strings.
The object type, which is always string_check.
class TextSimilarityGrader
A TextSimilarityGrader object which grades text based on similarity metrics.
EvaluationMetric evaluationMetric
The evaluation metric to use. One of cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
The text being graded.
The name of the grader.
The text being graded against.
The type of grader.
class PythonGrader
A PythonGrader object that runs a python script on the input.
The name of the grader.
The source code of the python script.
The object type, which is always python.
The image tag to use for the python script.
class ScoreModelGrader
A ScoreModelGrader object that uses a model to assign a score to the input.
List<Input> input
The input messages evaluated by the grader. Supports text, output text, input image, and input audio content blocks, and may include template strings.
Content content
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class ResponseInputText
A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
class OutputText
A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
class InputImage
An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
class ResponseInputAudio
An audio input to the model.
InputAudio inputAudio
Base64-encoded audio data.
Format format
The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
List<EvalContentItem>
class ResponseInputText
A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
OutputText
The text output from the model.
The type of the output text. Always output_text.
InputImage
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
class ResponseInputAudio
An audio input to the model.
InputAudio inputAudio
Base64-encoded audio data.
Format format
The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
Role role
The role of the message input. One of user, assistant, system, or developer.
The type of the message input. Always message.
The model to use for the evaluation.
The name of the grader.
The object type, which is always score_model.
The range of the score. Defaults to [0, 1].
Optional<SamplingParams> samplingParams
The sampling parameters for the model.
The maximum number of tokens the grader model may generate in its response.
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
A seed value to initialize the randomness during sampling.
A higher temperature increases randomness in the outputs.
An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
class LabelModelGrader
A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
List<Input> input
Content content
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
class ResponseInputText
A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
class OutputText
A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
class InputImage
An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
class ResponseInputAudio
An audio input to the model.
InputAudio inputAudio
Base64-encoded audio data.
Format format
The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
List<EvalContentItem>
class ResponseInputText
A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
OutputText
The text output from the model.
The type of the output text. Always output_text.
InputImage
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
class ResponseInputAudio
An audio input to the model.
InputAudio inputAudio
Base64-encoded audio data.
Format format
The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
Role role
The role of the message input. One of user, assistant, system, or developer.
The type of the message input. Always message.
The labels to assign to each item in the evaluation.
The model to use for the evaluation. Must support structured outputs.
The name of the grader.
The labels that indicate a passing result. Must be a subset of labels.
The object type, which is always label_model.
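The constraint that passingLabels must be a subset of labels can be checked client-side before submitting a job. The helper below is illustrative, not part of the SDK:

```java
import java.util.HashSet;
import java.util.List;

// Validates the LabelModelGrader constraint that every passing label
// also appears in the full label set (illustrative helper, not SDK API).
public class LabelSubsetCheck {
    public static boolean isValid(List<String> labels, List<String> passingLabels) {
        return new HashSet<>(labels).containsAll(passingLabels);
    }
}
```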
The name of the grader.
The object type, which is always multi.
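A MultiGrader's formula typically combines the named sub-grader scores arithmetically, e.g. a weighted average such as "0.7 * accuracy + 0.3 * style". The sketch below shows that combination idea directly; the service parses a formula string, which is not reimplemented here, and the grader names and weights are illustrative:

```java
import java.util.Map;

// Sketch of combining multiple grader scores into one, as a MultiGrader
// formula might. Formula-string parsing is omitted; this just applies
// per-grader weights (names and weights are hypothetical).
public class MultiGraderSketch {
    public static double combine(Map<String, Double> scores, Map<String, Double> weights) {
        double total = 0;
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            total += e.getValue() * scores.getOrDefault(e.getKey(), 0.0);
        }
        return total;
    }
}
```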
The hyperparameters used for the reinforcement fine-tuning job.
class SupervisedHyperparameters
The hyperparameters used for the fine-tuning job.
Optional<BatchSize> batchSize
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
Optional<LearningRateMultiplier> learningRateMultiplier
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
Optional<NEpochs> nEpochs
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
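The relationship between batchSize, nEpochs, and update frequency follows directly from the definitions above: one parameter update per batch, one full dataset pass per epoch. A quick sketch of that arithmetic (training internals beyond it are assumptions):

```java
// Number of parameter updates implied by the hyperparameters above:
// ceil(examples / batchSize) updates per epoch, times nEpochs epochs.
// Larger batches mean fewer, lower-variance updates per epoch.
public class UpdateCountSketch {
    public static long totalUpdates(long trainingExamples, long batchSize, long nEpochs) {
        long updatesPerEpoch = (trainingExamples + batchSize - 1) / batchSize; // ceiling division
        return updatesPerEpoch * nEpochs;
    }
}
```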
class SupervisedMethod
Configuration for the supervised fine-tuning method.
The hyperparameters used for the fine-tuning job.