Fine Tuning
Methods
Models
type DpoHyperparametersResp struct{…}
The hyperparameters used for the DPO fine-tuning job.
BatchSize DpoHyperparametersBatchSizeUnionResp (optional)
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
Beta DpoHyperparametersBetaUnionResp (optional)
The beta value for the DPO method. A higher beta value will increase the weight of the penalty between the policy and reference model.
LearningRateMultiplier DpoHyperparametersLearningRateMultiplierUnionResp (optional)
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
NEpochs DpoHyperparametersNEpochsUnionResp (optional)
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
type DpoMethod struct{…}
Configuration for the DPO fine-tuning method.
The hyperparameters used for the DPO fine-tuning job.
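As a concrete illustration of the shapes above, the sketch below builds a DPO method configuration as a plain map and serializes it to JSON. The snake_case wire names and the "auto" defaults are assumptions inferred from this reference, not verbatim SDK output.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dpoMethod sketches the payload shape implied by DpoMethod and
// DpoHyperparameters above. Field names ("batch_size", "beta", ...) are
// assumed snake_case wire names; "auto" stands in for server-chosen values.
func dpoMethod() map[string]any {
	return map[string]any{
		"type": "dpo",
		"dpo": map[string]any{
			"hyperparameters": map[string]any{
				"batch_size":               "auto", // or an integer
				"beta":                     0.1,    // higher beta penalizes divergence from the reference model more
				"learning_rate_multiplier": "auto",
				"n_epochs":                 3, // full passes through the training dataset
			},
		},
	}
}

func main() {
	b, _ := json.Marshal(dpoMethod())
	fmt.Println(string(b))
}
```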
type ReinforcementHyperparametersResp struct{…}
The hyperparameters used for the reinforcement fine-tuning job.
BatchSize ReinforcementHyperparametersBatchSizeUnionResp (optional)
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
ComputeMultiplier ReinforcementHyperparametersComputeMultiplierUnionResp (optional)
Multiplier on the amount of compute used for exploring the search space during training.
EvalInterval ReinforcementHyperparametersEvalIntervalUnionResp (optional)
The number of training steps between evaluation runs.
EvalSamples ReinforcementHyperparametersEvalSamplesUnionResp (optional)
Number of evaluation samples to generate per training step.
LearningRateMultiplier ReinforcementHyperparametersLearningRateMultiplierUnionResp (optional)
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
NEpochs ReinforcementHyperparametersNEpochsUnionResp (optional)
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
ReasoningEffort ReinforcementHyperparametersReasoningEffort (optional)
Level of reasoning effort.
type ReinforcementMethod struct{…}
Configuration for the reinforcement fine-tuning method.
Grader ReinforcementMethodGraderUnion
The grader used for the fine-tuning job.
type StringCheckGrader struct{…}
A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
The input text. This may include template strings.
The name of the grader.
Operation StringCheckGraderOperation
The string check operation to perform. One of eq, ne, like, or ilike.
The reference text. This may include template strings.
The object type, which is always string_check.
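The four operations can be illustrated locally. The API performs the comparison server-side; the semantics below (like as substring containment, ilike as case-insensitive containment) are an assumption for illustration only.

```go
package main

import (
	"fmt"
	"strings"
)

// stringCheck is a local sketch of the four string_check operations.
// The like/ilike semantics (containment) are assumed, not confirmed
// by this reference.
func stringCheck(op, input, reference string) bool {
	switch op {
	case "eq":
		return input == reference
	case "ne":
		return input != reference
	case "like":
		return strings.Contains(input, reference)
	case "ilike":
		return strings.Contains(strings.ToLower(input), strings.ToLower(reference))
	default:
		return false
	}
}

func main() {
	fmt.Println(stringCheck("ilike", "The Answer Is Paris", "paris"))
}
```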
type TextSimilarityGrader struct{…}
A TextSimilarityGrader object which grades text based on similarity metrics.
EvaluationMetric TextSimilarityGraderEvaluationMetric
The evaluation metric to use. One of cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
The text being graded.
The name of the grader.
The text being graded against.
The type of grader.
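A minimal text_similarity payload might look as follows. The snake_case field names and the {{…}} template placeholders are assumptions for illustration; only the fields documented above are used.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// textSimilarityGrader sketches a text_similarity grader payload.
// The template placeholder syntax in input/reference is assumed.
func textSimilarityGrader() map[string]any {
	return map[string]any{
		"type":              "text_similarity",
		"name":              "fuzzy_reference_match",
		"evaluation_metric": "fuzzy_match", // one of cosine, fuzzy_match, bleu, gleu, meteor, rouge_*
		"input":             "{{sample.output_text}}",    // the text being graded
		"reference":         "{{item.reference_answer}}", // the text being graded against
	}
}

func main() {
	b, _ := json.Marshal(textSimilarityGrader())
	fmt.Println(string(b))
}
```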
type PythonGrader struct{…}
A PythonGrader object that runs a python script on the input.
The name of the grader.
The source code of the python script.
The object type, which is always python.
The image tag to use for the python script.
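A python grader payload bundles the script source as a string. The grade(sample, item) signature used inside the source below is an assumption for illustration; consult the grader documentation for the exact contract the script must satisfy.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pythonGrader sketches a python grader payload. The contents of "source"
// (and its grade() signature) are illustrative assumptions.
func pythonGrader() map[string]any {
	return map[string]any{
		"type":   "python",
		"name":   "exact_match",
		"source": "def grade(sample, item):\n    return 1.0 if sample['output_text'] == item['reference'] else 0.0\n",
		// An optional image tag for the script's runtime could be added here.
	}
}

func main() {
	b, _ := json.Marshal(pythonGrader())
	fmt.Println(string(b))
}
```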
type ScoreModelGrader struct{…}
A ScoreModelGrader object that uses a model to assign a score to the input.
Input []ScoreModelGraderInput
The input messages evaluated by the grader. Supports text, output text, input image, and input audio content blocks, and may include template strings.
Content ScoreModelGraderInputContentUnion
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
type ResponseInputText struct{…}
A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
type ScoreModelGraderInputContentOutputText struct{…}
A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
type ScoreModelGraderInputContentInputImage struct{…}
An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
type ResponseInputAudio struct{…}
An audio input to the model.
InputAudio ResponseInputAudioInputAudio
Base64-encoded audio data.
Format string
The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
type GraderInputs []GraderInputUnion
A list of inputs, each of which may be either an input text, output text, input image, or input audio object.
type ResponseInputText struct{…}
A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
type GraderInputOutputText struct{…}
A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
type GraderInputInputImage struct{…}
An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
type ResponseInputAudio struct{…}
An audio input to the model.
InputAudio ResponseInputAudioInputAudio
Base64-encoded audio data.
Format string
The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
Role string
The role of the message input. One of user, assistant, system, or developer.
The type of the message input. Always message.
The model to use for the evaluation.
The name of the grader.
The object type, which is always score_model.
The range of the score. Defaults to [0, 1].
SamplingParams ScoreModelGraderSamplingParams (optional)
The sampling parameters for the model.
The maximum number of tokens the grader model may generate in its response.
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
A seed value to initialize the randomness during sampling.
A higher temperature increases randomness in the outputs.
An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
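Putting the pieces above together, a score_model grader carries a list of input messages plus a score range and sampling parameters. In the sketch below, the snake_case field names, the message/content shapes, the template placeholder syntax, and the grader model name are all assumptions inferred from this reference.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// scoreModelGrader sketches a score_model grader payload: a system prompt
// plus a templated user message, a score range, and sampling parameters.
func scoreModelGrader() map[string]any {
	return map[string]any{
		"type":  "score_model",
		"name":  "helpfulness",
		"model": "gpt-4o", // placeholder grader model
		"input": []map[string]any{
			{"role": "system", "type": "message",
				"content": map[string]any{"type": "input_text", "text": "Score the answer from 0 to 1."}},
			{"role": "user", "type": "message",
				"content": map[string]any{"type": "input_text", "text": "{{sample.output_text}}"}},
		},
		"range": []float64{0, 1}, // defaults to [0, 1]
		"sampling_params": map[string]any{
			"temperature": 0, // deterministic grading
			"seed":        42,
		},
	}
}

func main() {
	b, _ := json.Marshal(scoreModelGrader())
	fmt.Println(string(b))
}
```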
type MultiGrader struct{…}
A MultiGrader object combines the output of multiple graders to produce a single score.
A formula to calculate the output based on grader results.
Graders MultiGraderGradersUnion
One of StringCheckGrader, TextSimilarityGrader, PythonGrader, ScoreModelGrader, or LabelModelGrader. The first four are documented above under ReinforcementMethodGraderUnion.
type LabelModelGrader struct{…}
A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
Input []LabelModelGraderInput
Content LabelModelGraderInputContentUnion
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
type ResponseInputText struct{…}
A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
type LabelModelGraderInputContentOutputText struct{…}
A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
type LabelModelGraderInputContentInputImage struct{…}
An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
type ResponseInputAudio struct{…}
An audio input to the model.
InputAudio ResponseInputAudioInputAudio
Base64-encoded audio data.
Format string
The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
type GraderInputs []GraderInputUnion
A list of inputs, each of which may be either an input text, output text, input image, or input audio object.
type ResponseInputText struct{…}
A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
type GraderInputOutputText struct{…}
A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
type GraderInputInputImage struct{…}
An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
type ResponseInputAudio struct{…}
An audio input to the model.
InputAudio ResponseInputAudioInputAudio
Base64-encoded audio data.
Format string
The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
Role string
The role of the message input. One of user, assistant, system, or developer.
The type of the message input. Always message.
The labels to assign to each item in the evaluation.
The model to use for the evaluation. Must support structured outputs.
The name of the grader.
The labels that indicate a passing result. Must be a subset of labels.
The object type, which is always label_model.
The name of the grader.
The object type, which is always multi.
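A multi grader ties named sub-graders together with an output formula. The sketch below averages a string_check and a text_similarity grader. Representing graders as a name-keyed map and the calculate_output expression syntax are both assumptions; this reference only says a formula combines the grader results.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// multiGrader sketches a multi grader combining two sub-graders.
// The graders-as-map shape and the formula syntax are assumed.
func multiGrader() map[string]any {
	return map[string]any{
		"type": "multi",
		"name": "blended_score",
		"graders": map[string]any{
			"exact": map[string]any{
				"type": "string_check", "name": "exact",
				"operation": "eq",
				"input":     "{{sample.output_text}}",
				"reference": "{{item.reference}}",
			},
			"fuzzy": map[string]any{
				"type": "text_similarity", "name": "fuzzy",
				"evaluation_metric": "fuzzy_match",
				"input":             "{{sample.output_text}}",
				"reference":         "{{item.reference}}",
			},
		},
		"calculate_output": "0.5 * exact + 0.5 * fuzzy", // assumed formula syntax
	}
}

func main() {
	b, _ := json.Marshal(multiGrader())
	fmt.Println(string(b))
}
```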
The hyperparameters used for the reinforcement fine-tuning job.
type SupervisedHyperparametersResp struct{…}
The hyperparameters used for the fine-tuning job.
BatchSize SupervisedHyperparametersBatchSizeUnionResp (optional)
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
LearningRateMultiplier SupervisedHyperparametersLearningRateMultiplierUnionResp (optional)
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
NEpochs SupervisedHyperparametersNEpochsUnionResp (optional)
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
type SupervisedMethod struct{…}
Configuration for the supervised fine-tuning method.
The hyperparameters used for the fine-tuning job.
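The batch size and epoch settings interact simply: one epoch is one full pass over the dataset, consuming one batch per optimizer step. The back-of-the-envelope sketch below estimates total steps; it is not an official formula and ignores any server-side adjustments.

```go
package main

import "fmt"

// trainingSteps estimates optimizer steps for a supervised job:
// ceil(examples / batchSize) steps per epoch, times nEpochs.
func trainingSteps(examples, batchSize, nEpochs int) int {
	stepsPerEpoch := (examples + batchSize - 1) / batchSize // ceiling division
	return stepsPerEpoch * nEpochs
}

func main() {
	// 1000 examples at batch size 8 is 125 steps per epoch.
	fmt.Println(trainingSteps(1000, 8, 3))
}
```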
Jobs
Create fine-tuning job
List fine-tuning jobs
Retrieve fine-tuning job
List fine-tuning events
Cancel fine-tuning
Pause fine-tuning
Resume fine-tuning
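For orientation, the create-job operation above corresponds to a POST against the public REST endpoint; the SDK wraps this for you. The sketch below only builds the request (it does not send it). The model name and file ID are placeholders.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// newCreateJobRequest builds a raw create-fine-tuning-job request.
// Body fields follow the public REST API; values are placeholders.
func newCreateJobRequest(apiKey string) (*http.Request, error) {
	body, err := json.Marshal(map[string]any{
		"model":         "gpt-4o-mini-2024-07-18", // placeholder base model
		"training_file": "file-abc123",            // placeholder uploaded file ID
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", "https://api.openai.com/v1/fine_tuning/jobs", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, _ := newCreateJobRequest("sk-placeholder")
	fmt.Println(req.Method, req.URL.String())
}
```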
Models
type FineTuningJob struct{…}
The fine_tuning.job object represents a fine-tuning job that has been created through the API.
The object identifier, which can be referenced in the API endpoints.
The Unix timestamp (in seconds) for when the fine-tuning job was created.
Error FineTuningJobError
For fine-tuning jobs that have failed, this will contain more information on the cause of the failure.
A machine-readable error code.
A human-readable error message.
The parameter that was invalid, usually training_file or validation_file. This field will be null if the failure was not parameter-specific.
The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.
The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.
Hyperparameters FineTuningJobHyperparameters
The hyperparameters used for the fine-tuning job. This value will only be returned when running supervised jobs.
BatchSize FineTuningJobHyperparametersBatchSizeUnion (optional)
Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.
LearningRateMultiplier FineTuningJobHyperparametersLearningRateMultiplierUnion (optional)
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.
NEpochs FineTuningJobHyperparametersNEpochsUnion (optional)
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
The base model that is being fine-tuned.
The object type, which is always "fine_tuning.job".
The organization that owns the fine-tuning job.
The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the Files API.
The seed used for the fine-tuning job.
Status FineTuningJobStatus
The current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled.
The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.
The file ID used for training. You can retrieve the training data with the Files API.
The file ID used for validation. You can retrieve the validation results with the Files API.
The Unix timestamp (in seconds) for when the fine-tuning job is estimated to finish. The value will be null if the fine-tuning job is not running.
A list of integrations to enable for this fine-tuning job.
The type of the integration being enabled for the fine-tuning job
The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc) to be associated with your run.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Method FineTuningJobMethod (optional)
The method used for fine-tuning.
Type string
The type of method. One of supervised, dpo, or reinforcement.
Configuration for the DPO fine-tuning method.
Configuration for the reinforcement fine-tuning method.
Configuration for the supervised fine-tuning method.
type FineTuningJobEvent struct{…}
Fine-tuning job event object.
The object identifier.
The Unix timestamp (in seconds) for when the fine-tuning job was created.
Level FineTuningJobEventLevel
The log level of the event.
The message of the event.
The object type, which is always "fine_tuning.job.event".
The data associated with the event.
Type FineTuningJobEventType (optional)
The type of event.
type FineTuningJobWandbIntegration struct{…}
The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc) to be associated with your run.
The name of the project that the new run will be created under.
The entity to use for the run. This allows you to set the team or username of the WandB user that you would like associated with the run. If not set, the default entity for the registered WandB API key is used.
A display name to set for the run. If not set, we will use the Job ID as the name.
A list of tags to be attached to the newly created run. These tags are passed through directly to WandB. Some default tags are generated by OpenAI: "openai/finetune", "openai/{base-model}", "openai/{ftjob-abcdef}".
type FineTuningJobWandbIntegrationObject struct{…}
The type of the integration being enabled for the fine-tuning job
The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc) to be associated with your run.
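An integration entry pairs the integration type with its settings, as described above. In the sketch below, the "wandb" nesting key and the snake_case names are assumptions inferred from this reference; the project, name, entity, and tag values are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// wandbIntegration sketches an integrations[] entry for Weights and Biases.
// Only the settings documented above are included; all values are placeholders.
func wandbIntegration() map[string]any {
	return map[string]any{
		"type": "wandb",
		"wandb": map[string]any{
			"project": "my-finetune-project",       // run is created under this project
			"name":    "ft-run-1",                  // optional display name (defaults to the job ID)
			"entity":  "my-team",                   // optional team or username
			"tags":    []string{"openai/finetune"}, // passed through to WandB
		},
	}
}

func main() {
	b, _ := json.Marshal(wandbIntegration())
	fmt.Println(string(b))
}
```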
Checkpoints
List fine-tuning checkpoints
Models
type FineTuningJobCheckpoint struct{…}
The fine_tuning.job.checkpoint object represents a model checkpoint for a fine-tuning job that is ready to use.
The checkpoint identifier, which can be referenced in the API endpoints.
The Unix timestamp (in seconds) for when the checkpoint was created.
The name of the fine-tuned checkpoint model that is created.
The name of the fine-tuning job that this checkpoint was created from.
Metrics FineTuningJobCheckpointMetrics
Metrics at the step number during the fine-tuning job.
The object type, which is always "fine_tuning.job.checkpoint".
The step number that the checkpoint was created at.
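To make the checkpoint fields concrete, the sketch below decodes a checkpoint JSON into a minimal struct. The snake_case JSON tags are inferred from the field descriptions above, and the example JSON is fabricated for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Checkpoint mirrors a subset of FineTuningJobCheckpoint; JSON tag names
// are assumptions inferred from the descriptions above.
type Checkpoint struct {
	ID                       string `json:"id"`
	CreatedAt                int64  `json:"created_at"`
	FineTunedModelCheckpoint string `json:"fine_tuned_model_checkpoint"`
	FineTuningJobID          string `json:"fine_tuning_job_id"`
	Object                   string `json:"object"`
	StepNumber               int    `json:"step_number"`
}

// parseCheckpoint decodes a checkpoint payload into the struct above.
func parseCheckpoint(data []byte) (Checkpoint, error) {
	var c Checkpoint
	err := json.Unmarshal(data, &c)
	return c, err
}

func main() {
	raw := []byte(`{"id":"ftckpt_1","object":"fine_tuning.job.checkpoint","step_number":2000}`)
	c, _ := parseCheckpoint(raw)
	fmt.Println(c.ID, c.StepNumber)
}
```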