Jobs

Create fine-tuning job
POST/fine_tuning/jobs
List fine-tuning jobs
GET/fine_tuning/jobs
Retrieve fine-tuning job
GET/fine_tuning/jobs/{fine_tuning_job_id}
List fine-tuning events
GET/fine_tuning/jobs/{fine_tuning_job_id}/events
Cancel fine-tuning
POST/fine_tuning/jobs/{fine_tuning_job_id}/cancel
Pause fine-tuning
POST/fine_tuning/jobs/{fine_tuning_job_id}/pause
Resume fine-tuning
POST/fine_tuning/jobs/{fine_tuning_job_id}/resume
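
For orientation, here is a minimal sketch of driving these endpoints from Python, assuming the official openai client with OPENAI_API_KEY set in the environment; the model name and file ID are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# POST /fine_tuning/jobs -- create a job (model and training_file are placeholders)
job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",
    training_file="file-abc123",
)

# GET /fine_tuning/jobs -- list recent jobs
for j in client.fine_tuning.jobs.list(limit=10):
    print(j.id, j.status)

# GET /fine_tuning/jobs/{fine_tuning_job_id} -- retrieve a single job
job = client.fine_tuning.jobs.retrieve(job.id)

# GET /fine_tuning/jobs/{fine_tuning_job_id}/events -- recent status events
for event in client.fine_tuning.jobs.list_events(job.id, limit=20):
    print(event.level, event.message)

# POST /fine_tuning/jobs/{fine_tuning_job_id}/cancel -- stop the job
client.fine_tuning.jobs.cancel(job.id)
```

Pause and resume are plain POSTs to the /pause and /resume paths above; recent client versions expose them as jobs.pause() and jobs.resume(), but verify against your installed version.
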
Models
FineTuningJob = object { id, created_at, error, 16 more }

The fine_tuning.job object represents a fine-tuning job that has been created through the API.

id: string

The object identifier, which can be referenced in the API endpoints.

created_at: number

The Unix timestamp (in seconds) for when the fine-tuning job was created.

error: object { code, message, param }

For fine-tuning jobs that have failed, this will contain more information on the cause of the failure.

code: string

A machine-readable error code.

message: string

A human-readable error message.

param: string

The parameter that was invalid, usually training_file or validation_file. This field will be null if the failure was not parameter-specific.

fine_tuned_model: string

The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.

finished_at: number

The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.

hyperparameters: object { batch_size, learning_rate_multiplier, n_epochs }

The hyperparameters used for the fine-tuning job. This value will only be returned when running supervised jobs.

batch_size: optional "auto" or number

Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance.

Accepts one of the following:
"auto"
number
learning_rate_multiplier: optional "auto" or number

Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting.

Accepts one of the following:
"auto"
number
n_epochs: optional "auto" or number

The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.

Accepts one of the following:
"auto"
number
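
As an illustration only, these hyperparameters might be supplied when creating a supervised job; "auto" leaves the choice to the service, and the create-time parameter shape below is an assumption based on this schema.

```python
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",       # placeholder base model
    training_file="file-abc123",          # placeholder file ID
    hyperparameters={
        "n_epochs": 3,                    # or "auto"
        "batch_size": "auto",
        "learning_rate_multiplier": 0.1,  # or "auto"
    },
)
print(job.hyperparameters)
```
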
model: string

The base model that is being fine-tuned.

object: "fine_tuning.job"

The object type, which is always "fine_tuning.job".

organization_id: string

The organization that owns the fine-tuning job.

result_files: array of string

The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the Files API.

seed: number

The seed used for the fine-tuning job.

status: "validating_files" or "queued" or "running" or 3 more

The current status of the fine-tuning job, which can be one of validating_files, queued, running, succeeded, failed, or cancelled.

Accepts one of the following:
"validating_files"
"queued"
"running"
"succeeded"
"failed"
"cancelled"
trained_tokens: number

The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.

training_file: string

The file ID used for training. You can retrieve the training data with the Files API.

validation_file: string

The file ID used for validation. You can retrieve the validation results with the Files API.
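
The file IDs above (result_files, training_file, validation_file) are ordinary file objects, so they can be fetched through the Files API. A minimal sketch, assuming the Python client's files.content() helper; the results file is typically CSV, though that is not guaranteed here.

```python
from openai import OpenAI

client = OpenAI()
job = client.fine_tuning.jobs.retrieve("ftjob-abc123")  # placeholder job ID

for file_id in job.result_files:
    content = client.files.content(file_id)   # GET /files/{file_id}/content
    content.write_to_file(f"{file_id}.csv")   # helper on the binary response
```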

estimated_finish: optional number

The Unix timestamp (in seconds) for when the fine-tuning job is estimated to finish. The value will be null if the fine-tuning job is not running.

integrations: optional array of FineTuningJobWandbIntegrationObject { type, wandb }

A list of integrations to enable for this fine-tuning job.

type: "wandb"

The type of the integration being enabled for the fine-tuning job.

wandb: FineTuningJobWandbIntegration { project, entity, name, tags }

The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc.) to be associated with your run.

metadata: optional Metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
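
To make the limits concrete (at most 16 pairs, keys up to 64 characters, values up to 512), metadata could be attached when the job is created; treating metadata as a create-time parameter is an assumption based on this field.

```python
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",    # placeholder
    training_file="file-abc123",       # placeholder
    metadata={                         # at most 16 pairs
        "team": "search-ranking",      # key <= 64 chars, value <= 512 chars
        "experiment": "run-042",
    },
)
```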

method: optional object { type, dpo, reinforcement, supervised }

The method used for fine-tuning.

type: "supervised" or "dpo" or "reinforcement"

The type of method. Is either supervised, dpo, or reinforcement.

Accepts one of the following:
"supervised"
"dpo"
"reinforcement"
dpo: optional DpoMethod { hyperparameters }

Configuration for the DPO fine-tuning method.

reinforcement: optional ReinforcementMethod { grader, hyperparameters }

Configuration for the reinforcement fine-tuning method.

supervised: optional SupervisedMethod { hyperparameters }

Configuration for the supervised fine-tuning method.
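
A sketch of choosing a method at creation time, assuming the create request accepts an object mirroring this shape; the dpo and reinforcement variants take their own hyperparameters (and, for reinforcement, a grader) in the same position.

```python
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",    # placeholder
    training_file="file-abc123",       # placeholder
    method={
        "type": "supervised",
        "supervised": {"hyperparameters": {"n_epochs": 3}},
    },
)
print(job.method)
```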

FineTuningJobEvent = object { id, created_at, level, 4 more }

The fine_tuning.job.event object represents an event emitted during a fine-tuning job.

id: string

The object identifier.

created_at: number

The Unix timestamp (in seconds) for when the fine-tuning job event was created.

level: "info" or "warn" or "error"

The log level of the event.

Accepts one of the following:
"info"
"warn"
"error"
message: string

The message of the event.

object: "fine_tuning.job.event"

The object type, which is always "fine_tuning.job.event".

data: optional unknown

The data associated with the event.

type: optional "message" or "metrics"

The type of event.

Accepts one of the following:
"message"
"metrics"
FineTuningJobWandbIntegration = object { project, entity, name, tags }

The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc.) to be associated with your run.

project: string

The name of the project that the new run will be created under.

entity: optional string

The entity to use for the run. This allows you to set the team or username of the WandB user that you would like associated with the run. If not set, the default entity for the registered WandB API key is used.

name: optional string

A display name to set for the run. If not set, we will use the Job ID as the name.

tags: optional array of string

A list of tags to be attached to the newly created run. These tags are passed through directly to WandB. Some default tags are generated by OpenAI: "openai/finetune", "openai/{base-model}", "openai/{ftjob-abcdef}".
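
To show how these fields fit together, a hedged sketch of enabling the integration at job creation; the integrations parameter on the create call is assumed to accept this shape, and all values are placeholders.

```python
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",    # placeholder
    training_file="file-abc123",       # placeholder
    integrations=[
        {
            "type": "wandb",
            "wandb": {
                "project": "my-finetunes",  # required: WandB project
                "entity": "my-team",        # optional: team or username
                "name": "sweep-lr-0.1",     # optional display name (defaults to job ID)
                "tags": ["experiment-7"],   # merged with OpenAI's default tags
            },
        }
    ],
)
```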

FineTuningJobWandbIntegrationObject = object { type, wandb }
type: "wandb"

The type of the integration being enabled for the fine-tuning job.

wandb: FineTuningJobWandbIntegration { project, entity, name, tags }

The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc.) to be associated with your run.

Checkpoints

List fine-tuning checkpoints
GET/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints
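
A minimal sketch of listing checkpoints for a job with the Python client; the checkpoints.list() helper is assumed to be available in recent client versions, and the job ID is a placeholder.

```python
from openai import OpenAI

client = OpenAI()

# GET /fine_tuning/jobs/{fine_tuning_job_id}/checkpoints
for ckpt in client.fine_tuning.jobs.checkpoints.list("ftjob-abc123"):
    print(ckpt.step_number, ckpt.fine_tuned_model_checkpoint)
```
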
Models
FineTuningJobCheckpoint = object { id, created_at, fine_tuned_model_checkpoint, 4 more }

The fine_tuning.job.checkpoint object represents a model checkpoint for a fine-tuning job that is ready to use.

id: string

The checkpoint identifier, which can be referenced in the API endpoints.

created_at: number

The Unix timestamp (in seconds) for when the checkpoint was created.

fine_tuned_model_checkpoint: string

The name of the fine-tuned checkpoint model that is created.

fine_tuning_job_id: string

The ID of the fine-tuning job that this checkpoint was created from.

metrics: object { full_valid_loss, full_valid_mean_token_accuracy, step, 4 more }

Metrics at the step number during the fine-tuning job.

full_valid_loss: optional number
full_valid_mean_token_accuracy: optional number
step: optional number
train_loss: optional number
train_mean_token_accuracy: optional number
valid_loss: optional number
valid_mean_token_accuracy: optional number
object: "fine_tuning.job.checkpoint"

The object type, which is always "fine_tuning.job.checkpoint".

step_number: number

The step number that the checkpoint was created at.
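
For example, one might pick the checkpoint with the lowest validation loss, preferring the full-validation metric when it is present; a sketch under those assumptions, with a placeholder job ID.

```python
from openai import OpenAI

client = OpenAI()

checkpoints = list(client.fine_tuning.jobs.checkpoints.list("ftjob-abc123"))

def val_loss(ckpt):
    # Prefer the full validation-set loss; fall back to the step-level one.
    m = ckpt.metrics
    loss = m.full_valid_loss if m.full_valid_loss is not None else m.valid_loss
    return loss if loss is not None else float("inf")

best = min(checkpoints, key=val_loss)
print("best checkpoint:", best.fine_tuned_model_checkpoint, "at step", best.step_number)
```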