Batches
Create batch
Retrieve batch
Cancel batch
List batches
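The four endpoints above map directly onto the official openai Python SDK. A minimal sketch, assuming an OPENAI_API_KEY in the environment and a prepared batch_input.jsonl file of requests (both placeholder names):

```python
from openai import OpenAI

client = OpenAI()

# Upload the JSONL input file with purpose="batch", then create the batch.
input_file = client.files.create(
    file=open("batch_input.jsonl", "rb"),
    purpose="batch",
)
batch = client.batches.create(
    input_file_id=input_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

# Retrieve the batch by ID to check its current state.
batch = client.batches.retrieve(batch.id)

# Cancel an in-progress batch.
client.batches.cancel(batch.id)

# List batches, newest first, paginated.
for b in client.batches.list(limit=10):
    print(b.id, b.status)
```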
Models
Batch = object { id, completion_window, created_at, 19 more }
completion_window: The time frame within which the batch should be processed.
created_at: The Unix timestamp (in seconds) for when the batch was created.
endpoint: The OpenAI API endpoint used by the batch.
input_file_id: The ID of the input file for the batch.
object: The object type, which is always batch.
status: "validating" or "failed" or "in_progress" or 5 more
The current status of the batch.
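Because status moves through several states before a batch finishes, a common pattern is to poll until it reaches a terminal state. A minimal sketch, continuing with the client and batch variables from the example above:

```python
import time

# Poll until the batch reaches a terminal state. The 60-second interval
# is an arbitrary choice; a batch can take up to its completion window.
TERMINAL = {"completed", "failed", "expired", "cancelled"}

while True:
    batch = client.batches.retrieve(batch.id)
    if batch.status in TERMINAL:
        break
    time.sleep(60)

print("final status:", batch.status)
```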
cancelled_at: The Unix timestamp (in seconds) for when the batch was cancelled.
cancelling_at: The Unix timestamp (in seconds) for when the batch started cancelling.
completed_at: The Unix timestamp (in seconds) for when the batch was completed.
error_file_id: The ID of the file containing the outputs of requests with errors.
errors: optional object { data, object }
data: optional array of object { code, line, message, param }
code: An error code identifying the error type.
line: The line number of the input file where the error occurred, if applicable.
message: A human-readable message providing more details about the error.
param: The name of the parameter that caused the error, if applicable.
object: The object type, which is always list.
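When a batch fails validation, the errors field carries these per-line details. A short sketch of reading them, assuming the batch variable from the earlier examples:

```python
# Print per-line validation errors from a failed batch, if any.
if batch.errors and batch.errors.data:
    for err in batch.errors.data:
        print(f"line {err.line}: [{err.code}] {err.message} (param: {err.param})")
```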
expired_at: The Unix timestamp (in seconds) for when the batch expired.
expires_at: The Unix timestamp (in seconds) for when the batch will expire.
failed_at: The Unix timestamp (in seconds) for when the batch failed.
finalizing_at: The Unix timestamp (in seconds) for when the batch started finalizing.
in_progress_at: The Unix timestamp (in seconds) for when the batch started processing.
metadata: Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
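As an illustration, metadata can be attached at creation time; the keys and values below are placeholders:

```python
# Attach metadata when creating a batch (keys <= 64 chars, values <= 512).
batch = client.batches.create(
    input_file_id=input_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
    metadata={"project": "nightly-eval", "owner": "data-team"},
)
```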
model: Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.
output_file_id: The ID of the file containing the outputs of successfully executed requests.
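Once the batch completes, the output file (and the error file, if present) can be downloaded through the Files API. A minimal sketch, continuing the earlier example:

```python
import json

# Download and parse successful results once the batch has completed.
if batch.output_file_id:
    content = client.files.content(batch.output_file_id)
    for line in content.text.splitlines():
        result = json.loads(line)
        print(result["custom_id"], result["response"]["status_code"])
```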
request_counts: optional object { completed, failed, total }
The request counts for different statuses within the batch.
completed: Number of requests that have been completed successfully.
failed: Number of requests that have failed.
total: Total number of requests in the batch.
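request_counts makes it easy to report progress while polling. A small sketch, using the same batch variable as above:

```python
# Report batch progress from request_counts while the batch runs.
counts = batch.request_counts
if counts and counts.total:
    done = counts.completed + counts.failed
    print(f"{done}/{counts.total} requests processed ({counts.failed} failed)")
```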
usage: Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.
BatchUsage = object { input_tokens, input_tokens_details, output_tokens, 2 more }
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.
input_tokens: The number of input tokens.
input_tokens_details: object { cached_tokens }
A detailed breakdown of the input tokens.
cached_tokens: The number of tokens that were retrieved from the cache. See the prompt caching guide for details.
output_tokens: The number of output tokens.
output_tokens_details: object { reasoning_tokens }
A detailed breakdown of the output tokens.
reasoning_tokens: The number of reasoning tokens.
total_tokens: The total number of tokens used.
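For batches recent enough to carry usage, token consumption can be read directly off the object. A short sketch under the same assumptions as the earlier examples; getattr guards against older SDK versions or batches without the field:

```python
# Summarize token usage for a completed batch (only batches created
# after September 7, 2025 populate the usage field).
usage = getattr(batch, "usage", None)
if usage:
    print(f"input tokens:  {usage.input_tokens} "
          f"({usage.input_tokens_details.cached_tokens} cached)")
    print(f"output tokens: {usage.output_tokens} "
          f"({usage.output_tokens_details.reasoning_tokens} reasoning)")
    print(f"total tokens:  {usage.total_tokens}")
```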