Batches

Create batch
Batch batches().create(BatchCreateParams params, RequestOptions requestOptions = RequestOptions.none())
POST /batches
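A minimal sketch of creating a batch with the official openai-java SDK. The builder and enum names (`Endpoint.V1_CHAT_COMPLETIONS`, `CompletionWindow._24H`) and package paths are assumptions that may differ between SDK versions; the input file ID is a placeholder.

```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.batches.Batch;
import com.openai.models.batches.BatchCreateParams;

public class CreateBatchExample {
    public static void main(String[] args) {
        // Reads OPENAI_API_KEY from the environment.
        OpenAIClient client = OpenAIOkHttpClient.fromEnv();

        BatchCreateParams params = BatchCreateParams.builder()
                .inputFileId("file-abc123") // a previously uploaded JSONL file (placeholder ID)
                .endpoint(BatchCreateParams.Endpoint.V1_CHAT_COMPLETIONS)
                .completionWindow(BatchCreateParams.CompletionWindow._24H)
                .build();

        Batch batch = client.batches().create(params);
        System.out.println(batch.id() + " " + batch.status());
    }
}
```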
Retrieve batch
Batch batches().retrieve(BatchRetrieveParams params = BatchRetrieveParams.none(), RequestOptions requestOptions = RequestOptions.none())
GET /batches/{batch_id}
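A retrieval sketch, assuming an already-constructed `OpenAIClient` and a placeholder batch ID; the `batchId` builder method is an assumption based on the SDK's usual conventions.

```java
import com.openai.models.batches.Batch;
import com.openai.models.batches.BatchRetrieveParams;

// Assuming `client` is an existing OpenAIClient:
BatchRetrieveParams params = BatchRetrieveParams.builder()
        .batchId("batch_abc123") // placeholder ID
        .build();
Batch batch = client.batches().retrieve(params);
System.out.println(batch.status());
```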
Cancel batch
Batch batches().cancel(BatchCancelParams params = BatchCancelParams.none(), RequestOptions requestOptions = RequestOptions.none())
POST /batches/{batch_id}/cancel
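A cancellation sketch under the same assumptions as above. Note that cancellation is not immediate: the returned batch typically moves through the cancelling status before reaching cancelled.

```java
import com.openai.models.batches.Batch;
import com.openai.models.batches.BatchCancelParams;

// Assuming `client` is an existing OpenAIClient:
Batch cancelled = client.batches().cancel(
        BatchCancelParams.builder().batchId("batch_abc123").build());
System.out.println(cancelled.status());
```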
List batches
BatchListPage batches().list(BatchListParams params = BatchListParams.none(), RequestOptions requestOptions = RequestOptions.none())
GET /batches
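A listing sketch. The `limit` builder method and the page's `data()` accessor are assumptions based on the SDK's pagination conventions; check the `BatchListPage` javadoc for the exact shape.

```java
import com.openai.models.batches.Batch;
import com.openai.models.batches.BatchListPage;
import com.openai.models.batches.BatchListParams;

// Assuming `client` is an existing OpenAIClient:
BatchListPage page = client.batches().list(
        BatchListParams.builder().limit(20L).build());
for (Batch batch : page.data()) {
    System.out.println(batch.id() + " " + batch.status());
}
```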
Models
class Batch:
String id
String completionWindow

The time frame within which the batch should be processed.

long createdAt

The Unix timestamp (in seconds) for when the batch was created.

String endpoint

The OpenAI API endpoint used by the batch.

String inputFileId

The ID of the input file for the batch.

JsonValue object_ (constant "batch")

The object type, which is always batch.

Status status

The current status of the batch.

Accepts one of the following:
VALIDATING("validating")
FAILED("failed")
IN_PROGRESS("in_progress")
FINALIZING("finalizing")
COMPLETED("completed")
EXPIRED("expired")
CANCELLING("cancelling")
CANCELLED("cancelled")
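Of these, four statuses are terminal: once a batch reaches completed, failed, expired, or cancelled, its state no longer changes. A plain-Java helper for deciding when to stop polling, using the wire values above (the helper class and method names are ours, not part of the SDK):

```java
import java.util.Set;

public class BatchStatusUtil {
    // Terminal wire values from the Status enum above.
    private static final Set<String> TERMINAL =
            Set.of("completed", "failed", "expired", "cancelled");

    /** Returns true when the batch will not change state again. */
    public static boolean isTerminal(String status) {
        return TERMINAL.contains(status);
    }

    public static void main(String[] args) {
        System.out.println(isTerminal("in_progress")); // false
        System.out.println(isTerminal("completed"));   // true
    }
}
```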
Optional<Long> cancelledAt

The Unix timestamp (in seconds) for when the batch was cancelled.

Optional<Long> cancellingAt

The Unix timestamp (in seconds) for when the batch started cancelling.

Optional<Long> completedAt

The Unix timestamp (in seconds) for when the batch was completed.

Optional<String> errorFileId

The ID of the file containing the outputs of requests with errors.

Optional<Errors> errors

  Optional<List<BatchError>> data

  Each entry carries the fields of BatchError:

    Optional<String> code

    An error code identifying the error type.

    Optional<Long> line

    The line number of the input file where the error occurred, if applicable.

    Optional<String> message

    A human-readable message providing more details about the error.

    Optional<String> param

    The name of the parameter that caused the error, if applicable.

  Optional<String> object_

  The object type, which is always list.

Optional<Long> expiredAt

The Unix timestamp (in seconds) for when the batch expired.

Optional<Long> expiresAt

The Unix timestamp (in seconds) for when the batch will expire.

Optional<Long> failedAt

The Unix timestamp (in seconds) for when the batch failed.

Optional<Long> finalizingAt

The Unix timestamp (in seconds) for when the batch started finalizing.

Optional<Long> inProgressAt

The Unix timestamp (in seconds) for when the batch started processing.

Optional<Metadata> metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

Optional<String> model

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

Optional<String> outputFileId

The ID of the file containing the outputs of successfully executed requests.

Optional<BatchRequestCounts> requestCounts

The request counts for different statuses within the batch.

Optional<BatchUsage> usage

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

class BatchError:
Optional<String> code

An error code identifying the error type.

Optional<Long> line

The line number of the input file where the error occurred, if applicable.

Optional<String> message

A human-readable message providing more details about the error.

Optional<String> param

The name of the parameter that caused the error, if applicable.

class BatchRequestCounts:

The request counts for different statuses within the batch.

long completed

Number of requests that have been completed successfully.

long failed

Number of requests that have failed.

long total

Total number of requests in the batch.
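While a batch is in progress, completed + failed is at most total; the two are equal only once every request has reached a final state. A small pure-Java helper computing progress from these counts (the class and method names are ours, not part of the SDK):

```java
public class BatchProgress {
    /**
     * Fraction of requests that have reached a final state
     * (completed or failed), in the range [0.0, 1.0].
     */
    public static double processedFraction(long completed, long failed, long total) {
        if (total == 0) return 0.0; // avoid division by zero on an empty batch
        return (double) (completed + failed) / total;
    }

    public static void main(String[] args) {
        System.out.println(processedFraction(90, 10, 200)); // 0.5
    }
}
```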

class BatchUsage:

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

long inputTokens

The number of input tokens.

InputTokensDetails inputTokensDetails

A detailed breakdown of the input tokens.

  long cachedTokens

  The number of tokens that were retrieved from the cache. More on prompt caching.

long outputTokens

The number of output tokens.

OutputTokensDetails outputTokensDetails

A detailed breakdown of the output tokens.

  long reasoningTokens

  The number of reasoning tokens.

long totalTokens

The total number of tokens used.
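Assuming the usual invariant that totalTokens equals inputTokens plus outputTokens, with each detail breakdown bounded by its parent count (an assumption, not stated by this reference), a sanity check over a usage record can be sketched as plain Java (the class and method names are ours):

```java
public class BatchUsageCheck {
    /**
     * Checks the assumed invariants of a BatchUsage record:
     * totalTokens == inputTokens + outputTokens,
     * 0 <= cachedTokens <= inputTokens,
     * 0 <= reasoningTokens <= outputTokens.
     */
    public static boolean isConsistent(long inputTokens, long cachedTokens,
                                       long outputTokens, long reasoningTokens,
                                       long totalTokens) {
        return totalTokens == inputTokens + outputTokens
                && cachedTokens >= 0 && cachedTokens <= inputTokens
                && reasoningTokens >= 0 && reasoningTokens <= outputTokens;
    }

    public static void main(String[] args) {
        System.out.println(isConsistent(100, 20, 50, 30, 150)); // true
    }
}
```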