
List batches

batches.list(**kwargs: BatchListParams) -> SyncCursorPage[Batch]
GET /batches

List your organization's batches.

Parameters
after: Optional[str]

A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.

limit: Optional[int]

A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
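Taken together, `after` and `limit` support cursor pagination: request a page, then pass the last object's ID as `after` to fetch the next one. A minimal sketch of walking every page follows; the helper name `list_all_batches` is illustrative and not part of the SDK (the SDK's returned `SyncCursorPage` can also be iterated directly to auto-paginate).

```python
def list_all_batches(list_fn, limit=100):
    """Collect every batch by following the `after` cursor until
    has_more is False. `list_fn` is any callable with the same
    keyword interface as client.batches.list."""
    items, cursor = [], None
    while True:
        kwargs = {"limit": limit}
        if cursor is not None:
            kwargs["after"] = cursor
        page = list_fn(**kwargs)
        items.extend(page.data)
        if not page.has_more or not page.data:
            return items
        # The last object ID on this page becomes the next cursor.
        cursor = page.data[-1].id
```

With a real client this would be called as `list_all_batches(client.batches.list)`.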

Returns
class Batch: …
id: str
completion_window: str

The time frame within which the batch should be processed.

created_at: int

The Unix timestamp (in seconds) for when the batch was created.

endpoint: str

The OpenAI API endpoint used by the batch.

input_file_id: str

The ID of the input file for the batch.

object: Literal["batch"]

The object type, which is always batch.

status: Literal["validating", "failed", "in_progress", "finalizing", "completed", "expired", "cancelling", "cancelled"]

The current status of the batch.

Accepts one of the following:
"validating"
"failed"
"in_progress"
"finalizing"
"completed"
"expired"
"cancelling"
"cancelled"
cancelled_at: Optional[int]

The Unix timestamp (in seconds) for when the batch was cancelled.

cancelling_at: Optional[int]

The Unix timestamp (in seconds) for when the batch started cancelling.

completed_at: Optional[int]

The Unix timestamp (in seconds) for when the batch was completed.

error_file_id: Optional[str]

The ID of the file containing the outputs of requests with errors.

errors: Optional[Errors]
data: Optional[List[BatchError]]
code: Optional[str]

An error code identifying the error type.

line: Optional[int]

The line number of the input file where the error occurred, if applicable.

message: Optional[str]

A human-readable message providing more details about the error.

param: Optional[str]

The name of the parameter that caused the error, if applicable.

object: Optional[str]

The object type, which is always list.

expired_at: Optional[int]

The Unix timestamp (in seconds) for when the batch expired.

expires_at: Optional[int]

The Unix timestamp (in seconds) for when the batch will expire.

failed_at: Optional[int]

The Unix timestamp (in seconds) for when the batch failed.

finalizing_at: Optional[int]

The Unix timestamp (in seconds) for when the batch started finalizing.

in_progress_at: Optional[int]

The Unix timestamp (in seconds) for when the batch started processing.

metadata: Optional[Metadata]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
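Those limits can be checked client-side before making a request. A small illustrative validator, assuming the documented constraints (this helper is not part of the SDK; the API enforces the limits server-side regardless):

```python
def check_metadata(metadata: dict) -> None:
    """Raise ValueError if metadata breaks the documented limits:
    at most 16 pairs, keys <= 64 chars, values <= 512 chars."""
    if len(metadata) > 16:
        raise ValueError("metadata allows at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"key {key!r} exceeds 64 characters")
        if len(value) > 512:
            raise ValueError(f"value for key {key!r} exceeds 512 characters")
```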

model: Optional[str]

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

output_file_id: Optional[str]

The ID of the file containing the outputs of successfully executed requests.

request_counts: Optional[BatchRequestCounts]

The request counts for different statuses within the batch.

completed: int

Number of requests that have been completed successfully.

failed: int

Number of requests that have failed.

total: int

Total number of requests in the batch.
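Because `completed + failed` counts the requests that have finished, these counts can be turned into a progress fraction. A hypothetical helper (not part of the SDK), where `counts` is any object exposing the three fields above:

```python
def batch_progress(counts) -> float:
    """Fraction of requests that have finished, successfully or not.
    `counts` needs `completed`, `failed`, and `total` attributes."""
    if counts.total == 0:
        return 0.0
    return (counts.completed + counts.failed) / counts.total
```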

usage: Optional[BatchUsage]

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

input_tokens: int

The number of input tokens.

input_tokens_details: InputTokensDetails

A detailed breakdown of the input tokens.

cached_tokens: int

The number of tokens that were retrieved from the cache. See the prompt caching guide for details.

output_tokens: int

The number of output tokens.

output_tokens_details: OutputTokensDetails

A detailed breakdown of the output tokens.

reasoning_tokens: int

The number of reasoning tokens.

total_tokens: int

The total number of tokens used.
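Because `usage` is only populated on batches created after September 7, 2025, any aggregation over a list of batches should skip `None` values. An illustrative sketch (not part of the SDK):

```python
def sum_total_tokens(batches) -> int:
    """Sum total_tokens across batches, skipping batches where
    usage is not populated (older batches report usage=None)."""
    return sum(b.usage.total_tokens for b in batches if b.usage is not None)
```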

List batches

from openai import OpenAI

client = OpenAI()

page = client.batches.list()
print(page)
{
  "object": "list",
  "data": [
    {
      "id": "batch_abc123",
      "object": "batch",
      "endpoint": "/v1/chat/completions",
      "errors": null,
      "input_file_id": "file-abc123",
      "completion_window": "24h",
      "status": "completed",
      "output_file_id": "file-cvaTdG",
      "error_file_id": "file-HOWS94",
      "created_at": 1711471533,
      "in_progress_at": 1711471538,
      "expires_at": 1711557933,
      "finalizing_at": 1711493133,
      "completed_at": 1711493163,
      "failed_at": null,
      "expired_at": null,
      "cancelling_at": null,
      "cancelled_at": null,
      "request_counts": {
        "total": 100,
        "completed": 95,
        "failed": 5
      },
      "metadata": {
        "customer_id": "user_123456789",
        "batch_description": "Nightly job",
      }
    },
    { ... },
  ],
  "first_id": "batch_abc123",
  "last_id": "batch_abc456",
  "has_more": true
}