List batches

batches.list(**kwargs: BatchListParams) -> SyncCursorPage[Batch]
GET /batches

List your organization's batches.

Parameters
after: Optional[str]

A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.

limit: Optional[int]

A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
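
Both parameters compose, and the returned SyncCursorPage can also iterate across pages on its own. A minimal sketch of both approaches; the 100-item page size is just an example:

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# Manual pagination: request a page, then pass the last object ID as the
# after cursor to fetch the page that follows it.
first_page = client.batches.list(limit=100)
if first_page.data:
    next_page = client.batches.list(limit=100, after=first_page.data[-1].id)

# Automatic pagination: iterating the page object fetches subsequent pages
# behind the scenes.
for batch in client.batches.list(limit=100):
    print(batch.id)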

Returns
class Batch: …
id: str
completion_window: str

The time frame within which the batch should be processed.

created_at: int

The Unix timestamp (in seconds) for when the batch was created.
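
This and the other *_at fields below are in seconds, not milliseconds, so they convert directly with the standard library. A quick sketch, picking an arbitrary batch from the list:

from datetime import datetime, timezone

from openai import OpenAI

client = OpenAI()
batch = client.batches.list().data[0]
created = datetime.fromtimestamp(batch.created_at, tz=timezone.utc)
print(created.isoformat())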

endpoint: str

The OpenAI API endpoint used by the batch.

input_file_id: str

The ID of the input file for the batch.

object: Literal["batch"]

The object type, which is always batch.

status: Literal["validating", "failed", "in_progress", "finalizing", "completed", "expired", "cancelling", "cancelled"]

The current status of the batch.

Accepts one of the following:
"validating"
"failed"
"in_progress"
"finalizing"
"completed"
"expired"
"cancelling"
"cancelled"
cancelled_at: Optional[int]

The Unix timestamp (in seconds) for when the batch was cancelled.

cancelling_at: Optional[int]

The Unix timestamp (in seconds) for when the batch started cancelling.

completed_at: Optional[int]

The Unix timestamp (in seconds) for when the batch was completed.

error_file_id: Optional[str]

The ID of the file containing the outputs of requests with errors.

errors: Optional[Errors]
data: Optional[List[BatchError]]
code: Optional[str]

An error code identifying the error type.

line: Optional[int]

The line number of the input file where the error occurred, if applicable.

message: Optional[str]

A human-readable message providing more details about the error.

param: Optional[str]

The name of the parameter that caused the error, if applicable.

object: Optional[str]

The object type, which is always list.
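
When a batch fails, these inline errors usually pinpoint the offending input lines. A minimal sketch of printing them; the batch ID is hypothetical, and every access is guarded because each field is optional:

from openai import OpenAI

client = OpenAI()
batch = client.batches.retrieve("batch_abc123")  # hypothetical batch ID
if batch.errors and batch.errors.data:
    for err in batch.errors.data:
        print(f"line {err.line}: [{err.code}] {err.message}")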

expired_at: Optional[int]

The Unix timestamp (in seconds) for when the batch expired.

expires_at: Optional[int]

The Unix timestamp (in seconds) for when the batch will expire.

failed_at: Optional[int]

The Unix timestamp (in seconds) for when the batch failed.

finalizing_at: Optional[int]

The Unix timestamp (in seconds) for when the batch started finalizing.

in_progress_at: Optional[int]

The Unix timestamp (in seconds) for when the batch started processing.

metadata: Optional[Metadata]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
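
Metadata set at creation time comes back on listed batches, so simple client-side filtering works. A sketch; the "project" key and its value are purely illustrative:

from openai import OpenAI

client = OpenAI()
tagged = [
    b
    for b in client.batches.list()
    if b.metadata and b.metadata.get("project") == "nightly-eval"
]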

model: Optional[str]

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

output_file_id: Optional[str]

The ID of the file containing the outputs of successfully executed requests.
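
Once the batch completes, this ID (like error_file_id above) can be passed to the Files API to download the JSONL results. A sketch, assuming the batch has finished; the ID is hypothetical:

from openai import OpenAI

client = OpenAI()
batch = client.batches.retrieve("batch_abc123")  # hypothetical batch ID
if batch.output_file_id:
    content = client.files.content(batch.output_file_id)
    for line in content.text.splitlines():  # one JSON object per request
        print(line)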

request_counts: Optional[BatchRequestCounts]

The request counts for different statuses within the batch.

completed: int

Number of requests that have been completed successfully.

failed: int

Number of requests that have failed.

total: int

Total number of requests in the batch.
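
These counts make a progress readout straightforward. A sketch, guarding against total being 0 while the batch is still validating:

from openai import OpenAI

client = OpenAI()
batch = client.batches.list().data[0]
counts = batch.request_counts
if counts and counts.total:
    done = counts.completed + counts.failed
    print(f"{done}/{counts.total} requests finished ({counts.failed} failed)")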

usage: Optional[BatchUsage]

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

input_tokens: int

The number of input tokens.

input_tokens_details: InputTokensDetails

A detailed breakdown of the input tokens.

cached_tokens: int

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: int

The number of output tokens.

output_tokens_details: OutputTokensDetails

A detailed breakdown of the output tokens.

reasoning_tokens: int

The number of reasoning tokens.

total_tokens: int

The total number of tokens used.
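
Because usage rides along on each listed batch, token consumption can be tallied in a single pass over the list. A sketch that skips batches where it is not populated:

from openai import OpenAI

client = OpenAI()
total_tokens = cached_tokens = 0
for b in client.batches.list():
    if b.usage:  # only populated on batches created after September 7, 2025
        total_tokens += b.usage.total_tokens
        cached_tokens += b.usage.input_tokens_details.cached_tokens
print(f"{total_tokens} tokens total, {cached_tokens} served from the prompt cache")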

List batches

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
)
page = client.batches.list()
batch = page.data[0]  # inspect the first batch on the page
print(batch.id)
Returns Examples
{
  "data": [
    {
      "id": "id",
      "completion_window": "completion_window",
      "created_at": 0,
      "endpoint": "endpoint",
      "input_file_id": "input_file_id",
      "object": "batch",
      "status": "validating",
      "cancelled_at": 0,
      "cancelling_at": 0,
      "completed_at": 0,
      "error_file_id": "error_file_id",
      "errors": {
        "data": [
          {
            "code": "code",
            "line": 0,
            "message": "message",
            "param": "param"
          }
        ],
        "object": "object"
      },
      "expired_at": 0,
      "expires_at": 0,
      "failed_at": 0,
      "finalizing_at": 0,
      "in_progress_at": 0,
      "metadata": {
        "foo": "string"
      },
      "model": "model",
      "output_file_id": "output_file_id",
      "request_counts": {
        "completed": 0,
        "failed": 0,
        "total": 0
      },
      "usage": {
        "input_tokens": 0,
        "input_tokens_details": {
          "cached_tokens": 0
        },
        "output_tokens": 0,
        "output_tokens_details": {
          "reasoning_tokens": 0
        },
        "total_tokens": 0
      }
    }
  ],
  "has_more": true,
  "object": "list",
  "first_id": "batch_abc123",
  "last_id": "batch_abc456"
}