List batches

client.Batches.List(ctx, query) (*CursorPage[Batch], error)
GET /batches

List your organization's batches.

Parameters
query BatchListParams
After param.Field[string] optional

A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects ending with obj_foo, your subsequent call can include after=obj_foo to fetch the next page of the list. A manual-paging sketch follows this parameter list.

Limit param.Field[int64] optional

A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
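
For illustration, here is a manual-paging sketch built on After and Limit. It assumes the param.Field helpers openai.F and openai.Int from this SDK version, plus Data and HasMore fields on the returned page matching the JSON shape under Returns Examples; treat these names as assumptions, not documented API.

// Sketch: walk all pages by hand using the After cursor.
// Assumed imports: "context", "github.com/openai/openai-go".
func listAllBatches(ctx context.Context, client *openai.Client) ([]openai.Batch, error) {
	var all []openai.Batch
	params := openai.BatchListParams{Limit: openai.Int(100)} // largest allowed page size
	for {
		page, err := client.Batches.List(ctx, params)
		if err != nil {
			return nil, err
		}
		all = append(all, page.Data...)
		if !page.HasMore || len(page.Data) == 0 {
			return all, nil
		}
		// Resume after the last object received, per the After semantics above.
		params.After = openai.F(page.Data[len(page.Data)-1].ID)
	}
}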

Returns
type Batch struct{…}
ID string
CompletionWindow string

The time frame within which the batch should be processed.

CreatedAt int64

The Unix timestamp (in seconds) for when the batch was created.
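
Every *_at field on this object uses the same Unix-seconds representation, so the standard library converts them directly. A trivial sketch, where b is a Batch value with the fields documented here:

// Assumed imports: "fmt", "time".
createdAt := time.Unix(b.CreatedAt, 0).UTC()
fmt.Println(createdAt.Format(time.RFC3339))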

Endpoint string

The OpenAI API endpoint used by the batch.

InputFileID string

The ID of the input file for the batch.

Object Batch

The object type, which is always batch.

Status BatchStatus

The current status of the batch. A polling sketch follows the constant list below.

Accepts one of the following:
const BatchStatusValidating BatchStatus = "validating"
const BatchStatusFailed BatchStatus = "failed"
const BatchStatusInProgress BatchStatus = "in_progress"
const BatchStatusFinalizing BatchStatus = "finalizing"
const BatchStatusCompleted BatchStatus = "completed"
const BatchStatusExpired BatchStatus = "expired"
const BatchStatusCancelling BatchStatus = "cancelling"
const BatchStatusCancelled BatchStatus = "cancelled"
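
Of these, completed, failed, expired, and cancelled are terminal; the rest are transitional. Below is a hypothetical polling loop; it assumes a client.Batches.Get retrieve method, which this page does not document.

// Sketch only: poll until the batch reaches a terminal status.
// Assumed imports: "context", "time", "github.com/openai/openai-go".
func waitForBatch(ctx context.Context, client *openai.Client, batchID string) (*openai.Batch, error) {
	for {
		batch, err := client.Batches.Get(ctx, batchID) // Get is assumed, not documented on this page
		if err != nil {
			return nil, err
		}
		switch batch.Status {
		case openai.BatchStatusCompleted, openai.BatchStatusFailed,
			openai.BatchStatusExpired, openai.BatchStatusCancelled:
			return batch, nil // terminal state
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(30 * time.Second):
		}
	}
}
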
CancelledAt int64 optional

The Unix timestamp (in seconds) for when the batch was cancelled.

CancellingAt int64 optional

The Unix timestamp (in seconds) for when the batch started cancelling.

CompletedAt int64 optional

The Unix timestamp (in seconds) for when the batch was completed.

ErrorFileID string optional

The ID of the file containing the outputs of requests with errors.

Errors BatchErrors optional
Data []BatchError optional
Code string optional

An error code identifying the error type.

Line int64 optional

The line number of the input file where the error occurred, if applicable.

Message string optional

A human-readable message providing more details about the error.

Param string optional

The name of the parameter that caused the error, if applicable.

Object string optional

The object type, which is always list.

ExpiredAt int64 optional

The Unix timestamp (in seconds) for when the batch expired.

ExpiresAt int64 optional

The Unix timestamp (in seconds) for when the batch will expire.

FailedAt int64 optional

The Unix timestamp (in seconds) for when the batch failed.

FinalizingAt int64 optional

The Unix timestamp (in seconds) for when the batch started finalizing.

InProgressAt int64 optional

The Unix timestamp (in seconds) for when the batch started processing.

Metadata Metadata optional

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
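
These limits can be checked client-side before attaching metadata; a small sketch (the helper name is ours, not the SDK's):

// Sketch: enforce the documented metadata limits locally.
// Assumed imports: "fmt", "unicode/utf8".
func validateMetadata(md map[string]string) error {
	if len(md) > 16 {
		return fmt.Errorf("metadata has %d pairs; the maximum is 16", len(md))
	}
	for k, v := range md {
		if utf8.RuneCountInString(k) > 64 {
			return fmt.Errorf("metadata key %q exceeds 64 characters", k)
		}
		if utf8.RuneCountInString(v) > 512 {
			return fmt.Errorf("metadata value for key %q exceeds 512 characters", k)
		}
	}
	return nil
}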

Model string optional

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

OutputFileID string optional

The ID of the file containing the outputs of successfully executed requests.

RequestCounts BatchRequestCounts optional

The request counts for different statuses within the batch.

Completed int64

Number of requests that have been completed successfully.

Failed int64

Number of requests that have failed.

Total int64

Total number of requests in the batch.
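
Because every request ends up either completed or failed, the two counts together yield a progress fraction. A one-line sketch (the helper name is ours):

// Sketch: fraction of requests that have finished, successfully or not.
func batchProgress(rc openai.BatchRequestCounts) float64 {
	if rc.Total == 0 {
		return 0
	}
	return float64(rc.Completed+rc.Failed) / float64(rc.Total)
}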

Usage BatchUsage optional

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

InputTokens int64

The number of input tokens.

InputTokensDetails BatchUsageInputTokensDetails

A detailed breakdown of the input tokens.

CachedTokens int64

The number of tokens that were retrieved from the cache. See the prompt caching guide for more.

OutputTokens int64

The number of output tokens.

OutputTokensDetails BatchUsageOutputTokensDetails

A detailed breakdown of the output tokens.

ReasoningTokens int64

The number of reasoning tokens.

TotalTokens int64

The total number of tokens used.
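
Reading the breakdown: cached tokens are a subset of input tokens, reasoning tokens a subset of output tokens, and TotalTokens covers both sides. A formatting sketch (the helper name is ours; remember Usage is only populated on recent batches):

// Sketch: one-line summary of a batch's token usage.
// Assumed import: "fmt".
func describeUsage(u openai.BatchUsage) string {
	return fmt.Sprintf("input=%d (cached=%d) output=%d (reasoning=%d) total=%d",
		u.InputTokens, u.InputTokensDetails.CachedTokens,
		u.OutputTokens, u.OutputTokensDetails.ReasoningTokens,
		u.TotalTokens)
}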

List batches

package main

import (
  "context"
  "fmt"

  "github.com/openai/openai-go"
  "github.com/openai/openai-go/option"
)

func main() {
  client := openai.NewClient(
    option.WithAPIKey("My API Key"),
  )
  // Zero-value params: first page, server-side default limit of 20.
  page, err := client.Batches.List(context.TODO(), openai.BatchListParams{})
  if err != nil {
    panic(err.Error())
  }
  fmt.Printf("%+v\n", page)
}
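
If this endpoint follows the SDK's usual pagination codegen, an auto-paging variant should also exist. A sketch assuming a generated ListAutoPaging method:

// Sketch: iterate every batch without managing cursors by hand.
// ListAutoPaging is assumed from the SDK's standard pagination pattern.
iter := client.Batches.ListAutoPaging(context.TODO(), openai.BatchListParams{})
for iter.Next() {
	batch := iter.Current()
	fmt.Printf("%s: %s\n", batch.ID, batch.Status)
}
if err := iter.Err(); err != nil {
	panic(err.Error())
}
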
Returns Examples
{
  "data": [
    {
      "id": "id",
      "completion_window": "completion_window",
      "created_at": 0,
      "endpoint": "endpoint",
      "input_file_id": "input_file_id",
      "object": "batch",
      "status": "validating",
      "cancelled_at": 0,
      "cancelling_at": 0,
      "completed_at": 0,
      "error_file_id": "error_file_id",
      "errors": {
        "data": [
          {
            "code": "code",
            "line": 0,
            "message": "message",
            "param": "param"
          }
        ],
        "object": "object"
      },
      "expired_at": 0,
      "expires_at": 0,
      "failed_at": 0,
      "finalizing_at": 0,
      "in_progress_at": 0,
      "metadata": {
        "foo": "string"
      },
      "model": "model",
      "output_file_id": "output_file_id",
      "request_counts": {
        "completed": 0,
        "failed": 0,
        "total": 0
      },
      "usage": {
        "input_tokens": 0,
        "input_tokens_details": {
          "cached_tokens": 0
        },
        "output_tokens": 0,
        "output_tokens_details": {
          "reasoning_tokens": 0
        },
        "total_tokens": 0
      }
    }
  ],
  "has_more": true,
  "object": "list",
  "first_id": "batch_abc123",
  "last_id": "batch_abc456"
}