
Retrieve batch

batches.retrieve(batch_id) -> Batch { id, completion_window, created_at, 19 more }
GET/batches/{batch_id}

Retrieves a batch.

Parameters
batch_id: String
Returns
class Batch { id, completion_window, created_at, 19 more }
id: String
completion_window: String

The time frame within which the batch should be processed.

created_at: Integer

The Unix timestamp (in seconds) for when the batch was created.

endpoint: String

The OpenAI API endpoint used by the batch.

input_file_id: String

The ID of the input file for the batch.

object: :batch

The object type, which is always batch.

status: :validating | :failed | :in_progress | 5 more

The current status of the batch.

Accepts one of the following:
:validating
:failed
:in_progress
:finalizing
:completed
:expired
:cancelling
:cancelled
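
Of these status values, :validating, :in_progress, :finalizing, and :cancelling are in flight, while the remaining four are terminal. A minimal sketch of a terminal-state check and a polling loop built on it (the retrieve call mirrors the example below; the sleep interval is an arbitrary choice, not an API recommendation):

```ruby
# Terminal states: once a batch reaches one of these, its status will not change.
TERMINAL_STATUSES = %i[completed failed expired cancelled].freeze

def terminal?(status)
  TERMINAL_STATUSES.include?(status.to_sym)
end

# Sketch of a polling loop using the client from the example below:
#   batch = openai.batches.retrieve("batch_id")
#   until terminal?(batch.status)
#     sleep 60
#     batch = openai.batches.retrieve("batch_id")
#   end
```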
cancelled_at: Integer

The Unix timestamp (in seconds) for when the batch was cancelled.

cancelling_at: Integer

The Unix timestamp (in seconds) for when the batch started cancelling.

completed_at: Integer

The Unix timestamp (in seconds) for when the batch was completed.

error_file_id: String

The ID of the file containing the outputs of requests with errors.

errors: { data, object }
data: Array[BatchError { code, line, message, param } ]
code: String

An error code identifying the error type.

line: Integer

The line number of the input file where the error occurred, if applicable.

message: String

A human-readable message providing more details about the error.

param: String

The name of the parameter that caused the error, if applicable.

object: String

The object type, which is always list.
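
For a quick look at validation failures, the entries under errors.data can be flattened into readable strings. A sketch assuming a plain hash shaped like the errors object above (the SDK returns typed objects, so adapt field access accordingly):

```ruby
# Turn an errors object (hash-shaped) into one human-readable line per error.
def summarize_errors(errors)
  return [] if errors.nil? || errors[:data].nil?

  errors[:data].map do |e|
    where = e[:line] ? "line #{e[:line]}" : "no line"
    "#{e[:code]} (#{where}): #{e[:message]}"
  end
end
```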

expired_at: Integer

The Unix timestamp (in seconds) for when the batch expired.

expires_at: Integer

The Unix timestamp (in seconds) for when the batch will expire.

failed_at: Integer

The Unix timestamp (in seconds) for when the batch failed.

finalizing_at: Integer

The Unix timestamp (in seconds) for when the batch started finalizing.

in_progress_at: Integer

The Unix timestamp (in seconds) for when the batch started processing.

metadata: Metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
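
Those limits can be checked locally before sending a request. A hypothetical helper mirroring the documented constraints (the name and the plain-hash input are assumptions, not part of the SDK):

```ruby
# True when a metadata hash fits the documented limits:
# at most 16 pairs, keys up to 64 characters, values up to 512 characters.
def valid_metadata?(metadata)
  metadata.size <= 16 &&
    metadata.all? { |k, v| k.to_s.length <= 64 && v.to_s.length <= 512 }
end
```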

model: String

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

output_file_id: String

The ID of the file containing the outputs of successfully executed requests.

request_counts: BatchRequestCounts { completed, failed, total }

The request counts for different statuses within the batch.

completed: Integer

Number of requests that have been completed successfully.

failed: Integer

Number of requests that have failed.

total: Integer

Total number of requests in the batch.
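
These counts are enough to derive a progress figure while the batch is in flight. A hedged sketch (plain-hash input shaped like request_counts; both completed and failed requests count as finished):

```ruby
# Fraction of requests that have finished, successfully or not.
def batch_progress(counts)
  return 0.0 if counts[:total].zero?

  (counts[:completed] + counts[:failed]).fdiv(counts[:total])
end
```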

usage: BatchUsage { input_tokens, input_tokens_details, output_tokens, 2 more }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

input_tokens: Integer

The number of input tokens.

input_tokens_details: { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: Integer

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: Integer

The number of output tokens.

output_tokens_details: { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: Integer

The number of reasoning tokens.

total_tokens: Integer

The total number of tokens used.
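
One practical use of the breakdown is estimating how much of the input was served from the prompt cache. A sketch assuming a plain hash shaped like the usage object above:

```ruby
# Fraction of input tokens served from the prompt cache.
def cache_hit_rate(usage)
  input = usage[:input_tokens]
  return 0.0 if input.zero?

  usage.dig(:input_tokens_details, :cached_tokens).fdiv(input)
end
```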

Retrieve batch

require "openai"

openai = OpenAI::Client.new(api_key: "My API Key")

batch = openai.batches.retrieve("batch_id")

puts(batch)
{
  "id": "id",
  "completion_window": "completion_window",
  "created_at": 0,
  "endpoint": "endpoint",
  "input_file_id": "input_file_id",
  "object": "batch",
  "status": "validating",
  "cancelled_at": 0,
  "cancelling_at": 0,
  "completed_at": 0,
  "error_file_id": "error_file_id",
  "errors": {
    "data": [
      {
        "code": "code",
        "line": 0,
        "message": "message",
        "param": "param"
      }
    ],
    "object": "object"
  },
  "expired_at": 0,
  "expires_at": 0,
  "failed_at": 0,
  "finalizing_at": 0,
  "in_progress_at": 0,
  "metadata": {
    "foo": "string"
  },
  "model": "model",
  "output_file_id": "output_file_id",
  "request_counts": {
    "completed": 0,
    "failed": 0,
    "total": 0
  },
  "usage": {
    "input_tokens": 0,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 0,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 0
  }
}