Create batch

client.batches.create(body: BatchCreateParams { completion_window, endpoint, input_file_id, 2 more }, options?: RequestOptions): Batch { id, completion_window, created_at, 19 more }
POST/batches

Creates and executes a batch from an uploaded file of requests.

Parameters
body: BatchCreateParams { completion_window, endpoint, input_file_id, 2 more }
completion_window: "24h"

The time frame within which the batch should be processed. Currently only 24h is supported.

endpoint: "/v1/responses" | "/v1/chat/completions" | "/v1/embeddings" | 5 more

The endpoint to be used for all requests in the batch. Currently /v1/responses, /v1/chat/completions, /v1/embeddings, /v1/completions, /v1/moderations, /v1/images/generations, /v1/images/edits, and /v1/videos are supported. Note that /v1/embeddings batches are also restricted to a maximum of 50,000 embedding inputs across all requests in the batch.

One of the following:
"/v1/responses"
"/v1/chat/completions"
"/v1/embeddings"
"/v1/completions"
"/v1/moderations"
"/v1/images/generations"
"/v1/images/edits"
"/v1/videos"
input_file_id: string

The ID of an uploaded file that contains requests for the new batch.

See upload file for how to upload a file.

Your input file must be formatted as a JSONL file, and must be uploaded with the purpose batch. The file can contain up to 50,000 requests, and can be up to 200 MB in size.
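As a sketch of what such a file contains: each line is a standalone JSON object describing one request. The helper below is hypothetical (not part of the SDK), and the per-line fields (`custom_id`, `method`, `url`, `body`) follow the request shape described in the Batch API guide.

```typescript
// Each request in the input file is one JSON object per line (JSONL).
interface BatchRequestLine {
  custom_id: string;               // unique within the batch
  method: "POST";
  url: string;                     // must match the batch's `endpoint`
  body: Record<string, unknown>;   // the request payload for that endpoint
}

// Serialize requests to JSONL, ready to upload with purpose "batch".
function toBatchJsonl(requests: BatchRequestLine[]): string {
  return requests.map((r) => JSON.stringify(r)).join("\n");
}
```

Write the result to a file, upload it with purpose `batch`, and pass the returned file ID as `input_file_id`.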

metadata?: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
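These limits can be checked client-side before sending a request. A hypothetical validator (not an SDK function) encoding the constraints above:

```typescript
// Client-side check of the documented metadata limits:
// at most 16 pairs, keys up to 64 characters, values up to 512 characters.
function isValidMetadata(metadata: Record<string, string>): boolean {
  const entries = Object.entries(metadata);
  if (entries.length > 16) return false;
  return entries.every(([key, value]) => key.length <= 64 && value.length <= 512);
}
```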

output_expires_after?: OutputExpiresAfter

The expiration policy for the output and/or error files generated for a batch.

anchor: "created_at"

Anchor timestamp after which the expiration policy applies. Supported anchors: created_at. Note that the anchor is the file creation time, not the time the batch is created.

seconds: number

The number of seconds after the anchor time that the file will expire. Must be between 3600 (1 hour) and 2592000 (30 days).

minimum: 3600
maximum: 2592000
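The bounds above translate directly into a range check; a hypothetical pre-flight validation before sending the create request:

```typescript
// The documented bounds for `output_expires_after.seconds`:
// 3600 (1 hour) to 2592000 (30 days), inclusive.
const MIN_EXPIRY_SECONDS = 3_600;
const MAX_EXPIRY_SECONDS = 2_592_000;

function isValidExpirySeconds(seconds: number): boolean {
  return (
    Number.isInteger(seconds) &&
    seconds >= MIN_EXPIRY_SECONDS &&
    seconds <= MAX_EXPIRY_SECONDS
  );
}
```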
Returns
Batch { id, completion_window, created_at, 19 more }
id: string
completion_window: string

The time frame within which the batch should be processed.

created_at: number

The Unix timestamp (in seconds) for when the batch was created.

endpoint: string

The OpenAI API endpoint used by the batch.

input_file_id: string

The ID of the input file for the batch.

object: "batch"

The object type, which is always batch.

status: "validating" | "failed" | "in_progress" | 5 more

The current status of the batch.

One of the following:
"validating"
"failed"
"in_progress"
"finalizing"
"completed"
"expired"
"cancelling"
"cancelled"
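When polling a batch, the statuses above can be split into in-flight and terminal states. The status names come from this page; the grouping itself is an interpretation, not part of the API contract:

```typescript
type BatchStatus =
  | "validating" | "failed" | "in_progress" | "finalizing"
  | "completed" | "expired" | "cancelling" | "cancelled";

// States after which the batch will not change again.
const TERMINAL_STATUSES: ReadonlySet<BatchStatus> = new Set<BatchStatus>([
  "completed", "failed", "expired", "cancelled",
]);

function isTerminal(status: BatchStatus): boolean {
  return TERMINAL_STATUSES.has(status);
}
```

A polling loop can retrieve the batch on an interval and stop once `isTerminal(batch.status)` returns true.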
cancelled_at?: number

The Unix timestamp (in seconds) for when the batch was cancelled.

cancelling_at?: number

The Unix timestamp (in seconds) for when the batch started cancelling.

completed_at?: number

The Unix timestamp (in seconds) for when the batch was completed.

error_file_id?: string

The ID of the file containing the outputs of requests with errors.

errors?: Errors { data, object }
data?: Array<BatchError { code, line, message, param } >
code?: string

An error code identifying the error type.

line?: number | null

The line number of the input file where the error occurred, if applicable.

message?: string

A human-readable message providing more details about the error.

param?: string | null

The name of the parameter that caused the error, if applicable.

object?: string

The object type, which is always list.

expired_at?: number

The Unix timestamp (in seconds) for when the batch expired.

expires_at?: number

The Unix timestamp (in seconds) for when the batch will expire.

failed_at?: number

The Unix timestamp (in seconds) for when the batch failed.

finalizing_at?: number

The Unix timestamp (in seconds) for when the batch started finalizing.

in_progress_at?: number

The Unix timestamp (in seconds) for when the batch started processing.

metadata?: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model?: string

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

output_file_id?: string

The ID of the file containing the outputs of successfully executed requests.
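Like the input file, the output file is JSONL: one result object per request. A hypothetical helper for indexing its content by `custom_id` (only the presence of `custom_id` per line is assumed here; see the Batch API guide for the full per-line schema):

```typescript
// Parse the JSONL content of the output (or error) file into a map
// keyed by each request's custom_id.
function parseBatchOutput(jsonl: string): Map<string, unknown> {
  const results = new Map<string, unknown>();
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue;      // skip blank/trailing lines
    const parsed = JSON.parse(line) as { custom_id: string };
    results.set(parsed.custom_id, parsed);
  }
  return results;
}
```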

request_counts?: BatchRequestCounts { completed, failed, total }

The request counts for different statuses within the batch.

completed: number

Number of requests that have been completed successfully.

failed: number

Number of requests that have failed.

total: number

Total number of requests in the batch.
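These counts make progress reporting straightforward; for illustration, the fraction of requests that have finished (successfully or not):

```typescript
interface RequestCounts {
  completed: number;
  failed: number;
  total: number;
}

// Returns 0 while `total` is still 0 (e.g. during validation).
function batchProgress(counts: RequestCounts): number {
  if (counts.total === 0) return 0;
  return (counts.completed + counts.failed) / counts.total;
}
```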

usage?: BatchUsage { input_tokens, input_tokens_details, output_tokens, 2 more }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

total_tokens: number

The total number of tokens used.
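Under the natural reading of the fields above, `total_tokens` is the sum of input and output tokens; a quick consistency check (this additive relationship is an assumption, not stated explicitly on this page):

```typescript
interface BatchUsageTotals {
  input_tokens: number;
  output_tokens: number;
  total_tokens: number;
}

// Sanity check: total_tokens should equal input_tokens + output_tokens.
function usageIsConsistent(usage: BatchUsageTotals): boolean {
  return usage.total_tokens === usage.input_tokens + usage.output_tokens;
}
```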

Create batch

import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const batch = await openai.batches.create({
    input_file_id: "file-abc123",
    endpoint: "/v1/chat/completions",
    completion_window: "24h"
  });

  console.log(batch);
}

main();
Returns Examples
{
  "id": "batch_abc123",
  "object": "batch",
  "endpoint": "/v1/chat/completions",
  "errors": null,
  "input_file_id": "file-abc123",
  "completion_window": "24h",
  "status": "validating",
  "output_file_id": null,
  "error_file_id": null,
  "created_at": 1711471533,
  "in_progress_at": null,
  "expires_at": null,
  "finalizing_at": null,
  "completed_at": null,
  "failed_at": null,
  "expired_at": null,
  "cancelling_at": null,
  "cancelled_at": null,
  "request_counts": {
    "total": 0,
    "completed": 0,
    "failed": 0
  },
  "metadata": {
    "customer_id": "user_123456789",
    "batch_description": "Nightly eval job"
  }
}