Create batch

client.batches.create(body: BatchCreateParams { completion_window, endpoint, input_file_id, 2 more }, options?: RequestOptions): Batch { id, completion_window, created_at, 19 more }
POST /batches

Creates and executes a batch from an uploaded file of requests.

Parameters
body: BatchCreateParams { completion_window, endpoint, input_file_id, 2 more }
completion_window: "24h"

The time frame within which the batch should be processed. Currently only 24h is supported.

endpoint: "/v1/responses" | "/v1/chat/completions" | "/v1/embeddings" | 2 more

The endpoint to be used for all requests in the batch. Currently /v1/responses, /v1/chat/completions, /v1/embeddings, /v1/completions, and /v1/moderations are supported. Note that /v1/embeddings batches are also restricted to a maximum of 50,000 embedding inputs across all requests in the batch.

Accepts one of the following:
"/v1/responses"
"/v1/chat/completions"
"/v1/embeddings"
"/v1/completions"
"/v1/moderations"
input_file_id: string

The ID of an uploaded file that contains requests for the new batch.

See upload file for how to upload a file.

Your input file must be formatted as a JSONL file, and must be uploaded with the purpose batch. The file can contain up to 50,000 requests, and can be up to 200 MB in size.
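
For reference, a sketch of preparing and uploading such a file with the Node SDK (the file name, custom_id values, and request bodies are illustrative):

import fs from 'fs';
import OpenAI from 'openai';

const client = new OpenAI();

// Each line of the JSONL file is one request, e.g.:
// {"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions",
//  "body": {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}}

// Upload with purpose 'batch' so the file can be used as input_file_id.
const file = await client.files.create({
  file: fs.createReadStream('batchinput.jsonl'),
  purpose: 'batch',
});

console.log(file.id); // pass this as input_file_id when creating the batch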

metadata?: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
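
A minimal sketch of attaching metadata when creating a batch, continuing from the upload sketch above (the keys and values are illustrative):

const batch = await client.batches.create({
  completion_window: '24h',
  endpoint: '/v1/chat/completions',
  input_file_id: file.id,
  metadata: {
    project: 'nightly-eval', // keys up to 64 chars, values up to 512 chars
    owner: 'data-team',
  },
});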

output_expires_after?: OutputExpiresAfter

The expiration policy for the output and/or error files that are generated for a batch.

anchor: "created_at"

Anchor timestamp after which the expiration policy applies. Supported anchors: created_at. Note that the anchor is the file creation time, not the time the batch is created.

seconds: number

The number of seconds after the anchor time that the file will expire. Must be between 3600 (1 hour) and 2592000 (30 days).

minimum: 3600
maximum: 2592000
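
For example, to have the generated output and error files expire one day after they are created (a sketch; the rest of the create call is as above):

const batch = await client.batches.create({
  completion_window: '24h',
  endpoint: '/v1/chat/completions',
  input_file_id: file.id,
  output_expires_after: {
    anchor: 'created_at', // measured from the file's creation time
    seconds: 86400, // 1 day; must be between 3600 and 2592000
  },
});
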
Returns
Batch { id, completion_window, created_at, 19 more }
id: string
completion_window: string

The time frame within which the batch should be processed.

created_at: number

The Unix timestamp (in seconds) for when the batch was created.

endpoint: string

The OpenAI API endpoint used by the batch.

input_file_id: string

The ID of the input file for the batch.

object: "batch"

The object type, which is always batch.

status: "validating" | "failed" | "in_progress" | 5 more

The current status of the batch.

Accepts one of the following:
"validating"
"failed"
"in_progress"
"finalizing"
"completed"
"expired"
"cancelling"
"cancelled"
cancelled_at?: number

The Unix timestamp (in seconds) for when the batch was cancelled.

cancelling_at?: number

The Unix timestamp (in seconds) for when the batch started cancelling.

completed_at?: number

The Unix timestamp (in seconds) for when the batch was completed.

error_file_id?: string

The ID of the file containing the outputs of requests with errors.

errors?: Errors { data, object }
data?: Array<BatchError { code, line, message, param }>
code?: string

An error code identifying the error type.

line?: number | null

The line number of the input file where the error occurred, if applicable.

message?: string

A human-readable message providing more details about the error.

param?: string | null

The name of the parameter that caused the error, if applicable.

object?: string

The object type, which is always list.

expired_at?: number

The Unix timestamp (in seconds) for when the batch expired.

expires_at?: number

The Unix timestamp (in seconds) for when the batch will expire.

failed_at?: number

The Unix timestamp (in seconds) for when the batch failed.

finalizing_at?: number

The Unix timestamp (in seconds) for when the batch started finalizing.

in_progress_at?: number

The Unix timestamp (in seconds) for when the batch started processing.

metadata?: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model?: string

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

output_file_id?: string

The ID of the file containing the outputs of successfully executed requests.
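
Once a batch is completed, the output file (and the error file, if any requests failed) can be downloaded via the Files API. A sketch using the Node SDK's files.content method:

if (batch.output_file_id) {
  const response = await client.files.content(batch.output_file_id);
  const text = await response.text(); // JSONL: one result object per line
  for (const line of text.split('\n').filter(Boolean)) {
    const result = JSON.parse(line);
    console.log(result.custom_id); // matches the custom_id from the input file
  }
}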

request_counts?: BatchRequestCounts { completed, failed, total }

The request counts for different statuses within the batch.

completed: number

Number of requests that have been completed successfully.

failed: number

Number of requests that have failed.

total: number

Total number of requests in the batch.
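
These counts are handy for simple progress reporting while polling, e.g.:

if (batch.request_counts) {
  const { completed, failed, total } = batch.request_counts;
  console.log(`progress: ${completed + failed}/${total} (${failed} failed)`);
}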

usage?: BatchUsage { input_tokens, input_tokens_details, output_tokens, 2 more }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. See prompt caching for details.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

total_tokens: number

The total number of tokens used.

Create batch

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

const batch = await client.batches.create({
  completion_window: '24h',
  endpoint: '/v1/responses',
  input_file_id: 'input_file_id',
});

console.log(batch.id);

Returns Examples
{
  "id": "id",
  "completion_window": "completion_window",
  "created_at": 0,
  "endpoint": "endpoint",
  "input_file_id": "input_file_id",
  "object": "batch",
  "status": "validating",
  "cancelled_at": 0,
  "cancelling_at": 0,
  "completed_at": 0,
  "error_file_id": "error_file_id",
  "errors": {
    "data": [
      {
        "code": "code",
        "line": 0,
        "message": "message",
        "param": "param"
      }
    ],
    "object": "object"
  },
  "expired_at": 0,
  "expires_at": 0,
  "failed_at": 0,
  "finalizing_at": 0,
  "in_progress_at": 0,
  "metadata": {
    "foo": "string"
  },
  "model": "model",
  "output_file_id": "output_file_id",
  "request_counts": {
    "completed": 0,
    "failed": 0,
    "total": 0
  },
  "usage": {
    "input_tokens": 0,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 0,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 0
  }
}