
Cancel batch

client.batches.cancel(batchID: string, options?: RequestOptions): Batch
POST /batches/{batch_id}/cancel

Cancels an in-progress batch. The batch will remain in status cancelling for up to 10 minutes before changing to cancelled, at which point any partial results are available in the output file.
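Because cancellation is asynchronous, callers typically poll until the batch leaves the cancelling state. The sketch below is a minimal, hedged illustration: `waitForTerminal` is a hypothetical helper (not part of the SDK), and the `retrieve` function is injected so the loop can be exercised without network access; in real use it would be `(id) => client.batches.retrieve(id)`.

```typescript
// All batch statuses documented for this endpoint.
type BatchStatus =
  | 'validating' | 'failed' | 'in_progress' | 'finalizing'
  | 'completed' | 'expired' | 'cancelling' | 'cancelled';

// States from which the batch will not transition further.
const TERMINAL: ReadonlySet<BatchStatus> = new Set<BatchStatus>([
  'failed', 'completed', 'expired', 'cancelled',
]);

// Poll the batch until it reaches a terminal state, waiting pollMs between
// checks. `retrieve` stands in for `client.batches.retrieve`.
async function waitForTerminal(
  batchId: string,
  retrieve: (id: string) => Promise<{ status: BatchStatus }>,
  pollMs = 5_000,
): Promise<BatchStatus> {
  for (;;) {
    const { status } = await retrieve(batchId);
    if (TERMINAL.has(status)) return status;
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```

After `client.batches.cancel(batchId)`, a call like `await waitForTerminal(batchId, (id) => client.batches.retrieve(id))` would return once the batch settles into cancelled (or another terminal state).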

Parameters
batchID: string
Returns
Batch
id: string
completion_window: string

The time frame within which the batch should be processed.

created_at: number

The Unix timestamp (in seconds) for when the batch was created.

endpoint: string

The OpenAI API endpoint used by the batch.

input_file_id: string

The ID of the input file for the batch.

object: "batch"

The object type, which is always batch.

status: "validating" | "failed" | "in_progress" | "finalizing" | "completed" | "expired" | "cancelling" | "cancelled"

The current status of the batch.

Accepts one of the following:
"validating"
"failed"
"in_progress"
"finalizing"
"completed"
"expired"
"cancelling"
"cancelled"
cancelled_at?: number

The Unix timestamp (in seconds) for when the batch was cancelled.

cancelling_at?: number

The Unix timestamp (in seconds) for when the batch started cancelling.

completed_at?: number

The Unix timestamp (in seconds) for when the batch was completed.

error_file_id?: string

The ID of the file containing the outputs of requests with errors.

errors?: Errors { data, object }
data?: Array<BatchError { code, line, message, param } >
code?: string

An error code identifying the error type.

line?: number | null

The line number of the input file where the error occurred, if applicable.

message?: string

A human-readable message providing more details about the error.

param?: string | null

The name of the parameter that caused the error, if applicable.

object?: string

The object type, which is always list.
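When a batch reports errors, the entries in errors.data can be turned into readable diagnostics. The helper below is a hypothetical sketch (not part of the SDK) that formats each error using the optional code, line, message, and param fields described above, falling back to placeholders when a field is absent.

```typescript
// Shape of one entry in Batch.errors.data, per the field list above.
interface BatchError {
  code?: string;
  line?: number | null;
  message?: string;
  param?: string | null;
}

// Render each batch error as a one-line diagnostic string.
// Missing fields are replaced with '?' / 'unknown' placeholders.
function formatBatchErrors(errors?: { data?: BatchError[] }): string[] {
  return (errors?.data ?? []).map(
    (e) =>
      `line ${e.line ?? '?'}: [${e.code ?? 'unknown'}] ${e.message ?? ''}` +
      (e.param ? ` (param: ${e.param})` : ''),
  );
}
```

For example, `formatBatchErrors(batch.errors)` on a batch with one error on input line 3 would yield strings like `line 3: [invalid_request] bad model (param: model)`.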

expired_at?: number

The Unix timestamp (in seconds) for when the batch expired.

expires_at?: number

The Unix timestamp (in seconds) for when the batch will expire.

failed_at?: number

The Unix timestamp (in seconds) for when the batch failed.

finalizing_at?: number

The Unix timestamp (in seconds) for when the batch started finalizing.

in_progress_at?: number

The Unix timestamp (in seconds) for when the batch started processing.

metadata?: Metadata | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
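The metadata limits above (at most 16 pairs, 64-character keys, 512-character values) can be checked client-side before creating a batch. This is a hypothetical validation helper, not an SDK function; the limits it encodes come from the description above.

```typescript
// Return a list of problems with a metadata object, based on the documented
// limits: max 16 key-value pairs, keys <= 64 chars, values <= 512 chars.
function validateMetadata(md: Record<string, string>): string[] {
  const problems: string[] = [];
  const entries = Object.entries(md);
  if (entries.length > 16) {
    problems.push(`has ${entries.length} pairs; the maximum is 16`);
  }
  for (const [key, value] of entries) {
    if (key.length > 64) problems.push(`key "${key}" exceeds 64 characters`);
    if (value.length > 512) problems.push(`value for "${key}" exceeds 512 characters`);
  }
  return problems;
}
```

An empty return value means the metadata is within the documented limits.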

model?: string

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

output_file_id?: string

The ID of the file containing the outputs of successfully executed requests.

request_counts?: BatchRequestCounts { completed, failed, total }

The request counts for different statuses within the batch.

completed: number

Number of requests that have been completed successfully.

failed: number

Number of requests that have failed.

total: number

Total number of requests in the batch.
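Since completed and failed together account for all finished requests, request_counts is enough to compute a progress figure while a batch (or its cancellation) is in flight. The helper below is a hypothetical sketch using only the three fields listed above.

```typescript
// Mirrors BatchRequestCounts as documented above.
interface BatchRequestCounts {
  completed: number;
  failed: number;
  total: number;
}

// Compute how many requests have finished (successfully or not) and what
// fraction of the batch that represents. Guards against total === 0.
function batchProgress(c: BatchRequestCounts): { done: number; fraction: number } {
  const done = c.completed + c.failed;
  return { done, fraction: c.total > 0 ? done / c.total : 0 };
}
```

For a batch with 75 completed and 25 failed requests out of 200, this reports 100 finished requests, i.e. a fraction of 0.5.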

usage?: BatchUsage { input_tokens, input_tokens_details, output_tokens, output_tokens_details, total_tokens }

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. See the prompt caching guide for more detail.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

total_tokens: number

The total number of tokens used.
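The usage fields can be reduced to a few ratios that are often more useful than raw counts, such as the share of input tokens served from the cache. The helper below is a hypothetical sketch over the BatchUsage shape documented above; it is not an SDK function.

```typescript
// Mirrors BatchUsage as documented above.
interface BatchUsage {
  input_tokens: number;
  input_tokens_details: { cached_tokens: number };
  output_tokens: number;
  output_tokens_details: { reasoning_tokens: number };
  total_tokens: number;
}

// Summarize token usage: total tokens, fraction of input tokens that were
// cache hits, and fraction of output tokens spent on reasoning.
function summarizeUsage(u: BatchUsage): {
  total: number;
  cachedShare: number;
  reasoningShare: number;
} {
  return {
    total: u.total_tokens,
    cachedShare:
      u.input_tokens > 0 ? u.input_tokens_details.cached_tokens / u.input_tokens : 0,
    reasoningShare:
      u.output_tokens > 0 ? u.output_tokens_details.reasoning_tokens / u.output_tokens : 0,
  };
}
```

Note that usage is only populated on batches created after September 7, 2025, so callers should treat the field as optional.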

Cancel batch

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

const batch = await client.batches.cancel('batch_id');

console.log(batch.id);

Example response:
{
  "id": "id",
  "completion_window": "completion_window",
  "created_at": 0,
  "endpoint": "endpoint",
  "input_file_id": "input_file_id",
  "object": "batch",
  "status": "validating",
  "cancelled_at": 0,
  "cancelling_at": 0,
  "completed_at": 0,
  "error_file_id": "error_file_id",
  "errors": {
    "data": [
      {
        "code": "code",
        "line": 0,
        "message": "message",
        "param": "param"
      }
    ],
    "object": "object"
  },
  "expired_at": 0,
  "expires_at": 0,
  "failed_at": 0,
  "finalizing_at": 0,
  "in_progress_at": 0,
  "metadata": {
    "foo": "string"
  },
  "model": "model",
  "output_file_id": "output_file_id",
  "request_counts": {
    "completed": 0,
    "failed": 0,
    "total": 0
  },
  "usage": {
    "input_tokens": 0,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 0,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 0
  }
}