Cancel batch

batches.cancel(batch_id: str) -> Batch
POST /batches/{batch_id}/cancel

Cancels an in-progress batch. The batch will be in status cancelling for up to 10 minutes before changing to cancelled, at which point any partial results will be available in the output file.
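Because cancellation is asynchronous, a common pattern is to poll the batch until it leaves the cancelling state. A minimal sketch using the same SDK as the example below (the batch ID and polling interval are illustrative, not prescribed by the API):

import time

from openai import OpenAI

client = OpenAI()

batch = client.batches.cancel("batch_abc123")  # illustrative batch ID

# Cancellation can take up to 10 minutes; poll until the status settles.
while batch.status == "cancelling":
    time.sleep(30)  # illustrative interval, not an API requirement
    batch = client.batches.retrieve(batch.id)

print(batch.status)  # "cancelled", or "completed" if the batch finished first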

Parameters
batch_id: str
Returns
class Batch: …
id: str
completion_window: str

The time frame within which the batch should be processed.

created_at: int

The Unix timestamp (in seconds) for when the batch was created.

endpoint: str

The OpenAI API endpoint used by the batch.

input_file_id: str

The ID of the input file for the batch.

object: Literal["batch"]

The object type, which is always batch.

status: Literal["validating", "failed", "in_progress", "finalizing", "completed", "expired", "cancelling", "cancelled"]

The current status of the batch.

Accepts one of the following:
"validating"
"failed"
"in_progress"
"finalizing"
"completed"
"expired"
"cancelling"
"cancelled"
cancelled_at: Optional[int]

The Unix timestamp (in seconds) for when the batch was cancelled.

cancelling_at: Optional[int]

The Unix timestamp (in seconds) for when the batch started cancelling.

completed_at: Optional[int]

The Unix timestamp (in seconds) for when the batch was completed.

error_file_id: Optional[str]

The ID of the file containing the outputs of requests with errors.

errors: Optional[Errors]
data: Optional[List[BatchError]]
code: Optional[str]

An error code identifying the error type.

line: Optional[int]

The line number of the input file where the error occurred, if applicable.

message: Optional[str]

A human-readable message providing more details about the error.

param: Optional[str]

The name of the parameter that caused the error, if applicable.

object: Optional[str]

The object type, which is always list.
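
When a batch fails validation, the per-line errors can be inspected directly from these fields. A sketch, assuming an illustrative batch ID:

from openai import OpenAI

client = OpenAI()

batch = client.batches.retrieve("batch_abc123")  # illustrative batch ID

if batch.errors and batch.errors.data:
    for err in batch.errors.data:
        # err.line points at the offending line of the input file, if applicable
        print(f"line {err.line}: [{err.code}] {err.message} (param={err.param})")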

expired_at: Optional[int]

The Unix timestamp (in seconds) for when the batch expired.

expires_at: Optional[int]

The Unix timestamp (in seconds) for when the batch will expire.

failed_at: Optional[int]

The Unix timestamp (in seconds) for when the batch failed.

finalizing_at: Optional[int]

The Unix timestamp (in seconds) for when the batch started finalizing.

in_progress_at: Optional[int]

The Unix timestamp (in seconds) for when the batch started processing.

metadata: Optional[Metadata]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
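
A client-side check of those documented limits can catch bad metadata before a request is made. A minimal sketch (the helper name is our own, not part of the SDK):

# Validate the documented metadata limits: at most 16 pairs,
# 64-character keys, 512-character string values.
def validate_metadata(metadata: dict[str, str]) -> None:
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"metadata key exceeds 64 characters: {key!r}")
        if len(value) > 512:
            raise ValueError(f"metadata value exceeds 512 characters for key {key!r}")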

model: Optional[str]

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

output_file_id: Optional[str]

The ID of the file containing the outputs of successfully executed requests.

request_counts: Optional[BatchRequestCounts]

The request counts for different statuses within the batch.

completed: int

Number of requests that have been completed successfully.

failed: int

Number of requests that have failed.

total: int

Total number of requests in the batch.
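
Only completed and failed counts are reported, so the number of requests still pending follows by subtraction. A sketch using an illustrative batch ID:

from openai import OpenAI

client = OpenAI()

batch = client.batches.retrieve("batch_abc123")  # illustrative batch ID

counts = batch.request_counts
if counts is not None:
    pending = counts.total - counts.completed - counts.failed
    print(f"{counts.completed} completed, {counts.failed} failed, {pending} pending")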

usage: Optional[BatchUsage]

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

input_tokens: int

The number of input tokens.

input_tokens_details: InputTokensDetails

A detailed breakdown of the input tokens.

cached_tokens: int

The number of tokens that were retrieved from the cache. See the prompt caching guide for details.

output_tokens: int

The number of output tokens.

output_tokens_details: OutputTokensDetails

A detailed breakdown of the output tokens.

reasoning_tokens: int

The number of reasoning tokens.

total_tokens: int

The total number of tokens used.
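
The details objects break out subsets of the top-level counts: cached_tokens is part of input_tokens, and reasoning_tokens is part of output_tokens. A sketch reading these fields (assuming, as with other usage objects, that total_tokens is the sum of input and output tokens):

from openai import OpenAI

client = OpenAI()

batch = client.batches.retrieve("batch_abc123")  # illustrative batch ID

usage = batch.usage
if usage is not None:  # only populated for batches created after September 7, 2025
    print(f"input: {usage.input_tokens} ({usage.input_tokens_details.cached_tokens} cached)")
    print(f"output: {usage.output_tokens} ({usage.output_tokens_details.reasoning_tokens} reasoning)")
    print(f"total: {usage.total_tokens}")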

Cancel batch

from openai import OpenAI

client = OpenAI()

# Cancel an in-progress batch; returns the Batch object shown below
client.batches.cancel("batch_abc123")
{
  "id": "batch_abc123",
  "object": "batch",
  "endpoint": "/v1/chat/completions",
  "errors": null,
  "input_file_id": "file-abc123",
  "completion_window": "24h",
  "status": "cancelling",
  "output_file_id": null,
  "error_file_id": null,
  "created_at": 1711471533,
  "in_progress_at": 1711471538,
  "expires_at": 1711557933,
  "finalizing_at": null,
  "completed_at": null,
  "failed_at": null,
  "expired_at": null,
  "cancelling_at": 1711475133,
  "cancelled_at": null,
  "request_counts": {
    "total": 100,
    "completed": 23,
    "failed": 1
  },
  "metadata": {
    "customer_id": "user_123456789",
    "batch_description": "Nightly eval job",
  }
}
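
Once the status reaches cancelled, any partial results referenced by output_file_id can be downloaded through the Files API. A sketch (the batch ID is illustrative):

from openai import OpenAI

client = OpenAI()

batch = client.batches.retrieve("batch_abc123")  # illustrative batch ID

if batch.status == "cancelled" and batch.output_file_id is not None:
    content = client.files.content(batch.output_file_id)
    # The output file is JSONL: one result object per line
    for line in content.text.splitlines():
        print(line)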
Returns example
{
  "id": "id",
  "completion_window": "completion_window",
  "created_at": 0,
  "endpoint": "endpoint",
  "input_file_id": "input_file_id",
  "object": "batch",
  "status": "validating",
  "cancelled_at": 0,
  "cancelling_at": 0,
  "completed_at": 0,
  "error_file_id": "error_file_id",
  "errors": {
    "data": [
      {
        "code": "code",
        "line": 0,
        "message": "message",
        "param": "param"
      }
    ],
    "object": "object"
  },
  "expired_at": 0,
  "expires_at": 0,
  "failed_at": 0,
  "finalizing_at": 0,
  "in_progress_at": 0,
  "metadata": {
    "foo": "string"
  },
  "model": "model",
  "output_file_id": "output_file_id",
  "request_counts": {
    "completed": 0,
    "failed": 0,
    "total": 0
  },
  "usage": {
    "input_tokens": 0,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 0,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 0
  }
}