
Create batch

Batch batches().create(BatchCreateParams params, RequestOptions requestOptions = RequestOptions.none())
POST /batches

Creates and executes a batch from an uploaded file of requests.

Parameters
BatchCreateParams params
CompletionWindow completionWindow

The time frame within which the batch should be processed. Currently only 24h is supported.

_24H("24h")
Endpoint endpoint

The endpoint to be used for all requests in the batch. Currently /v1/responses, /v1/chat/completions, /v1/embeddings, /v1/completions, and /v1/moderations are supported. Note that /v1/embeddings batches are also restricted to a maximum of 50,000 embedding inputs across all requests in the batch.

V1_RESPONSES("/v1/responses")
V1_CHAT_COMPLETIONS("/v1/chat/completions")
V1_EMBEDDINGS("/v1/embeddings")
V1_COMPLETIONS("/v1/completions")
V1_MODERATIONS("/v1/moderations")
String inputFileId

The ID of an uploaded file that contains requests for the new batch.

See the upload file endpoint for how to upload a file.

Your input file must be formatted as a JSONL file, and must be uploaded with the purpose batch. The file can contain up to 50,000 requests, and can be up to 200 MB in size.
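For illustration, one line of such a JSONL file might look like the following (the custom_id, model, and body values here are placeholders, assuming a batch targeting the /v1/responses endpoint):

{"custom_id": "request-1", "method": "POST", "url": "/v1/responses", "body": {"model": "gpt-5", "input": "Say hello"}}

Each line pairs a caller-chosen custom_id with one request; the corresponding line in the batch's output file carries the same custom_id, so results can be matched back to inputs.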

Optional<Metadata> metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
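As a sketch, metadata could be attached at creation time like this, assuming the SDK's Metadata builder and JsonValue helper (the key and value below are placeholders):

Metadata metadata = Metadata.builder()
    .putAdditionalProperty("customer_id", JsonValue.from("user_123")) // placeholder key-value pair
    .build();

BatchCreateParams params = BatchCreateParams.builder()
    .completionWindow(BatchCreateParams.CompletionWindow._24H)
    .endpoint(BatchCreateParams.Endpoint.V1_RESPONSES)
    .inputFileId("input_file_id")
    .metadata(metadata) // attach the key-value pairs built above
    .build();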

Optional<OutputExpiresAfter> outputExpiresAfter

The expiration policy for the output and/or error files that are generated for a batch.

JsonValue anchor; constant "created_at"

Anchor timestamp after which the expiration policy applies. Supported anchors: created_at. Note that the anchor is the file creation time, not the time the batch is created.

long seconds

The number of seconds after the anchor time that the file will expire. Must be between 3600 (1 hour) and 2592000 (30 days).

minimum: 3600
maximum: 2592000
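A sketch of setting the expiration policy, assuming the params builder exposes an OutputExpiresAfter type that mirrors the fields above (since the anchor is the created_at constant, only seconds needs to be supplied):

BatchCreateParams params = BatchCreateParams.builder()
    .completionWindow(BatchCreateParams.CompletionWindow._24H)
    .endpoint(BatchCreateParams.Endpoint.V1_RESPONSES)
    .inputFileId("input_file_id")
    // Output and error files expire 7 days (604800 seconds) after file creation;
    // valid values range from 3600 (1 hour) to 2592000 (30 days).
    .outputExpiresAfter(BatchCreateParams.OutputExpiresAfter.builder()
        .seconds(604800L)
        .build())
    .build();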
Returns
class Batch:
String id
String completionWindow

The time frame within which the batch should be processed.

long createdAt

The Unix timestamp (in seconds) for when the batch was created.

String endpoint

The OpenAI API endpoint used by the batch.

String inputFileId

The ID of the input file for the batch.

JsonValue object_; constant "batch"

The object type, which is always batch.

Status status

The current status of the batch.

Accepts one of the following:
VALIDATING("validating")
FAILED("failed")
IN_PROGRESS("in_progress")
FINALIZING("finalizing")
COMPLETED("completed")
EXPIRED("expired")
CANCELLING("cancelling")
CANCELLED("cancelled")
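A batch moves through these states asynchronously, so callers typically poll until a terminal state (completed, failed, expired, or cancelled) is reached. A minimal polling sketch, assuming batches().retrieve accepts a batch ID (exception handling elided):

Batch batch = client.batches().create(params);
while (batch.status().equals(Batch.Status.VALIDATING)
        || batch.status().equals(Batch.Status.IN_PROGRESS)
        || batch.status().equals(Batch.Status.FINALIZING)) {
    Thread.sleep(30_000); // poll every 30 seconds; tune to your workload
    batch = client.batches().retrieve(batch.id());
}
// After completion, outputFileId points at successful results and
// errorFileId (if present) at requests that failed.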
Optional<Long> cancelledAt

The Unix timestamp (in seconds) for when the batch was cancelled.

Optional<Long> cancellingAt

The Unix timestamp (in seconds) for when the batch started cancelling.

Optional<Long> completedAt

The Unix timestamp (in seconds) for when the batch was completed.

Optional<String> errorFileId

The ID of the file containing the outputs of requests with errors.

Optional<Errors> errors
Optional<List<BatchError>> data
Optional<String> code

An error code identifying the error type.

Optional<Long> line

The line number of the input file where the error occurred, if applicable.

Optional<String> message

A human-readable message providing more details about the error.

Optional<String> param

The name of the parameter that caused the error, if applicable.

Optional<String> object_

The object type, which is always list.

Optional<Long> expiredAt

The Unix timestamp (in seconds) for when the batch expired.

Optional<Long> expiresAt

The Unix timestamp (in seconds) for when the batch will expire.

Optional<Long> failedAt

The Unix timestamp (in seconds) for when the batch failed.

Optional<Long> finalizingAt

The Unix timestamp (in seconds) for when the batch started finalizing.

Optional<Long> inProgressAt

The Unix timestamp (in seconds) for when the batch started processing.

Optional<Metadata> metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

Optional<String> model

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

Optional<String> outputFileId

The ID of the file containing the outputs of successfully executed requests.

Optional<BatchRequestCounts> requestCounts

The request counts for different statuses within the batch.

long completed

Number of requests that have been completed successfully.

long failed

Number of requests that have failed.

long total

Total number of requests in the batch.

Optional<BatchUsage> usage

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

long inputTokens

The number of input tokens.

InputTokensDetails inputTokensDetails

A detailed breakdown of the input tokens.

long cachedTokens

The number of tokens that were retrieved from the cache. See the prompt caching guide for more details.

long outputTokens

The number of output tokens.

OutputTokensDetails outputTokensDetails

A detailed breakdown of the output tokens.

long reasoningTokens

The number of reasoning tokens.

long totalTokens

The total number of tokens used.
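Once a batch finishes, the request counts and token usage can be read back from the returned object; a short sketch using accessor names assumed to mirror the fields listed above:

batch.requestCounts().ifPresent(counts ->
    System.out.printf("completed=%d failed=%d total=%d%n",
        counts.completed(), counts.failed(), counts.total()));
batch.usage().ifPresent(usage ->
    // Only populated on batches created after September 7, 2025.
    System.out.println("total tokens: " + usage.totalTokens()));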

Create batch

package com.openai.example;

import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.batches.Batch;
import com.openai.models.batches.BatchCreateParams;

public final class Main {
    private Main() {}

    public static void main(String[] args) {
        // Reads configuration such as OPENAI_API_KEY from environment variables.
        OpenAIClient client = OpenAIOkHttpClient.fromEnv();

        // Build the request: a 24-hour completion window, the /v1/responses
        // endpoint, and the ID of a JSONL file uploaded with purpose "batch".
        BatchCreateParams params = BatchCreateParams.builder()
            .completionWindow(BatchCreateParams.CompletionWindow._24H)
            .endpoint(BatchCreateParams.Endpoint.V1_RESPONSES)
            .inputFileId("input_file_id")
            .build();
        Batch batch = client.batches().create(params);
    }
}
Returns Examples
{
  "id": "id",
  "completion_window": "completion_window",
  "created_at": 0,
  "endpoint": "endpoint",
  "input_file_id": "input_file_id",
  "object": "batch",
  "status": "validating",
  "cancelled_at": 0,
  "cancelling_at": 0,
  "completed_at": 0,
  "error_file_id": "error_file_id",
  "errors": {
    "data": [
      {
        "code": "code",
        "line": 0,
        "message": "message",
        "param": "param"
      }
    ],
    "object": "object"
  },
  "expired_at": 0,
  "expires_at": 0,
  "failed_at": 0,
  "finalizing_at": 0,
  "in_progress_at": 0,
  "metadata": {
    "foo": "string"
  },
  "model": "model",
  "output_file_id": "output_file_id",
  "request_counts": {
    "completed": 0,
    "failed": 0,
    "total": 0
  },
  "usage": {
    "input_tokens": 0,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 0,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 0
  }
}