Create batch

client.Batches.New(ctx, body) (*Batch, error)
POST /batches

Creates and executes a batch from an uploaded file of requests.

Parameters
body BatchNewParams
CompletionWindow param.Field[BatchNewParamsCompletionWindow]

The time frame within which the batch should be processed. Currently only 24h is supported.

const BatchNewParamsCompletionWindow24h BatchNewParamsCompletionWindow = "24h"
Endpoint param.Field[BatchNewParamsEndpoint]

The endpoint to be used for all requests in the batch. Currently /v1/responses, /v1/chat/completions, /v1/embeddings, /v1/completions, and /v1/moderations are supported. Note that /v1/embeddings batches are also restricted to a maximum of 50,000 embedding inputs across all requests in the batch.

const BatchNewParamsEndpointV1Responses BatchNewParamsEndpoint = "/v1/responses"
const BatchNewParamsEndpointV1ChatCompletions BatchNewParamsEndpoint = "/v1/chat/completions"
const BatchNewParamsEndpointV1Embeddings BatchNewParamsEndpoint = "/v1/embeddings"
const BatchNewParamsEndpointV1Completions BatchNewParamsEndpoint = "/v1/completions"
const BatchNewParamsEndpointV1Moderations BatchNewParamsEndpoint = "/v1/moderations"
InputFileID param.Field[string]

The ID of an uploaded file that contains requests for the new batch.

See the upload file endpoint for details on uploading files.

Your input file must be formatted as a JSONL file, and must be uploaded with the purpose batch. The file can contain up to 50,000 requests, and can be up to 200 MB in size.
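
For illustration, each line of the JSONL file is one self-contained request; the custom_id ties each result back to its request, and the url must match the batch's Endpoint. A minimal upload sketch follows; the Files API names used here (openai.FileNewParams, openai.FilePurposeBatch, client.Files.New) are assumed from the SDK's conventions rather than taken from this page.

package main

import (
  "context"
  "fmt"
  "strings"

  "github.com/openai/openai-go"
  "github.com/openai/openai-go/option"
)

func main() {
  client := openai.NewClient(
    option.WithAPIKey("My API Key"),
  )

  // Two chat completion requests, one JSON object per line (JSONL).
  // The model name is a placeholder; use any model the endpoint accepts.
  jsonl := strings.Join([]string{
    `{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}}`,
    `{"custom_id": "request-2", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hi"}]}}`,
  }, "\n")

  // Upload with purpose "batch" so the file can be used as InputFileID.
  file, err := client.Files.New(context.TODO(), openai.FileNewParams{
    File:    strings.NewReader(jsonl), // assumed io.Reader-shaped field
    Purpose: openai.FilePurposeBatch,  // assumed constant name
  })
  if err != nil {
    panic(err.Error())
  }
  fmt.Println(file.ID) // pass this ID as InputFileID when creating the batch
}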

Metadata param.Field[Metadata] optional

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

OutputExpiresAfter param.Field[BatchNewParamsOutputExpiresAfter] optional

The expiration policy for the output and/or error files that are generated for a batch.

Anchor CreatedAt

Anchor timestamp after which the expiration policy applies. Supported anchors: created_at. Note that the anchor is the file creation time, not the time the batch is created.

Seconds int64

The number of seconds after the anchor time that the file will expire. Must be between 3600 (1 hour) and 2592000 (30 days).

minimum: 3600
maximum: 2592000
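
As a sketch of the optional parameters above, the fragment below extends a create call with Metadata and OutputExpiresAfter. The struct literal shapes follow this listing, the metadata keys are invented for the example, and the SDK is assumed to serialize the constant created_at anchor on its own.

batch, err := client.Batches.New(context.TODO(), openai.BatchNewParams{
  CompletionWindow: openai.BatchNewParamsCompletionWindow24h,
  Endpoint:         openai.BatchNewParamsEndpointV1ChatCompletions,
  InputFileID:      file.ID, // ID returned by the upload sketch above
  // Optional: up to 16 string pairs for later filtering (example values).
  Metadata: openai.Metadata{
    "project": "nightly-eval",
  },
  // Optional: expire the output/error files 86400 s (24 h) after they are
  // created; Seconds must be between 3600 and 2592000. Anchor is the fixed
  // constant created_at, assumed to be filled in by the SDK.
  OutputExpiresAfter: openai.BatchNewParamsOutputExpiresAfter{
    Seconds: 86400,
  },
})
if err != nil {
  panic(err.Error())
}
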
Returns
type Batch struct{…}
ID string
CompletionWindow string

The time frame within which the batch should be processed.

CreatedAt int64

The Unix timestamp (in seconds) for when the batch was created.

Endpoint string

The OpenAI API endpoint used by the batch.

InputFileID string

The ID of the input file for the batch.

Object Batch

The object type, which is always batch.

Status BatchStatus

The current status of the batch; see the polling sketch after the create example below.

Accepts one of the following:
const BatchStatusValidating BatchStatus = "validating"
const BatchStatusFailed BatchStatus = "failed"
const BatchStatusInProgress BatchStatus = "in_progress"
const BatchStatusFinalizing BatchStatus = "finalizing"
const BatchStatusCompleted BatchStatus = "completed"
const BatchStatusExpired BatchStatus = "expired"
const BatchStatusCancelling BatchStatus = "cancelling"
const BatchStatusCancelled BatchStatus = "cancelled"
CancelledAt int64 optional

The Unix timestamp (in seconds) for when the batch was cancelled.

CancellingAt int64 optional

The Unix timestamp (in seconds) for when the batch started cancelling.

CompletedAt int64 optional

The Unix timestamp (in seconds) for when the batch was completed.

ErrorFileID string optional

The ID of the file containing the outputs of requests with errors.

Errors BatchErrors optional
Data []BatchError optional
Code string optional

An error code identifying the error type.

Line int64 optional

The line number of the input file where the error occurred, if applicable.

Message string optional

A human-readable message providing more details about the error.

Param string optional

The name of the parameter that caused the error, if applicable.

Object string optional

The object type, which is always list.

ExpiredAt int64 optional

The Unix timestamp (in seconds) for when the batch expired.

ExpiresAt int64 optional

The Unix timestamp (in seconds) for when the batch will expire.

FailedAt int64 optional

The Unix timestamp (in seconds) for when the batch failed.

FinalizingAt int64 optional

The Unix timestamp (in seconds) for when the batch started finalizing.

InProgressAt int64 optional

The Unix timestamp (in seconds) for when the batch started processing.

Metadata Metadata optional

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

Model string optional

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

OutputFileID string optional

The ID of the file containing the outputs of successfully executed requests.

RequestCounts BatchRequestCounts optional

The request counts for different statuses within the batch.

Completed int64

Number of requests that have been completed successfully.

Failed int64

Number of requests that have failed.

Total int64

Total number of requests in the batch.

Usage BatchUsage optional

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

InputTokens int64

The number of input tokens.

InputTokensDetails BatchUsageInputTokensDetails

A detailed breakdown of the input tokens.

CachedTokens int64

The number of tokens that were retrieved from the cache. See the prompt caching guide for more detail.

OutputTokens int64

The number of output tokens.

OutputTokensDetails BatchUsageOutputTokensDetails

A detailed breakdown of the output tokens.

ReasoningTokens int64

The number of reasoning tokens.

TotalTokens int64

The total number of tokens used.
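
For completeness, reading the usage breakdown off a retrieved batch, with field names as listed above. That cached tokens are a subset of input tokens and reasoning tokens a subset of output tokens is an assumption carried over from the general usage object, not stated on this page.

// b is a retrieved *openai.Batch with Usage populated.
u := b.Usage
fmt.Printf("input=%d (cached=%d) output=%d (reasoning=%d) total=%d\n",
  u.InputTokens, u.InputTokensDetails.CachedTokens,
  u.OutputTokens, u.OutputTokensDetails.ReasoningTokens,
  u.TotalTokens)
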

Create batch

package main

import (
  "context"
  "fmt"

  "github.com/openai/openai-go"
  "github.com/openai/openai-go/option"
)

func main() {
  client := openai.NewClient(
    option.WithAPIKey("My API Key"),
  )
  batch, err := client.Batches.New(context.TODO(), openai.BatchNewParams{
    CompletionWindow: openai.BatchNewParamsCompletionWindow24h,
    Endpoint: openai.BatchNewParamsEndpointV1Responses,
    InputFileID: "input_file_id",
  })
  if err != nil {
    panic(err.Error())
  }
  fmt.Printf("%+v\n", batch.ID)
}
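
Creation is asynchronous, so a typical follow-up is to poll the status values listed under Returns until the batch reaches a terminal state. A minimal sketch continuing the example above (inside main, with "time" added to the imports); client.Batches.Get is assumed to be this SDK's retrieve method, while the field and constant names come from the listing.

for {
  b, err := client.Batches.Get(context.TODO(), batch.ID)
  if err != nil {
    panic(err.Error())
  }
  switch b.Status {
  case openai.BatchStatusCompleted:
    // Successful results land in OutputFileID; per-request failures,
    // if any, land in ErrorFileID.
    fmt.Printf("done: %d/%d succeeded, %d failed\n",
      b.RequestCounts.Completed, b.RequestCounts.Total, b.RequestCounts.Failed)
    fmt.Println("output file:", b.OutputFileID)
    return
  case openai.BatchStatusFailed, openai.BatchStatusExpired, openai.BatchStatusCancelled:
    // Validation errors report the offending input-file line, if applicable.
    for _, e := range b.Errors.Data {
      fmt.Printf("line %d: %s (%s)\n", e.Line, e.Message, e.Code)
    }
    return
  }
  time.Sleep(30 * time.Second) // validating, in_progress, finalizing, cancelling
}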
Returns example
{
  "id": "id",
  "completion_window": "completion_window",
  "created_at": 0,
  "endpoint": "endpoint",
  "input_file_id": "input_file_id",
  "object": "batch",
  "status": "validating",
  "cancelled_at": 0,
  "cancelling_at": 0,
  "completed_at": 0,
  "error_file_id": "error_file_id",
  "errors": {
    "data": [
      {
        "code": "code",
        "line": 0,
        "message": "message",
        "param": "param"
      }
    ],
    "object": "object"
  },
  "expired_at": 0,
  "expires_at": 0,
  "failed_at": 0,
  "finalizing_at": 0,
  "in_progress_at": 0,
  "metadata": {
    "foo": "string"
  },
  "model": "model",
  "output_file_id": "output_file_id",
  "request_counts": {
    "completed": 0,
    "failed": 0,
    "total": 0
  },
  "usage": {
    "input_tokens": 0,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 0,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 0
  }
}