Batches

Create large batches of API requests to run asynchronously.

resource openai_batch

required
completion_window: String

The time frame within which the batch should be processed. Currently only 24h is supported.

endpoint: String

The endpoint to be used for all requests in the batch. Currently /v1/responses, /v1/chat/completions, /v1/embeddings, /v1/completions, /v1/moderations, /v1/images/generations, /v1/images/edits, and /v1/videos are supported. Note that /v1/embeddings batches are also restricted to a maximum of 50,000 embedding inputs across all requests in the batch.

input_file_id: String

The ID of an uploaded file that contains requests for the new batch.

See upload file for instructions on uploading files.

Your input file must be formatted as a JSONL file, and must be uploaded with the purpose batch. The file can contain up to 50,000 requests, and can be up to 200 MB in size.
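For reference, each line of the input file is a standalone JSON object describing one request. A minimal sketch of a single line targeting /v1/chat/completions (the custom_id, model, and message content are illustrative):

```jsonl
{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}}
```

The custom_id must be unique within the file; it is echoed back in the output file so results can be matched to requests.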

optional
metadata?: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

output_expires_after?: Attributes

The expiration policy for the output and/or error files generated for a batch.

anchor: String

Anchor timestamp after which the expiration policy applies. Supported anchors: created_at. Note that the anchor is the file creation time, not the time the batch is created.

seconds: Int64

The number of seconds after the anchor time that the file will expire. Must be between 3600 (1 hour) and 2592000 (30 days).

computed
id: String
cancelled_at: Int64

The Unix timestamp (in seconds) for when the batch was cancelled.

cancelling_at: Int64

The Unix timestamp (in seconds) for when the batch started cancelling.

completed_at: Int64

The Unix timestamp (in seconds) for when the batch was completed.

created_at: Int64

The Unix timestamp (in seconds) for when the batch was created.

error_file_id: String

The ID of the file containing the outputs of requests with errors.

expired_at: Int64

The Unix timestamp (in seconds) for when the batch expired.

expires_at: Int64

The Unix timestamp (in seconds) for when the batch will expire.

failed_at: Int64

The Unix timestamp (in seconds) for when the batch failed.

finalizing_at: Int64

The Unix timestamp (in seconds) for when the batch started finalizing.

in_progress_at: Int64

The Unix timestamp (in seconds) for when the batch started processing.

model: String

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

object: String

The object type, which is always batch.

output_file_id: String

The ID of the file containing the outputs of successfully executed requests.

status: String

The current status of the batch.

errors: Attributes
data: List[Attributes]
code: String

An error code identifying the error type.

line: Int64

The line number of the input file where the error occurred, if applicable.

message: String

A human-readable message providing more details about the error.

param: String

The name of the parameter that caused the error, if applicable.

object: String

The object type, which is always list.

request_counts: Attributes

The request counts for different statuses within the batch.

completed: Int64

Number of requests that have been completed successfully.

failed: Int64

Number of requests that have failed.

total: Int64

Total number of requests in the batch.

usage: Attributes

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

input_tokens: Int64

The number of input tokens.

input_tokens_details: Attributes

A detailed breakdown of the input tokens.

cached_tokens: Int64

The number of tokens that were retrieved from the cache. Refer to the prompt caching guide for details.

output_tokens: Int64

The number of output tokens.

output_tokens_details: Attributes

A detailed breakdown of the output tokens.

reasoning_tokens: Int64

The number of reasoning tokens.

total_tokens: Int64

The total number of tokens used.

openai_batch

resource "openai_batch" "example_batch" {
  completion_window = "24h"
  endpoint = "/v1/responses"
  input_file_id = "input_file_id"
  metadata = {
    foo = "string"
  }
  output_expires_after = {
    anchor = "created_at"
    seconds = 3600
  }
}
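The computed attributes can be referenced elsewhere in a configuration. A sketch that surfaces the batch's output file ID from the example resource above (the output name is illustrative):

```hcl
output "batch_output_file_id" {
  value = openai_batch.example_batch.output_file_id
}
```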

data openai_batch

required
batch_id: String
computed
id: String
cancelled_at: Int64

The Unix timestamp (in seconds) for when the batch was cancelled.

cancelling_at: Int64

The Unix timestamp (in seconds) for when the batch started cancelling.

completed_at: Int64

The Unix timestamp (in seconds) for when the batch was completed.

completion_window: String

The time frame within which the batch should be processed.

created_at: Int64

The Unix timestamp (in seconds) for when the batch was created.

endpoint: String

The OpenAI API endpoint used by the batch.

error_file_id: String

The ID of the file containing the outputs of requests with errors.

expired_at: Int64

The Unix timestamp (in seconds) for when the batch expired.

expires_at: Int64

The Unix timestamp (in seconds) for when the batch will expire.

failed_at: Int64

The Unix timestamp (in seconds) for when the batch failed.

finalizing_at: Int64

The Unix timestamp (in seconds) for when the batch started finalizing.

in_progress_at: Int64

The Unix timestamp (in seconds) for when the batch started processing.

input_file_id: String

The ID of the input file for the batch.

model: String

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

object: String

The object type, which is always batch.

output_file_id: String

The ID of the file containing the outputs of successfully executed requests.

status: String

The current status of the batch.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

errors: Attributes
data: List[Attributes]
code: String

An error code identifying the error type.

line: Int64

The line number of the input file where the error occurred, if applicable.

message: String

A human-readable message providing more details about the error.

param: String

The name of the parameter that caused the error, if applicable.

object: String

The object type, which is always list.

request_counts: Attributes

The request counts for different statuses within the batch.

completed: Int64

Number of requests that have been completed successfully.

failed: Int64

Number of requests that have failed.

total: Int64

Total number of requests in the batch.

usage: Attributes

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

input_tokens: Int64

The number of input tokens.

input_tokens_details: Attributes

A detailed breakdown of the input tokens.

cached_tokens: Int64

The number of tokens that were retrieved from the cache. Refer to the prompt caching guide for details.

output_tokens: Int64

The number of output tokens.

output_tokens_details: Attributes

A detailed breakdown of the output tokens.

reasoning_tokens: Int64

The number of reasoning tokens.

total_tokens: Int64

The total number of tokens used.

openai_batch

data "openai_batch" "example_batch" {
  batch_id = "batch_id"
}
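A sketch of consuming the data source above, for example to expose the batch's current status (the output name is illustrative):

```hcl
output "batch_status" {
  value = data.openai_batch.example_batch.status
}
```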

data openai_batches

optional
max_items?: Int64

Maximum number of items to fetch. Defaults to 1000.

computed
items: List[Attributes]

The items returned by the data source.

id: String
completion_window: String

The time frame within which the batch should be processed.

created_at: Int64

The Unix timestamp (in seconds) for when the batch was created.

endpoint: String

The OpenAI API endpoint used by the batch.

input_file_id: String

The ID of the input file for the batch.

object: String

The object type, which is always batch.

status: String

The current status of the batch.

cancelled_at: Int64

The Unix timestamp (in seconds) for when the batch was cancelled.

cancelling_at: Int64

The Unix timestamp (in seconds) for when the batch started cancelling.

completed_at: Int64

The Unix timestamp (in seconds) for when the batch was completed.

error_file_id: String

The ID of the file containing the outputs of requests with errors.

errors: Attributes
data: List[Attributes]
code: String

An error code identifying the error type.

line: Int64

The line number of the input file where the error occurred, if applicable.

message: String

A human-readable message providing more details about the error.

param: String

The name of the parameter that caused the error, if applicable.

object: String

The object type, which is always list.

expired_at: Int64

The Unix timestamp (in seconds) for when the batch expired.

expires_at: Int64

The Unix timestamp (in seconds) for when the batch will expire.

failed_at: Int64

The Unix timestamp (in seconds) for when the batch failed.

finalizing_at: Int64

The Unix timestamp (in seconds) for when the batch started finalizing.

in_progress_at: Int64

The Unix timestamp (in seconds) for when the batch started processing.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model: String

Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

output_file_id: String

The ID of the file containing the outputs of successfully executed requests.

request_counts: Attributes

The request counts for different statuses within the batch.

completed: Int64

Number of requests that have been completed successfully.

failed: Int64

Number of requests that have failed.

total: Int64

Total number of requests in the batch.

usage: Attributes

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. Only populated on batches created after September 7, 2025.

input_tokens: Int64

The number of input tokens.

input_tokens_details: Attributes

A detailed breakdown of the input tokens.

cached_tokens: Int64

The number of tokens that were retrieved from the cache. Refer to the prompt caching guide for details.

output_tokens: Int64

The number of output tokens.

output_tokens_details: Attributes

A detailed breakdown of the output tokens.

reasoning_tokens: Int64

The number of reasoning tokens.

total_tokens: Int64

The total number of tokens used.

openai_batches

data "openai_batches" "example_batches" {

}
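A sketch of consuming the items list from the example_batches data source above, mapping each batch ID to its status with a for expression (the output name is illustrative):

```hcl
output "batch_statuses" {
  value = { for b in data.openai_batches.example_batches.items : b.id => b.status }
}
```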