BetaChatKitSessions

resource openai_beta_chatkit_session

required
user: String

A free-form string that identifies your end user and ensures this session can access other objects that share the same user scope.

workflow: Attributes

Workflow that powers the session.

id: String

Identifier for the workflow invoked by the session.

state_variables?: Dynamic

State variables forwarded to the workflow. Keys may be up to 64 characters, values must be primitive types, and the map defaults to an empty object.

tracing?: Attributes

Optional tracing overrides for the workflow invocation. When omitted, tracing is enabled by default.

enabled?: Bool

Whether tracing is enabled during the session. Defaults to true.

version?: String

Specific workflow version to run. Defaults to the latest deployed version.

optional
chatkit_configuration?: Attributes

Optional overrides for ChatKit runtime configuration features.

automatic_thread_titling?: Attributes

Configuration for automatic thread titling. When omitted, automatic thread titling is enabled by default.

enabled?: Bool

Enable automatic thread title generation. Defaults to true.

file_upload?: Attributes

Configuration for upload enablement and limits. When omitted, uploads are disabled by default (max_files 10, max_file_size 512 MB).

enabled?: Bool

Enable uploads for this session. Defaults to false.

max_file_size?: Int64

Maximum size in megabytes for each uploaded file. Defaults to 512 MB, which is the maximum allowable size.

max_files?: Int64

Maximum number of files that can be uploaded to the session. Defaults to 10.

history?: Attributes

Configuration for chat history retention. When omitted, history is enabled by default with no limit on recent_threads (null).

enabled?: Bool

Enables chat users to access previous ChatKit threads. Defaults to true.

recent_threads?: Int64

Number of recent ChatKit threads users have access to. Defaults to unlimited when unset.

rate_limits?: Attributes

Optional override for per-minute request limits. When omitted, defaults to 10.

max_requests_per_1_minute?: Int64

Maximum number of requests allowed per minute for the session. Defaults to 10.

expires_after?: Attributes

Optional override for session expiration timing in seconds from creation. Defaults to 10 minutes.

anchor?: String

Base timestamp used to calculate expiration. Currently fixed to created_at.

seconds: Int64

Number of seconds after the anchor when the session expires.
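
For example, to keep a session alive for one hour instead of the 10-minute default (the user and workflow values below are illustrative):

```hcl
resource "openai_beta_chatkit_session" "long_lived" {
  user = "user_123" # illustrative end-user identifier
  workflow = {
    id = "workflow_id" # illustrative workflow identifier
  }
  expires_after = {
    anchor  = "created_at" # currently the only supported anchor
    seconds = 3600         # expire one hour after creation
  }
}
```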

computed
id: String

Identifier for the ChatKit session.

client_secret: String

Ephemeral client secret that authenticates session requests.

expires_at: Int64

Unix timestamp (in seconds) for when the session expires.

max_requests_per_1_minute: Int64

Convenience copy of the per-minute request limit.

object: String

Type discriminator that is always chatkit.session.

status: String

Current lifecycle state of the session.

openai_beta_chatkit_session

resource "openai_beta_chatkit_session" "example_beta_chatkit_session" {
  user = "x"
  workflow = {
    id = "id"
    state_variables = {
      foo = "string"
    }
    tracing = {
      enabled = true
    }
    version = "version"
  }
  chatkit_configuration = {
    automatic_thread_titling = {
      enabled = true
    }
    file_upload = {
      enabled = true
      max_file_size = 1
      max_files = 1
    }
    history = {
      enabled = true
      recent_threads = 1
    }
  }
  expires_after = {
    anchor = "created_at"
    seconds = 1
  }
  rate_limits = {
    max_requests_per_1_minute = 1
  }
}

BetaChatKitThreads

data openai_beta_chatkit_thread

required
thread_id: String
computed
created_at: Int64

Unix timestamp (in seconds) for when the thread was created.

id: String

Identifier of the thread.

object: String

Type discriminator that is always chatkit.thread.

title: String

Optional human-readable title for the thread. Defaults to null when no title has been generated.

user: String

Free-form string that identifies your end user who owns the thread.

status: Attributes

Current status for the thread. Defaults to active for newly created threads.

type: String

Status discriminator that is always active.

reason: String

Reason that the thread was locked. Defaults to null when no reason is recorded.

openai_beta_chatkit_thread

data "openai_beta_chatkit_thread" "example_beta_chatkit_thread" {
  thread_id = "cthr_123"
}

data openai_beta_chatkit_threads

optional
before?: String

List items created before this thread item ID. Defaults to null for the newest results.

order?: String

Sort order for results by creation time. Defaults to desc.

user?: String

Filter threads that belong to this user identifier. Defaults to null to return all users.

max_items?: Int64

Maximum number of items to fetch. Defaults to 1000.

computed
items: List[Attributes]

The items returned by the data source.

id: String

Identifier of the thread.

created_at: Int64

Unix timestamp (in seconds) for when the thread was created.

object: String

Type discriminator that is always chatkit.thread.

status: Attributes

Current status for the thread. Defaults to active for newly created threads.

type: String

Status discriminator that is always active.

reason: String

Reason that the thread was locked. Defaults to null when no reason is recorded.

title: String

Optional human-readable title for the thread. Defaults to null when no title has been generated.

user: String

Free-form string that identifies your end user who owns the thread.

openai_beta_chatkit_threads

data "openai_beta_chatkit_threads" "example_beta_chatkit_threads" {
  before = "before"
  order = "asc"
  user = "x"
}
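
The computed items list can then be consumed like any Terraform collection; a sketch (the output name is illustrative):

```hcl
# Project the titles out of the computed items list.
output "recent_thread_titles" {
  value = [
    for t in data.openai_beta_chatkit_threads.example_beta_chatkit_threads.items : t.title
  ]
}
```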

BetaAssistants

Build Assistants that can call models and use tools.

resource openai_beta_assistant

required
model: String

ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

optional
description?: String

The description of the assistant. The maximum length is 512 characters.

instructions?: String

The system instructions that the assistant uses. The maximum length is 256,000 characters.

name?: String

The name of the assistant. The maximum length is 256 characters.

response_format?: String

Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
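
Because response_format is a plain string attribute in this resource, structured values such as JSON mode would be passed as an encoded JSON string. A minimal sketch, assuming the provider forwards a jsonencode-d object (the resource name and instructions are illustrative):

```hcl
resource "openai_beta_assistant" "json_mode" {
  model        = "gpt-4o"
  instructions = "You are a helpful assistant. Always respond with valid JSON."

  # JSON mode; the instructions above also tell the model to produce JSON,
  # which JSON mode requires.
  response_format = jsonencode({ type = "json_object" })
}
```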

metadata?: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

reasoning_effort?: String

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
temperature?: Float64

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

top_p?: Float64

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

tool_resources?: Attributes

A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.

code_interpreter?: Attributes
file_ids?: List[String]

A list of file IDs made available to the code_interpreter tool. There can be a maximum of 20 files associated with the tool.

tools?: List[Attributes]

A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.

type: String

The type of tool being defined: code_interpreter.

function?: Attributes
name: String

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description?: String

A description of what the function does, used by the model to choose when and how to call the function.

parameters?: Map[JSON]

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

strict?: Bool

Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.
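
Putting the function-tool attributes together, a hedged sketch of a strict function definition. The function name, description, and schema are hypothetical, and this assumes each entry of the Map[JSON] parameters attribute is passed as a JSON-encoded fragment:

```hcl
resource "openai_beta_assistant" "with_function" {
  model = "gpt-4o"

  tools = [{
    type = "function"
    function = {
      name        = "get_weather" # hypothetical function name
      description = "Look up the current weather for a city."
      strict      = true # enforce the exact schema below
      # parameters is Map[JSON]: each value here is JSON-encoded
      # (an assumption about how the provider maps this attribute).
      parameters = {
        type       = jsonencode("object")
        properties = jsonencode({ city = { type = "string" } })
        required   = jsonencode(["city"])
      }
    }
  }]
}
```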

computed
id: String

The identifier, which can be referenced in API endpoints.

created_at: Int64

The Unix timestamp (in seconds) for when the assistant was created.

object: String

The object type, which is always assistant.

openai_beta_assistant

resource "openai_beta_assistant" "example_beta_assistant" {
  model = "gpt-4o"
  description = "description"
  instructions = "instructions"
  metadata = {
    foo = "string"
  }
  name = "name"
  reasoning_effort = "none"
  response_format = "auto"
  temperature = 1
  tool_resources = {
    code_interpreter = {
      file_ids = ["string"]
    }
    file_search = {
      vector_store_ids = ["string"]
      vector_stores = [{
        chunking_strategy = {
          type = "auto"
        }
        file_ids = ["string"]
        metadata = {
          foo = "string"
        }
      }]
    }
  }
  tools = [{
    type = "code_interpreter"
  }]
  top_p = 1
}

data openai_beta_assistant

optional
assistant_id?: String
find_one_by?: Attributes
before?: String

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

order?: String

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

computed
id: String
created_at: Int64

The Unix timestamp (in seconds) for when the assistant was created.

description: String

The description of the assistant. The maximum length is 512 characters.

instructions: String

The system instructions that the assistant uses. The maximum length is 256,000 characters.

model: String

ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

name: String

The name of the assistant. The maximum length is 256 characters.

object: String

The object type, which is always assistant.

response_format: String

Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

temperature: Float64

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

top_p: Float64

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

tool_resources: Attributes

A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.

code_interpreter: Attributes
file_ids: List[String]

A list of file IDs made available to the code_interpreter tool. There can be a maximum of 20 files associated with the tool.

tools: List[Attributes]

A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.

type: String

The type of tool being defined: code_interpreter.

function: Attributes
name: String

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: String

A description of what the function does, used by the model to choose when and how to call the function.

parameters: Map[JSON]

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

strict: Bool

Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.

openai_beta_assistant

data "openai_beta_assistant" "example_beta_assistant" {
  assistant_id = "assistant_id"
}

data openai_beta_assistants

optional
before?: String

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

order?: String

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

max_items?: Int64

Maximum number of items to fetch. Defaults to 1000.

computed
items: List[Attributes]

The items returned by the data source.

id: String

The identifier, which can be referenced in API endpoints.

created_at: Int64

The Unix timestamp (in seconds) for when the assistant was created.

description: String

The description of the assistant. The maximum length is 512 characters.

instructions: String

The system instructions that the assistant uses. The maximum length is 256,000 characters.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model: String

ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

name: String

The name of the assistant. The maximum length is 256 characters.

object: String

The object type, which is always assistant.

tools: List[Attributes]

A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.

type: String

The type of tool being defined: code_interpreter.

function: Attributes
name: String

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: String

A description of what the function does, used by the model to choose when and how to call the function.

parameters: Map[JSON]

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

strict: Bool

Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.

response_format: String

Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

temperature: Float64

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

tool_resources: Attributes

A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.

code_interpreter: Attributes
file_ids: List[String]

A list of file IDs made available to the code_interpreter tool. There can be a maximum of 20 files associated with the tool.

top_p: Float64

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

openai_beta_assistants

data "openai_beta_assistants" "example_beta_assistants" {
  before = "before"
}

BetaThreads

Build Assistants that can call models and use tools.

resource openai_beta_thread

optional
messages?: List[Attributes]

A list of messages to start the thread with.

content: String

The text contents of the message.

role: String

The role of the entity that is creating the message. Allowed values include:

  • user: Indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages.
  • assistant: Indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation.
attachments?: List[Attributes]

A list of files attached to the message, and the tools they should be added to.

file_id?: String

The ID of the file to attach to the message.

tools?: List[Attributes]

The tools to add this file to.

type: String

The type of tool being defined: code_interpreter.

metadata?: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

metadata?: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

tool_resources?: Attributes

A set of resources that are made available to the assistant’s tools in this thread. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.

code_interpreter?: Attributes
file_ids?: List[String]

A list of file IDs made available to the code_interpreter tool. There can be a maximum of 20 files associated with the tool.

computed
id: String

The identifier, which can be referenced in API endpoints.

created_at: Int64

The Unix timestamp (in seconds) for when the thread was created.

object: String

The object type, which is always thread.

openai_beta_thread

resource "openai_beta_thread" "example_beta_thread" {
  messages = [{
    content = "string"
    role = "user"
    attachments = [{
      file_id = "file_id"
      tools = [{
        type = "code_interpreter"
      }]
    }]
    metadata = {
      foo = "string"
    }
  }]
  metadata = {
    foo = "string"
  }
  tool_resources = {
    code_interpreter = {
      file_ids = ["string"]
    }
    file_search = {
      vector_store_ids = ["string"]
      vector_stores = [{
        chunking_strategy = {
          type = "auto"
        }
        file_ids = ["string"]
        metadata = {
          foo = "string"
        }
      }]
    }
  }
}

data openai_beta_thread

required
thread_id: String
computed
id: String
created_at: Int64

The Unix timestamp (in seconds) for when the thread was created.

object: String

The object type, which is always thread.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

tool_resources: Attributes

A set of resources that are made available to the assistant’s tools in this thread. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.

code_interpreter: Attributes
file_ids: List[String]

A list of file IDs made available to the code_interpreter tool. There can be a maximum of 20 files associated with the tool.

openai_beta_thread

data "openai_beta_thread" "example_beta_thread" {
  thread_id = "thread_id"
}

BetaThreadsRuns

Build Assistants that can call models and use tools.

resource openai_beta_thread_run

required
thread_id: String
assistant_id: String

The ID of the assistant to use to execute this run.

optional
additional_instructions?: String

Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions.

instructions?: String

Overrides the instructions of the assistant. This is useful for modifying the behavior on a per-run basis.

max_completion_tokens?: Int64

The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. See incomplete_details for more info.

max_prompt_tokens?: Int64

The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info.

model?: String

The ID of the Model to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.

response_format?: String

Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

stream?: Bool

If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message.

tool_choice?: String

Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
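
Since tool_choice is a string attribute here, the plain modes (none, auto, required) are passed as bare strings, while forcing a specific tool would be expressed as an encoded JSON object. A sketch, assuming the provider forwards the string verbatim ("my_function" is illustrative):

```hcl
resource "openai_beta_thread_run" "forced_tool" {
  thread_id    = "thread_id"    # illustrative
  assistant_id = "assistant_id" # illustrative

  # Force the model to call one specific function.
  tool_choice = jsonencode({
    type     = "function"
    function = { name = "my_function" }
  })
}
```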

additional_messages?: List[Attributes]

Adds additional messages to the thread before creating the run.

content: String

The text contents of the message.

role: String

The role of the entity that is creating the message. Allowed values include:

  • user: Indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages.
  • assistant: Indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation.
attachments?: List[Attributes]

A list of files attached to the message, and the tools they should be added to.

file_id?: String

The ID of the file to attach to the message.

tools?: List[Attributes]

The tools to add this file to.

type: String

The type of tool being defined: code_interpreter.

metadata?: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

truncation_strategy?: Attributes

Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.

type: String

The truncation strategy to use for the thread. The default is auto. If set to last_messages, the thread will be truncated to the n most recent messages in the thread. When set to auto, messages in the middle of the thread will be dropped to fit the context length of the model, max_prompt_tokens.

last_messages?: Int64

The number of most recent messages from the thread when constructing the context for the run.

parallel_tool_calls?: Bool

Whether to enable parallel function calling during tool use.

reasoning_effort?: String

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.
  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.
  • xhigh is supported for all models after gpt-5.1-codex-max.
temperature?: Float64

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

top_p?: Float64

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

tools?: List[Attributes]

Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.

type: String

The type of tool being defined: code_interpreter.

function?: Attributes
name: String

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description?: String

A description of what the function does, used by the model to choose when and how to call the function.

parameters?: Map[JSON]

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

strict?: Bool

Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.
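As an illustrative sketch only (the function name and schema below are invented, and the exact encoding expected for parameters may differ; jsonencode is shown as one possibility), a strict function tool could be declared like this. Note that strict mode's JSON Schema subset requires every property to be listed in required and additionalProperties to be false:

  tools = [{
    type = "function"
    function = {
      name        = "get_weather"
      description = "Look up the current weather for a city."
      strict      = true
      parameters = jsonencode({
        type                 = "object"
        properties           = { city = { type = "string" } }
        required             = ["city"]
        additionalProperties = false
      })
    }
  }]
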

metadata?: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

computed
id: String

The identifier, which can be referenced in API endpoints.

cancelled_at: Int64

The Unix timestamp (in seconds) for when the run was cancelled.

completed_at: Int64

The Unix timestamp (in seconds) for when the run was completed.

created_at: Int64

The Unix timestamp (in seconds) for when the run was created.

expires_at: Int64

The Unix timestamp (in seconds) for when the run will expire.

failed_at: Int64

The Unix timestamp (in seconds) for when the run failed.

object: String

The object type, which is always thread.run.

started_at: Int64

The Unix timestamp (in seconds) for when the run was started.

status: String

The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired.

incomplete_details: Attributes

Details on why the run is incomplete. Will be null if the run is not incomplete.

reason: String

The reason why the run is incomplete. This will point to which specific token limit was reached over the course of the run.

last_error: Attributes

The last error associated with this run. Will be null if there are no errors.

code: String

One of server_error, rate_limit_exceeded, or invalid_prompt.

message: String

A human-readable description of the error.

required_action: Attributes

Details on the action required to continue the run. Will be null if no action is required.

submit_tool_outputs: Attributes

Details on the tool outputs needed for this run to continue.

tool_calls: List[Attributes]

A list of the relevant tool calls.

id: String

The ID of the tool call. This ID must be referenced when you submit the tool outputs using the Submit tool outputs to run endpoint.

function: Attributes

The function definition.

arguments: String

The arguments that the model expects you to pass to the function.

name: String

The name of the function.

type: String

The type of tool call the output is required for. For now, this is always function.

type: String

For now, this is always submit_tool_outputs.

usage: Attributes

Usage statistics related to the run. This value will be null if the run is not in a terminal state (that is, while it is in_progress, queued, etc.).

completion_tokens: Int64

Number of completion tokens used over the course of the run.

prompt_tokens: Int64

Number of prompt tokens used over the course of the run.

total_tokens: Int64

Total number of tokens used (prompt + completion).
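
Because usage is only populated once the run reaches a terminal state, token counts can be surfaced through an output value. The resource address below is assumed for illustration:

output "run_total_tokens" {
  # null until the run reaches a terminal state
  value = openai_beta_thread_run.example_beta_thread_run.usage.total_tokens
}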

openai_beta_thread_run

resource "openai_beta_thread_run" "example_beta_thread_run" {
  thread_id = "thread_id"
  assistant_id = "assistant_id"
  additional_instructions = "additional_instructions"
  additional_messages = [{
    content = "string"
    role = "user"
    attachments = [{
      file_id = "file_id"
      tools = [{
        type = "code_interpreter"
      }]
    }]
    metadata = {
      foo = "string"
    }
  }]
  instructions = "instructions"
  max_completion_tokens = 256
  max_prompt_tokens = 256
  metadata = {
    foo = "string"
  }
  model = "string"
  parallel_tool_calls = true
  reasoning_effort = "none"
  response_format = "auto"
  stream = false
  temperature = 1
  tool_choice = "none"
  tools = [{
    type = "code_interpreter"
  }]
  top_p = 1
  truncation_strategy = {
    type = "auto"
    last_messages = 1
  }
}

data openai_beta_thread_run

required
thread_id: String
optional
run_id?: String
find_one_by?: Attributes
before?: String

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

order?: String

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

computed
id: String
assistant_id: String

The ID of the assistant used for execution of this run.

cancelled_at: Int64

The Unix timestamp (in seconds) for when the run was cancelled.

completed_at: Int64

The Unix timestamp (in seconds) for when the run was completed.

created_at: Int64

The Unix timestamp (in seconds) for when the run was created.

expires_at: Int64

The Unix timestamp (in seconds) for when the run will expire.

failed_at: Int64

The Unix timestamp (in seconds) for when the run failed.

instructions: String

The instructions that the assistant used for this run.

max_completion_tokens: Int64

The maximum number of completion tokens specified to have been used over the course of the run.

max_prompt_tokens: Int64

The maximum number of prompt tokens specified to have been used over the course of the run.

model: String

The model that the assistant used for this run.

object: String

The object type, which is always thread.run.

parallel_tool_calls: Bool

Whether to enable parallel function calling during tool use.

response_format: String

Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

started_at: Int64

The Unix timestamp (in seconds) for when the run was started.

status: String

The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired.

temperature: Float64

The sampling temperature used for this run. If not set, defaults to 1.

tool_choice: String

Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.

top_p: Float64

The nucleus sampling value used for this run. If not set, defaults to 1.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

incomplete_details: Attributes

Details on why the run is incomplete. Will be null if the run is not incomplete.

reason: String

The reason why the run is incomplete. This will point to which specific token limit was reached over the course of the run.

last_error: Attributes

The last error associated with this run. Will be null if there are no errors.

code: String

One of server_error, rate_limit_exceeded, or invalid_prompt.

message: String

A human-readable description of the error.

required_action: Attributes

Details on the action required to continue the run. Will be null if no action is required.

submit_tool_outputs: Attributes

Details on the tool outputs needed for this run to continue.

tool_calls: List[Attributes]

A list of the relevant tool calls.

id: String

The ID of the tool call. This ID must be referenced when you submit the tool outputs using the Submit tool outputs to run endpoint.

function: Attributes

The function definition.

arguments: String

The arguments that the model expects you to pass to the function.

name: String

The name of the function.

type: String

The type of tool call the output is required for. For now, this is always function.

type: String

For now, this is always submit_tool_outputs.

tools: List[Attributes]

The list of tools that the assistant used for this run.

type: String

The type of tool being defined: code_interpreter

function: Attributes
name: String

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: String

A description of what the function does, used by the model to choose when and how to call the function.

parameters: Map[JSON]

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

strict: Bool

Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.

truncation_strategy: Attributes

Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.

type: String

The truncation strategy to use for the thread. The default is auto. If set to last_messages, the thread will be truncated to the n most recent messages in the thread. When set to auto, messages in the middle of the thread will be dropped to fit the context length of the model, max_prompt_tokens.

last_messages: Int64

The number of most recent messages from the thread when constructing the context for the run.

usage: Attributes

Usage statistics related to the run. This value will be null if the run is not in a terminal state (that is, while it is in_progress, queued, etc.).

completion_tokens: Int64

Number of completion tokens used over the course of the run.

prompt_tokens: Int64

Number of prompt tokens used over the course of the run.

total_tokens: Int64

Total number of tokens used (prompt + completion).

openai_beta_thread_run

data "openai_beta_thread_run" "example_beta_thread_run" {
  thread_id = "thread_id"
  run_id = "run_id"
}

data openai_beta_thread_runs

required
thread_id: String
optional
before?: String

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

order?: String

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

max_items?: Int64

Maximum number of items to fetch. Defaults to 1000.

computed
items: List[Attributes]

The items returned by the data source

id: String

The identifier, which can be referenced in API endpoints.

assistant_id: String

The ID of the assistant used for execution of this run.

cancelled_at: Int64

The Unix timestamp (in seconds) for when the run was cancelled.

completed_at: Int64

The Unix timestamp (in seconds) for when the run was completed.

created_at: Int64

The Unix timestamp (in seconds) for when the run was created.

expires_at: Int64

The Unix timestamp (in seconds) for when the run will expire.

failed_at: Int64

The Unix timestamp (in seconds) for when the run failed.

incomplete_details: Attributes

Details on why the run is incomplete. Will be null if the run is not incomplete.

reason: String

The reason why the run is incomplete. This will point to which specific token limit was reached over the course of the run.

instructions: String

The instructions that the assistant used for this run.

last_error: Attributes

The last error associated with this run. Will be null if there are no errors.

code: String

One of server_error, rate_limit_exceeded, or invalid_prompt.

message: String

A human-readable description of the error.

max_completion_tokens: Int64

The maximum number of completion tokens specified to have been used over the course of the run.

max_prompt_tokens: Int64

The maximum number of prompt tokens specified to have been used over the course of the run.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

model: String

The model that the assistant used for this run.

object: String

The object type, which is always thread.run.

parallel_tool_calls: Bool

Whether to enable parallel function calling during tool use.

required_action: Attributes

Details on the action required to continue the run. Will be null if no action is required.

submit_tool_outputs: Attributes

Details on the tool outputs needed for this run to continue.

tool_calls: List[Attributes]

A list of the relevant tool calls.

id: String

The ID of the tool call. This ID must be referenced when you submit the tool outputs using the Submit tool outputs to run endpoint.

function: Attributes

The function definition.

arguments: String

The arguments that the model expects you to pass to the function.

name: String

The name of the function.

type: String

The type of tool call the output is required for. For now, this is always function.

type: String

For now, this is always submit_tool_outputs.

response_format: String

Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

started_at: Int64

The Unix timestamp (in seconds) for when the run was started.

status: String

The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired.

thread_id: String

The ID of the thread that was executed on as a part of this run.

tool_choice: String

Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.

tools: List[Attributes]

The list of tools that the assistant used for this run.

type: String

The type of tool being defined: code_interpreter

function: Attributes
name: String

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: String

A description of what the function does, used by the model to choose when and how to call the function.

parameters: Map[JSON]

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

strict: Bool

Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.

truncation_strategy: Attributes

Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.

type: String

The truncation strategy to use for the thread. The default is auto. If set to last_messages, the thread will be truncated to the n most recent messages in the thread. When set to auto, messages in the middle of the thread will be dropped to fit the context length of the model, max_prompt_tokens.

last_messages: Int64

The number of most recent messages from the thread when constructing the context for the run.

usage: Attributes

Usage statistics related to the run. This value will be null if the run is not in a terminal state (that is, while it is in_progress, queued, etc.).

completion_tokens: Int64

Number of completion tokens used over the course of the run.

prompt_tokens: Int64

Number of prompt tokens used over the course of the run.

total_tokens: Int64

Total number of tokens used (prompt + completion).

temperature: Float64

The sampling temperature used for this run. If not set, defaults to 1.

top_p: Float64

The nucleus sampling value used for this run. If not set, defaults to 1.

openai_beta_thread_runs

data "openai_beta_thread_runs" "example_beta_thread_runs" {
  thread_id = "thread_id"
  before = "before"
}
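
The items list works with ordinary Terraform for expressions. As a sketch (the data source address is assumed), a map of run IDs to statuses can be derived like so:

output "run_statuses" {
  value = {
    for run in data.openai_beta_thread_runs.example_beta_thread_runs.items :
    run.id => run.status
  }
}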

BetaThreadsRunsSteps

Build Assistants that can call models and use tools.

data openai_beta_thread_run_step

required
run_id: String
step_id: String
thread_id: String
optional
include?: List[String]

A list of additional fields to include in the response. Currently the only supported value is step_details.tool_calls[*].file_search.results[*].content to fetch the file search result content.

See the file search tool documentation for more information.

computed
assistant_id: String

The ID of the assistant associated with the run step.

cancelled_at: Int64

The Unix timestamp (in seconds) for when the run step was cancelled.

completed_at: Int64

The Unix timestamp (in seconds) for when the run step completed.

created_at: Int64

The Unix timestamp (in seconds) for when the run step was created.

expired_at: Int64

The Unix timestamp (in seconds) for when the run step expired. A step is considered expired if the parent run is expired.

failed_at: Int64

The Unix timestamp (in seconds) for when the run step failed.

id: String

The identifier of the run step, which can be referenced in API endpoints.

object: String

The object type, which is always thread.run.step.

status: String

The status of the run step, which can be either in_progress, cancelled, failed, completed, or expired.

type: String

The type of run step, which can be either message_creation or tool_calls.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

last_error: Attributes

The last error associated with this run step. Will be null if there are no errors.

code: String

One of server_error or rate_limit_exceeded.

message: String

A human-readable description of the error.

step_details: Attributes

The details of the run step.

message_creation: Attributes
message_id: String

The ID of the message that was created by this run step.

type: String

Always message_creation.

tool_calls: List[Attributes]

An array of tool calls the run step was involved in. These can be associated with one of three types of tools: code_interpreter, file_search, or function.

id: String

The ID of the tool call.

code_interpreter: Attributes

The Code Interpreter tool call definition.

input: String

The input to the Code Interpreter tool call.

outputs: List[Attributes]

The outputs from the Code Interpreter tool call. Code Interpreter can output one or more items, including text (logs) or images (image). Each of these are represented by a different object type.

logs: String

The text output from the Code Interpreter tool call.

type: String

Always logs.

image: Attributes
file_id: String

The file ID of the image.

type: String

The type of tool call. This is always code_interpreter for this type of tool call.

function: Attributes

The definition of the function that was called.

arguments: String

The arguments passed to the function.

name: String

The name of the function.

output: String

The output of the function. This will be null if the outputs have not been submitted yet.

usage: Attributes

Usage statistics related to the run step. This value will be null while the run step’s status is in_progress.

completion_tokens: Int64

Number of completion tokens used over the course of the run step.

prompt_tokens: Int64

Number of prompt tokens used over the course of the run step.

total_tokens: Int64

Total number of tokens used (prompt + completion).

openai_beta_thread_run_step

data "openai_beta_thread_run_step" "example_beta_thread_run_step" {
  thread_id = "thread_id"
  run_id = "run_id"
  step_id = "step_id"
  include = ["step_details.tool_calls[*].file_search.results[*].content"]
}
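
Since step_details varies by step type, nested fields such as the Code Interpreter input may be absent. A defensive sketch using try (data source address assumed) avoids an error when the step is a message_creation step:

output "step_tool_input" {
  value = try(
    data.openai_beta_thread_run_step.example_beta_thread_run_step.step_details.tool_calls[0].code_interpreter.input,
    null
  )
}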

data openai_beta_thread_run_steps

required
run_id: String
thread_id: String
optional
before?: String

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

include?: List[String]

A list of additional fields to include in the response. Currently the only supported value is step_details.tool_calls[*].file_search.results[*].content to fetch the file search result content.

See the file search tool documentation for more information.

order?: String

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

max_items?: Int64

Maximum number of items to fetch. Defaults to 1000.

computed
items: List[Attributes]

The items returned by the data source

id: String

The identifier of the run step, which can be referenced in API endpoints.

assistant_id: String

The ID of the assistant associated with the run step.

cancelled_at: Int64

The Unix timestamp (in seconds) for when the run step was cancelled.

completed_at: Int64

The Unix timestamp (in seconds) for when the run step completed.

created_at: Int64

The Unix timestamp (in seconds) for when the run step was created.

expired_at: Int64

The Unix timestamp (in seconds) for when the run step expired. A step is considered expired if the parent run is expired.

failed_at: Int64

The Unix timestamp (in seconds) for when the run step failed.

last_error: Attributes

The last error associated with this run step. Will be null if there are no errors.

code: String

One of server_error or rate_limit_exceeded.

message: String

A human-readable description of the error.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

object: String

The object type, which is always thread.run.step.

run_id: String

The ID of the run that this run step is a part of.

status: String

The status of the run step, which can be either in_progress, cancelled, failed, completed, or expired.

step_details: Attributes

The details of the run step.

message_creation: Attributes
message_id: String

The ID of the message that was created by this run step.

type: String

Always message_creation.

tool_calls: List[Attributes]

An array of tool calls the run step was involved in. These can be associated with one of three types of tools: code_interpreter, file_search, or function.

id: String

The ID of the tool call.

code_interpreter: Attributes

The Code Interpreter tool call definition.

input: String

The input to the Code Interpreter tool call.

outputs: List[Attributes]

The outputs from the Code Interpreter tool call. Code Interpreter can output one or more items, including text (logs) or images (image). Each of these are represented by a different object type.

logs: String

The text output from the Code Interpreter tool call.

type: String

Always logs.

image: Attributes
file_id: String

The file ID of the image.

type: String

The type of tool call. This is always code_interpreter for this type of tool call.

function: Attributes

The definition of the function that was called.

arguments: String

The arguments passed to the function.

name: String

The name of the function.

output: String

The output of the function. This will be null if the outputs have not been submitted yet.

thread_id: String

The ID of the thread that was run.

type: String

The type of run step, which can be either message_creation or tool_calls.

usage: Attributes

Usage statistics related to the run step. This value will be null while the run step’s status is in_progress.

completion_tokens: Int64

Number of completion tokens used over the course of the run step.

prompt_tokens: Int64

Number of prompt tokens used over the course of the run step.

total_tokens: Int64

Total number of tokens used (prompt + completion).

openai_beta_thread_run_steps

data "openai_beta_thread_run_steps" "example_beta_thread_run_steps" {
  thread_id = "thread_id"
  run_id = "run_id"
  before = "before"
  include = ["step_details.tool_calls[*].file_search.results[*].content"]
}

BetaThreadsMessages

Build Assistants that can call models and use tools.

resource openai_beta_thread_message

required
thread_id: String
content: String

The text contents of the message.

role: String

The role of the entity that is creating the message. Allowed values include:

  • user: Indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages.
  • assistant: Indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation.
optional
attachments?: List[Attributes]

A list of files attached to the message, and the tools they should be added to.

file_id?: String

The ID of the file to attach to the message.

tools?: List[Attributes]

The tools to add this file to.

type: String

The type of tool being defined: code_interpreter

metadata?: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

computed
id: String

The identifier, which can be referenced in API endpoints.

assistant_id: String

If applicable, the ID of the assistant that authored this message.

completed_at: Int64

The Unix timestamp (in seconds) for when the message was completed.

created_at: Int64

The Unix timestamp (in seconds) for when the message was created.

incomplete_at: Int64

The Unix timestamp (in seconds) for when the message was marked as incomplete.

object: String

The object type, which is always thread.message.

run_id: String

The ID of the run associated with the creation of this message. Value is null when messages are created manually using the create message or create thread endpoints.

status: String

The status of the message, which can be either in_progress, incomplete, or completed.

incomplete_details: Attributes

On an incomplete message, details about why the message is incomplete.

reason: String

The reason the message is incomplete.

openai_beta_thread_message

resource "openai_beta_thread_message" "example_beta_thread_message" {
  thread_id = "thread_id"
  content = "string"
  role = "user"
  attachments = [{
    file_id = "file_id"
    tools = [{
      type = "code_interpreter"
    }]
  }]
  metadata = {
    foo = "string"
  }
}

data openai_beta_thread_message

required
thread_id: String
optional
message_id?: String
find_one_by?: Attributes
before?: String

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

order?: String

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

run_id?: String

Filter messages by the run ID that generated them.

computed
id: String
assistant_id: String

If applicable, the ID of the assistant that authored this message.

completed_at: Int64

The Unix timestamp (in seconds) for when the message was completed.

created_at: Int64

The Unix timestamp (in seconds) for when the message was created.

incomplete_at: Int64

The Unix timestamp (in seconds) for when the message was marked as incomplete.

object: String

The object type, which is always thread.message.

role: String

The entity that produced the message. One of user or assistant.

run_id: String

The ID of the run associated with the creation of this message. Value is null when messages are created manually using the create message or create thread endpoints.

status: String

The status of the message, which can be either in_progress, incomplete, or completed.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

attachments: List[Attributes]

A list of files attached to the message, and the tools they were added to.

file_id: String

The ID of the file to attach to the message.

tools: List[Attributes]

The tools to add this file to.

type: String

The type of tool being defined: code_interpreter

content: List[Attributes]

The content of the message, as an array of text and/or image parts.

image_file: Attributes
file_id: String

The File ID of the image in the message content. Set purpose="vision" when uploading the File if you need to later display the file content.

detail: String

Specifies the detail level of the image if specified by the user. low uses fewer tokens; you can opt in to high resolution using high.

type: String

Always image_file.

image_url: Attributes
url: String

The external URL of the image; must be one of the supported image types: jpeg, jpg, png, gif, or webp.

detail: String

Specifies the detail level of the image. low uses fewer tokens; you can opt in to high resolution using high. Default value is auto.

text: Attributes
annotations: List[Attributes]
end_index: Int64
file_citation: Attributes
file_id: String

The ID of the specific File the citation is from.

start_index: Int64
text: String

The text in the message content that needs to be replaced.

type: String

Always file_citation.

file_path: Attributes
file_id: String

The ID of the file that was generated.

value: String

The data that makes up the text.

refusal: String
incomplete_details: Attributes

On an incomplete message, details about why the message is incomplete.

reason: String

The reason the message is incomplete.

openai_beta_thread_message

data "openai_beta_thread_message" "example_beta_thread_message" {
  thread_id  = "thread_id"
  message_id = "message_id"
}
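
Once read, the computed attributes listed above can be referenced elsewhere in the configuration. A minimal sketch using the example block's name (the content indexing and the try() fallback are assumptions based on the schema shown above):

```hcl
output "message_role" {
  # The entity that produced the message: user or assistant.
  value = data.openai_beta_thread_message.example_beta_thread_message.role
}

output "message_text" {
  # First text part of the message content, if present; null otherwise.
  value = try(data.openai_beta_thread_message.example_beta_thread_message.content[0].text.value, null)
}
```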

data openai_beta_thread_messages

required Expand Collapse
thread_id: String
optional Expand Collapse
before?: String

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

run_id?: String

Filter messages by the run ID that generated them.

order?: String

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

max_items?: Int64

Maximum number of items to fetch. Defaults to 1000.

computed Expand Collapse
items: List[Attributes]

The items returned by the data source.

id: String

The identifier, which can be referenced in API endpoints.

assistant_id: String

If applicable, the ID of the assistant that authored this message.

attachments: List[Attributes]

A list of files attached to the message, and the tools they were added to.

file_id: String

The ID of the file to attach to the message.

tools: List[Attributes]

The tools to add this file to.

type: String

The type of tool being defined: code_interpreter

completed_at: Int64

The Unix timestamp (in seconds) for when the message was completed.

content: List[Attributes]

The content of the message, as an array of text and/or image parts.

image_file: Attributes
file_id: String

The File ID of the image in the message content. Set purpose="vision" when uploading the File if you need to later display the file content.

detail: String

Specifies the detail level of the image if specified by the user. low uses fewer tokens; you can opt in to high resolution using high.

type: String

Always image_file.

image_url: Attributes
url: String

The external URL of the image; must be one of the supported image types: jpeg, jpg, png, gif, or webp.

detail: String

Specifies the detail level of the image. low uses fewer tokens; you can opt in to high resolution using high. Default value is auto.

text: Attributes
annotations: List[Attributes]
end_index: Int64
file_citation: Attributes
file_id: String

The ID of the specific File the citation is from.

start_index: Int64
text: String

The text in the message content that needs to be replaced.

type: String

Always file_citation.

file_path: Attributes
file_id: String

The ID of the file that was generated.

value: String

The data that makes up the text.

refusal: String
created_at: Int64

The Unix timestamp (in seconds) for when the message was created.

incomplete_at: Int64

The Unix timestamp (in seconds) for when the message was marked as incomplete.

incomplete_details: Attributes

On an incomplete message, details about why the message is incomplete.

reason: String

The reason the message is incomplete.

metadata: Map[String]

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

object: String

The object type, which is always thread.message.

role: String

The entity that produced the message. One of user or assistant.

run_id: String

The ID of the run associated with the creation of this message. Value is null when messages are created manually using the create message or create thread endpoints.

status: String

The status of the message, which can be either in_progress, incomplete, or completed.

thread_id: String

The thread ID that this message belongs to.

openai_beta_thread_messages

data "openai_beta_thread_messages" "example_beta_thread_messages" {
  thread_id = "thread_id"
  before    = "before"
  run_id    = "run_id"
}
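
Because items is a list, individual attributes can be projected with a for expression. A sketch of fetching the most recent messages in a thread (the block name and the placeholder thread_id are illustrative only):

```hcl
data "openai_beta_thread_messages" "recent" {
  thread_id = "thread_id"
  order     = "desc" # sort by created_at, newest first
  max_items = 100
}

output "message_ids" {
  # IDs of up to 100 of the thread's most recent messages.
  value = [for m in data.openai_beta_thread_messages.recent.items : m.id]
}
```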