
List runs

Deprecated
beta.threads.runs.list(thread_id, **kwargs) -> CursorPage<Run { id, assistant_id, cancelled_at, 24 more }>
GET/threads/{thread_id}/runs

Returns a list of runs belonging to a thread.

Parameters
thread_id: String
after: String

A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.

before: String

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, starting with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

limit: Integer

A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.

order: :asc | :desc

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

Accepts one of the following:
:asc
:desc
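The after/before cursors combine with limit to walk the full list. A pure-Ruby sketch of the cursor semantics described above (no API calls; list_page and RUNS are illustrative stand-ins for the endpoint):

```ruby
# Simulates cursor pagination as documented: `after` is the ID of the last
# object seen, and each page returns at most `limit` objects.
RUNS = (1..45).map { |i| { id: format("run_%03d", i) } }

def list_page(runs, limit: 20, after: nil)
  start = after ? runs.index { |r| r[:id] == after } + 1 : 0
  runs[start, limit] || []
end

# Follows the `after` cursor until an empty page is returned.
def all_runs(runs, limit: 20)
  collected = []
  cursor = nil
  loop do
    page = list_page(runs, limit: limit, after: cursor)
    break if page.empty?
    collected.concat(page)
    cursor = page.last[:id]
  end
  collected
end
```

With 45 stand-in runs and the default limit of 20, three requests are needed before an empty page signals the end of the list.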
Returns
class Run { id, assistant_id, cancelled_at, 24 more }

Represents an execution run on a thread.

id: String

The identifier, which can be referenced in API endpoints.

assistant_id: String

The ID of the assistant used for execution of this run.

cancelled_at: Integer

The Unix timestamp (in seconds) for when the run was cancelled.

completed_at: Integer

The Unix timestamp (in seconds) for when the run was completed.

created_at: Integer

The Unix timestamp (in seconds) for when the run was created.

expires_at: Integer

The Unix timestamp (in seconds) for when the run will expire.

failed_at: Integer

The Unix timestamp (in seconds) for when the run failed.

incomplete_details: { reason }

Details on why the run is incomplete. Will be null if the run is not incomplete.

reason: :max_completion_tokens | :max_prompt_tokens

The reason why the run is incomplete. This will point to which specific token limit was reached over the course of the run.

Accepts one of the following:
:max_completion_tokens
:max_prompt_tokens
instructions: String

The instructions that the assistant used for this run.

last_error: { code, message }

The last error associated with this run. Will be null if there are no errors.

code: :server_error | :rate_limit_exceeded | :invalid_prompt

One of server_error, rate_limit_exceeded, or invalid_prompt.

Accepts one of the following:
:server_error
:rate_limit_exceeded
:invalid_prompt
message: String

A human-readable description of the error.

max_completion_tokens: Integer

The maximum number of completion tokens that may be used over the course of the run.

minimum: 256
max_prompt_tokens: Integer

The maximum number of prompt tokens that may be used over the course of the run.

minimum: 256
metadata: Metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
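The constraints above (at most 16 pairs, 64-character keys, 512-character values) can be checked client-side before sending. A minimal sketch (valid_metadata? is our own helper, not part of the SDK):

```ruby
# Validates a metadata hash against the documented limits:
# up to 16 pairs, keys <= 64 chars, values <= 512 chars.
def valid_metadata?(metadata)
  return false if metadata.size > 16
  metadata.all? do |key, value|
    key.to_s.length <= 64 && value.to_s.length <= 512
  end
end
```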

model: String

The model that the assistant used for this run.

object: :"thread.run"

The object type, which is always thread.run.

parallel_tool_calls: bool

Whether to enable parallel function calling during tool use.

required_action: { submit_tool_outputs, type }

Details on the action required to continue the run. Will be null if no action is required.

submit_tool_outputs: { tool_calls }

Details on the tool outputs needed for this run to continue.

tool_calls: Array[RequiredActionFunctionToolCall { id, function, type } ]

A list of the relevant tool calls.

id: String

The ID of the tool call. This ID must be referenced when you submit the tool outputs using the Submit tool outputs to run endpoint.

function: { arguments, name }

The function definition.

arguments: String

The arguments that the model expects you to pass to the function.

name: String

The name of the function.

type: :function

The type of tool call the output is required for. For now, this is always function.

type: :submit_tool_outputs

For now, this is always submit_tool_outputs.
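When a run requires action, each tool call's arguments arrive as a JSON string. A sketch of turning them into the tool_outputs payload (dispatch_tool and the run shape are illustrative; actual submission goes through the Submit tool outputs to run endpoint):

```ruby
require "json"

# Builds the tool_outputs array from a run's required_action, pairing each
# tool_call_id with the output of the corresponding local function.
def build_tool_outputs(run)
  calls = run.dig(:required_action, :submit_tool_outputs, :tool_calls) || []
  calls.map do |call|
    args = JSON.parse(call[:function][:arguments]) # arguments is a JSON string
    { tool_call_id: call[:id], output: dispatch_tool(call[:function][:name], args) }
  end
end

# Illustrative dispatcher; real code would call your own functions here.
def dispatch_tool(name, args)
  case name
  when "get_weather" then "{\"temp_c\": 21}"
  else raise "unknown tool: #{name}"
  end
end
```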

response_format: AssistantResponseFormatOption

Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

Accepts one of the following:
AssistantResponseFormatOption = :auto

auto is the default value

class ResponseFormatText { type }

Default response format. Used to generate text responses.

type: :text

The type of response format being defined. Always text.

class ResponseFormatJSONObject { type }

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.

type: :json_object

The type of response format being defined. Always json_object.

class ResponseFormatJSONSchema { json_schema, type }

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.

json_schema: { name, description, schema, strict }

Structured Outputs configuration options, including a JSON Schema.

name: String

The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: String

A description of what the response format is for, used by the model to determine how to respond in the format.

schema: Hash[Symbol, untyped]

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.

strict: bool

Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide.

type: :json_schema

The type of response format being defined. Always json_schema.
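The three response format shapes above can be built as plain hashes. A sketch (the schema contents and the name run_summary are illustrative examples):

```ruby
# The three documented response_format payloads as plain hashes.
text_format = { type: "text" }
json_mode   = { type: "json_object" } # also instruct the model to emit JSON
structured  = {
  type: "json_schema",
  json_schema: {
    name: "run_summary", # a-z, A-Z, 0-9, underscores/dashes, max 64 chars
    strict: true,
    schema: {
      type: "object",
      properties: { summary: { type: "string" } },
      required: ["summary"],
      additionalProperties: false
    }
  }
}
```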

started_at: Integer

The Unix timestamp (in seconds) for when the run was started.

status: RunStatus

The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired.

Accepts one of the following:
:queued
:in_progress
:requires_action
:cancelling
:cancelled
:failed
:completed
:incomplete
:expired
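Of the statuses above, only some are terminal; fields such as usage are only populated once the run reaches one of them. A small helper (the naming is ours):

```ruby
# Statuses after which a run will not change again.
TERMINAL_STATUSES = %w[cancelled failed completed incomplete expired].freeze

# Returns true once a run has finished and fields like `usage` are populated.
def terminal?(status)
  TERMINAL_STATUSES.include?(status.to_s)
end
```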
thread_id: String

The ID of the thread that was executed on as a part of this run.

tool_choice: Auto | AssistantToolChoice

Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.

Accepts one of the following:
Auto = :none | :auto | :required

none means the model will not call any tools and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user.

Accepts one of the following:
:none
:auto
:required
class AssistantToolChoice { type, function }

Specifies a tool the model should use. Use to force the model to call a specific tool.

type: :function | :code_interpreter | :file_search

The type of the tool. If type is function, the function name must be set.

Accepts one of the following:
:function
:code_interpreter
:file_search
function: AssistantToolChoiceFunction { name }
name: String

The name of the function to call.

tools: Array[AssistantTool]

The list of tools that the assistant used for this run.

Accepts one of the following:
class CodeInterpreterTool { type }
type: :code_interpreter

The type of tool being defined: code_interpreter

class FileSearchTool { type, file_search }
type: :file_search

The type of tool being defined: file_search

class FunctionTool { function, type }
function: FunctionDefinition { name, description, parameters, strict }
name: String

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

description: String

A description of what the function does, used by the model to choose when and how to call the function.

parameters: FunctionParameters

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

strict: bool

Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.

type: :function

The type of tool being defined: function

truncation_strategy: { type, last_messages }

Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.

type: :auto | :last_messages

The truncation strategy to use for the thread. The default is auto. If set to last_messages, the thread will be truncated to the n most recent messages in the thread. When set to auto, messages in the middle of the thread will be dropped to fit the context length of the model, max_prompt_tokens.

Accepts one of the following:
:auto
:last_messages
last_messages: Integer

The number of most recent messages from the thread when constructing the context for the run.

minimum: 1
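The last_messages strategy can be sketched as a simple slice of the thread (auto's middle-message dropping depends on the model's context length and is not reproduced here; truncate_thread is our own helper):

```ruby
# Keeps only the n most recent messages, as `last_messages` truncation does.
# Other strategy types leave the thread unchanged in this sketch.
def truncate_thread(messages, strategy)
  return messages unless strategy[:type] == "last_messages"
  messages.last(strategy[:last_messages])
end
```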
usage: { completion_tokens, prompt_tokens, total_tokens }

Usage statistics related to the run. This value will be null if the run is not in a terminal state (i.e. in_progress, queued, etc.).

completion_tokens: Integer

Number of completion tokens used over the course of the run.

prompt_tokens: Integer

Number of prompt tokens used over the course of the run.

total_tokens: Integer

Total number of tokens used (prompt + completion).
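total_tokens is simply the sum of the other two fields, as in the example response's figures (123 + 456 = 579). A trivial consistency check (usage_consistent? is our own helper):

```ruby
# Usage statistics from a completed run: total is prompt + completion.
usage = { prompt_tokens: 123, completion_tokens: 456, total_tokens: 579 }

def usage_consistent?(usage)
  usage[:total_tokens] == usage[:prompt_tokens] + usage[:completion_tokens]
end
```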

temperature: Float

The sampling temperature used for this run. If not set, defaults to 1.

top_p: Float

The nucleus sampling value used for this run. If not set, defaults to 1.

List runs

require "openai"

openai = OpenAI::Client.new(api_key: "My API Key")

page = openai.beta.threads.runs.list("thread_id")

puts(page)
{
  "object": "list",
  "data": [
    {
      "id": "run_abc123",
      "object": "thread.run",
      "created_at": 1699075072,
      "assistant_id": "asst_abc123",
      "thread_id": "thread_abc123",
      "status": "completed",
      "started_at": 1699075072,
      "expires_at": null,
      "cancelled_at": null,
      "failed_at": null,
      "completed_at": 1699075073,
      "last_error": null,
      "model": "gpt-4o",
      "instructions": null,
      "incomplete_details": null,
      "tools": [
        {
          "type": "code_interpreter"
        }
      ],
      "tool_resources": {
        "code_interpreter": {
          "file_ids": [
            "file-abc123",
            "file-abc456"
          ]
        }
      },
      "metadata": {},
      "usage": {
        "prompt_tokens": 123,
        "completion_tokens": 456,
        "total_tokens": 579
      },
      "temperature": 1.0,
      "top_p": 1.0,
      "max_prompt_tokens": 1000,
      "max_completion_tokens": 1000,
      "truncation_strategy": {
        "type": "auto",
        "last_messages": null
      },
      "response_format": "auto",
      "tool_choice": "auto",
      "parallel_tool_calls": true
    },
    {
      "id": "run_abc456",
      "object": "thread.run",
      "created_at": 1699063290,
      "assistant_id": "asst_abc123",
      "thread_id": "thread_abc123",
      "status": "completed",
      "started_at": 1699063290,
      "expires_at": null,
      "cancelled_at": null,
      "failed_at": null,
      "completed_at": 1699063291,
      "last_error": null,
      "model": "gpt-4o",
      "instructions": null,
      "incomplete_details": null,
      "tools": [
        {
          "type": "code_interpreter"
        }
      ],
      "tool_resources": {
        "code_interpreter": {
          "file_ids": [
            "file-abc123",
            "file-abc456"
          ]
        }
      },
      "metadata": {},
      "usage": {
        "prompt_tokens": 123,
        "completion_tokens": 456,
        "total_tokens": 579
      },
      "temperature": 1.0,
      "top_p": 1.0,
      "max_prompt_tokens": 1000,
      "max_completion_tokens": 1000,
      "truncation_strategy": {
        "type": "auto",
        "last_messages": null
      },
      "response_format": "auto",
      "tool_choice": "auto",
      "parallel_tool_calls": true
    }
  ],
  "first_id": "run_abc123",
  "last_id": "run_abc456",
  "has_more": false
}