
Create embeddings

embeddings.create(**kwargs) -> CreateEmbeddingResponse { data, model, object, usage }
POST /embeddings

Creates an embedding vector representing the input text.

Parameters
input: String | Array[String] | Array[Integer] | Array[Array[Integer]]

Input text to embed, encoded as a string or an array of tokens. To embed multiple inputs in a single request, pass an array of strings or an array of token arrays; a sketch of batching multiple inputs follows the list of accepted types below. The input must not exceed the maximum input tokens for the model (8192 tokens for all embedding models), cannot be an empty string, and any array must be 2048 dimensions or less. See the example Python code for counting tokens. In addition to the per-input token limit, all embedding models enforce a maximum of 300,000 tokens summed across all inputs in a single request.

Accepts one of the following:
String

The string that will be turned into an embedding.

Array[String]

An array of strings, each of which will be turned into its own embedding.

Array[Integer]

An array of token integers that will be turned into a single embedding.

Array[Array[Integer]]

An array of token arrays, each of which will be turned into its own embedding.
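
To embed several inputs at once, as described above, pass an array. A minimal sketch, assuming the response exposes the documented data, index, and embedding fields as Ruby attribute readers:

require "openai"

openai = OpenAI::Client.new(api_key: "My API Key")

# One request, several inputs: the response should contain one embedding
# per input string, with index matching the input's position.
response = openai.embeddings.create(
  input: [
    "The quick brown fox jumped over the lazy dog",
    "Pack my box with five dozen liquor jugs"
  ],
  model: :"text-embedding-3-small"
)

response.data.each do |item|
  puts("input ##{item.index}: #{item.embedding.length} floats")
end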

model: String | EmbeddingModel

ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

Accepts one of the following:
String
EmbeddingModel = :"text-embedding-ada-002" | :"text-embedding-3-small" | :"text-embedding-3-large"
Accepts one of the following:
:"text-embedding-ada-002"
:"text-embedding-3-small"
:"text-embedding-3-large"
dimensions: Integer

The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models.

Minimum: 1
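
A minimal sketch of requesting shorter vectors via dimensions, assuming the documented attribute readers on the response; 256 is an arbitrary illustrative value:

require "openai"

openai = OpenAI::Client.new(api_key: "My API Key")

# Ask for a 256-dimension vector instead of the model's default length.
response = openai.embeddings.create(
  input: "The quick brown fox jumped over the lazy dog",
  model: :"text-embedding-3-large",
  dimensions: 256
)

puts(response.data.first.embedding.length)
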
encoding_format: :float | :base64

The format to return the embeddings in. Can be either float or base64.

Accepts one of the following:
:float
:base64
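
base64 is useful when response size matters. The API packs the vector as base64-encoded little-endian 32-bit floats; the sketch below assumes the SDK returns that base64 string unchanged in the embedding field rather than decoding it for you, so the unpack step is illustrative:

require "base64"
require "openai"

openai = OpenAI::Client.new(api_key: "My API Key")

response = openai.embeddings.create(
  input: "The quick brown fox jumped over the lazy dog",
  model: :"text-embedding-3-small",
  encoding_format: :base64
)

raw = response.data.first.embedding
if raw.is_a?(String)
  # Assumption: the base64 payload is packed little-endian 32-bit floats.
  vector = Base64.decode64(raw).unpack("e*")
  puts(vector.length)
end
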
user: String

A unique identifier representing your end user, which can help OpenAI monitor and detect abuse.
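
A sketch of passing user, where "user-1234" stands in for a hypothetical identifier from your own system:

require "openai"

openai = OpenAI::Client.new(api_key: "My API Key")

response = openai.embeddings.create(
  input: "The quick brown fox jumped over the lazy dog",
  model: :"text-embedding-3-small",
  user: "user-1234"
)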

Returns
class CreateEmbeddingResponse { data, model, object, usage }
data: Array[Embedding { embedding, index, object } ]

The list of embeddings generated by the model.

embedding: Array[Float]

The embedding vector, which is a list of floats. The length of the vector depends on the model, as listed in the embedding guide.

index: Integer

The index of the embedding in the list of embeddings.

object: :embedding

The object type, which is always "embedding".

model: String

The name of the model used to generate the embedding.

object: :list

The object type, which is always "list".

usage: { prompt_tokens, total_tokens }

The usage information for the request.

prompt_tokens: Integer

The number of tokens used by the prompt.

total_tokens: Integer

The total number of tokens used by the request.
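
A short sketch of reading these fields from the response object, assuming the attribute readers mirror the field names listed above:

require "openai"

openai = OpenAI::Client.new(api_key: "My API Key")

response = openai.embeddings.create(
  input: "The quick brown fox jumped over the lazy dog",
  model: :"text-embedding-3-small"
)

vector = response.data.first.embedding   # Array[Float]
puts("model:         #{response.model}")
puts("vector length: #{vector.length}")
puts("prompt tokens: #{response.usage.prompt_tokens}")
puts("total tokens:  #{response.usage.total_tokens}")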

Create embeddings

require "openai"

openai = OpenAI::Client.new(api_key: "My API Key")

create_embedding_response = openai.embeddings.create(
  input: "The quick brown fox jumped over the lazy dog",
  model: :"text-embedding-3-small"
)

puts(create_embedding_response)
Returns Examples
{
  "data": [
    {
      "embedding": [
        0
      ],
      "index": 0,
      "object": "embedding"
    }
  ],
  "model": "model",
  "object": "list",
  "usage": {
    "prompt_tokens": 0,
    "total_tokens": 0
  }
}