Create embeddings

POST /embeddings

Creates an embedding vector representing the input text.

Body Parameters
input: string or array of string or array of number or array of array of number

Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or an array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for all embedding models), cannot be an empty string, and any array must be 2048 dimensions or less. In addition to the per-input token limit, all embedding models enforce a maximum of 300,000 tokens summed across all inputs in a single request. Example Python code for counting tokens appears after the body parameters below.

Accepts one of the following:
String = string

The string that will be turned into an embedding.

Array = array of string

The array of strings that will be turned into embeddings; each string is embedded separately.

Array = array of number

The array of token integers (a single tokenized input) that will be turned into an embedding.

Array = array of array of number

The array of token arrays that will be turned into embeddings; each token array is embedded separately.

model: string or EmbeddingModel

ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

Accepts one of the following:
UnionMember0 = string
EmbeddingModel = "text-embedding-ada-002" or "text-embedding-3-small" or "text-embedding-3-large"
Accepts one of the following:
"text-embedding-ada-002"
"text-embedding-3-small"
"text-embedding-3-large"
dimensions: optional number

The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models.

Minimum: 1
encoding_format: optional "float" or "base64"

The format to return the embeddings in. Can be either float or base64. A sketch for decoding base64 output follows the curl example below.

Accepts one of the following:
"float"
"base64"
user: optional string

A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
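
Count tokens (Python)

The input limits above can be checked client-side before sending a request. The following is a minimal sketch using the tiktoken package; it assumes the cl100k_base encoding used by the current embedding models, and the helper name count_embedding_tokens is illustrative.

import tiktoken

MAX_TOKENS_PER_INPUT = 8192        # per-input limit noted above
MAX_TOKENS_PER_REQUEST = 300_000   # summed across all inputs in one request

def count_embedding_tokens(inputs: list[str]) -> list[int]:
    # Return the token count of each input string.
    enc = tiktoken.get_encoding("cl100k_base")
    return [len(enc.encode(text)) for text in inputs]

counts = count_embedding_tokens(["The quick brown fox jumped over the lazy dog"])
assert all(n <= MAX_TOKENS_PER_INPUT for n in counts)
assert sum(counts) <= MAX_TOKENS_PER_REQUEST
print(counts)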

Returns
CreateEmbeddingResponse = object { data, model, object, usage }
data: array of Embedding { embedding, index, object }

The list of embeddings generated by the model.

embedding: array of number

The embedding vector, which is a list of floats. The length of the vector depends on the model, as listed in the embedding guide.

index: number

The index of the embedding in the list of embeddings.

object: "embedding"

The object type, which is always "embedding".

model: string

The name of the model used to generate the embedding.

object: "list"

The object type, which is always "list".

usage: object { prompt_tokens, total_tokens }

The usage information for the request.

prompt_tokens: number

The number of tokens used by the prompt.

total_tokens: number

The total number of tokens used by the request.
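
Create embeddings (Python)

As a complement to the curl example below, the following is a minimal sketch of the same request and of reading the fields described above. It assumes the official openai Python package (v1 or later) with OPENAI_API_KEY set in the environment; the second input string and the dimensions value are illustrative.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Embed several inputs in one request. dimensions is optional and only
# supported by the text-embedding-3 models.
response = client.embeddings.create(
    model="text-embedding-3-small",
    input=[
        "The quick brown fox jumped over the lazy dog",
        "A second, illustrative input string",
    ],
    dimensions=256,
)

# data is ordered; each item's index matches the position of its input.
for item in response.data:
    print(item.index, len(item.embedding))

print(response.model)
print(response.usage.prompt_tokens, response.usage.total_tokens)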

Create embeddings (curl)

curl https://api.openai.com/v1/embeddings \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -d '{
          "input": "The quick brown fox jumped over the lazy dog",
          "model": "text-embedding-3-small",
          "encoding_format": "float",
          "user": "user-1234"
        }'

{
  "data": [
    {
      "embedding": [
        0.0023064255,
        -0.009327292,
        ... (1536 floats total)
      ],
      "index": 0,
      "object": "embedding"
    }
  ],
  "model": "text-embedding-3-small",
  "object": "list",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
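
Decode base64 embeddings (Python)

When encoding_format is set to base64, each embedding is returned as a base64 string rather than a JSON array of floats, which keeps responses smaller. The following is a minimal sketch for decoding it; it assumes the string packs little-endian float32 values, which is how the official Python SDK interprets base64 embeddings, and the helper name decode_embedding is illustrative.

import base64
import struct

def decode_embedding(b64: str) -> list[float]:
    # Decode a base64-encoded embedding, assuming packed
    # little-endian float32 values.
    raw = base64.b64decode(b64)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))

# Usage, given a parsed response body:
# vector = decode_embedding(response_json["data"][0]["embedding"])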