Token counting lets you determine how many input tokens a request will use before you send it to the model. Use it to:
- Optimize prompts to fit within context limits
- Estimate costs before making API calls
- Route requests based on size (e.g., smaller prompts to faster models)
- Avoid surprises with images and files—no more character-based estimation
The input token count endpoint accepts the same input format as the Responses API. Pass text, messages, images, files, tools, or conversations—the API returns the exact count the model will receive.
Why use the token counting API?
Local tokenizers like tiktoken work for plain text, but they have limitations:
- Images and files are not supported—estimates like characters / 4 are inaccurate
- Tools and schemas add tokens that are hard to count locally
- Model-specific behavior can change tokenization (e.g., reasoning, caching)
The token counting API handles all of these. Use the same payload you would send to responses.create and get an accurate count. Then plug the result into your message validation or cost estimation flow.
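As a sketch of that flow, the helpers below turn a counted prompt into a budget check. The per-million price and context window figures are placeholder values for illustration, not real model limits—look up current numbers on the pricing and models pages.

```python
def estimate_input_cost(input_tokens: int, price_per_million: float) -> float:
    """Estimate input cost in dollars; price_per_million is a placeholder rate."""
    return input_tokens / 1_000_000 * price_per_million


def fits_in_context(input_tokens: int, context_window: int, reserved_output: int = 4096) -> bool:
    """Check whether a counted prompt leaves room for the model's response."""
    return input_tokens + reserved_output <= context_window


# Example: a 1,200-token prompt at a hypothetical $1.25 per million input tokens
cost = estimate_input_cost(1200, price_per_million=1.25)
print(f"${cost:.6f}")
print(fits_in_context(1200, context_window=400_000))
```

In practice, input_tokens would come from the count endpoint shown below rather than a hard-coded number.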
Count tokens in basic messages
```python
from openai import OpenAI

client = OpenAI()

response = client.responses.input_tokens.count(
    model="gpt-5",
    input="Tell me a joke.",
)

print(response.input_tokens)
```

Count tokens in conversations
```python
from openai import OpenAI

client = OpenAI()

response = client.responses.input_tokens.count(
    model="gpt-5",
    input=[
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 equals 4."},
        {"role": "user", "content": "What about 3 + 3?"},
    ],
)

print(response.input_tokens)
```

Count tokens with instructions
```python
from openai import OpenAI

client = OpenAI()

response = client.responses.input_tokens.count(
    model="gpt-5",
    instructions="You are a helpful assistant that explains concepts simply.",
    input="Explain quantum computing in one sentence.",
)

print(response.input_tokens)
```

Count tokens with images
Images consume tokens based on size and detail level. The token counting API returns the exact count—no guesswork.
```python
from openai import OpenAI

client = OpenAI()

# Use file_id for an uploaded file, or image_url for a URL
response = client.responses.input_tokens.count(
    model="gpt-5",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_image", "image_url": "https://example.com/chart.png"},
                {"type": "input_text", "text": "Summarize this chart."},
            ],
        }
    ],
)

print(response.input_tokens)
```

You can use file_id (from the Files API) or image_url (a URL or base64 data URL). See images and vision for details.
Count tokens with tools
Tool definitions (function schemas, MCP servers, etc.) add tokens to the context. Count them together with your input:
```python
from openai import OpenAI

client = OpenAI()

response = client.responses.input_tokens.count(
    model="gpt-5",
    tools=[
        {
            "type": "function",
            "name": "get_weather",
            "description": "Get the current weather in a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        }
    ],
    input="What is the weather in San Francisco?",
)

print(response.input_tokens)
```

Count tokens with files
File inputs—currently PDFs—are supported. Pass file_id, file_url, or file_data as you would for responses.create. The token count reflects the model’s full processed input.
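A sketch of a PDF count request, following the same pattern as the image example. The file_id below is a placeholder standing in for an ID returned by a prior Files API upload; the API call is guarded so the snippet only runs when credentials are configured.

```python
import os

# Build the message exactly as you would for responses.create.
# "file-abc123" is a placeholder file_id from a prior Files API upload.
file_message = {
    "role": "user",
    "content": [
        {"type": "input_file", "file_id": "file-abc123"},
        {"type": "input_text", "text": "Summarize this PDF."},
    ],
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.responses.input_tokens.count(
        model="gpt-5",
        input=[file_message],
    )
    print(response.input_tokens)
```

Swap file_id for file_url or file_data to match however you already supply the document to responses.create.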
API reference
For full parameters and response shape, see the Count input tokens API reference. The endpoint is:
POST /v1/responses/input_tokens

The response includes input_tokens (integer) and object: "response.input_tokens".
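If you are not using the SDK, you can call the endpoint directly. A minimal sketch with urllib, assuming the API key is in the OPENAI_API_KEY environment variable (the network call is guarded so the snippet is safe to run without credentials):

```python
import json
import os
import urllib.request

url = "https://api.openai.com/v1/responses/input_tokens"
payload = {"model": "gpt-5", "input": "Tell me a joke."}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)

if os.environ.get("OPENAI_API_KEY"):
    with urllib.request.urlopen(request) as resp:
        body = json.load(resp)
        print(body["object"], body["input_tokens"])
```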