When generating model responses or building agents, you can extend capabilities using built‑in tools, function calling, tool search, and remote MCP servers. These enable the model to search the web, retrieve from your files, load deferred tool definitions at runtime, call your own functions, or access third‑party services. Only gpt-5.4 and later models support tool_search.
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5",
  tools: [{ type: "web_search" }],
  input: "What was a positive news story from today?",
});

console.log(response.output_text);
import OpenAI from "openai";

const openai = new OpenAI();

const response = await openai.responses.create({
  model: "gpt-4.1",
  input: "What is deep research by OpenAI?",
  tools: [
    {
      type: "file_search",
      vector_store_ids: ["<vector_store_id>"],
    },
  ],
});

console.log(response);
import OpenAI from "openai";

const client = new OpenAI();

const crmNamespace = {
  type: "namespace",
  name: "crm",
  description: "CRM tools for customer lookup and order management.",
  tools: [
    {
      type: "function",
      name: "get_customer_profile",
      description: "Fetch a customer profile by customer ID.",
      parameters: {
        type: "object",
        properties: {
          customer_id: { type: "string" },
        },
        required: ["customer_id"],
        additionalProperties: false,
      },
    },
    {
      type: "function",
      name: "list_open_orders",
      description: "List open orders for a customer ID.",
      defer_loading: true,
      parameters: {
        type: "object",
        properties: {
          customer_id: { type: "string" },
        },
        required: ["customer_id"],
        additionalProperties: false,
      },
    },
  ],
};

const response = await client.responses.create({
  model: "gpt-5.4",
  input: "List open orders for customer CUST-12345.",
  tools: [crmNamespace, { type: "tool_search" }],
  parallel_tool_calls: false,
});

console.log(response.output);
import OpenAI from "openai";

const client = new OpenAI();

const tools = [
  {
    type: "function",
    name: "get_weather",
    description: "Get current temperature for a given location.",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "City and country e.g. Bogotá, Colombia",
        },
      },
      required: ["location"],
      additionalProperties: false,
    },
    strict: true,
  },
];

const response = await client.responses.create({
  model: "gpt-5",
  input: [
    { role: "user", content: "What is the weather like in Paris today?" },
  ],
  tools,
});

console.log(response.output[0]);
import OpenAI from "openai";

const client = new OpenAI();

const resp = await client.responses.create({
  model: "gpt-5",
  tools: [
    {
      type: "mcp",
      server_label: "dmcp",
      server_description: "A Dungeons and Dragons MCP server to assist with dice rolling.",
      server_url: "https://dmcp-server.deno.dev/sse",
      require_approval: "never",
    },
  ],
  input: "Roll 2d4+1",
});

console.log(resp.output_text);

Available tools
Here’s an overview of the tools available in the OpenAI platform. Select one of them for further guidance on usage.
- Function calling: Call custom code to give the model access to additional data and capabilities.
- Web search: Include data from the Internet in model response generation.
- Remote MCP servers: Give the model access to new capabilities via Model Context Protocol (MCP) servers.
- Skills: Upload and reuse versioned skill bundles in hosted shell environments.
- Shell: Run shell commands in hosted containers or in your own local runtime.
- Computer use: Create agentic workflows that enable a model to control a computer interface.
- Image generation: Generate or edit images using GPT Image.
- File search: Search the contents of uploaded files for context when generating a response.
- Tool search: Dynamically load relevant tools into the model’s context to optimize token usage.
Usage in the API
When making a request to generate a model response, you usually enable tool access by specifying configurations in the tools parameter. Each tool has its own unique configuration requirements—see the Available tools section for detailed instructions.
Based on the provided prompt, the model automatically decides whether to use a configured tool. For instance, if your prompt requests information beyond the model’s training cutoff date and web search is enabled, the model will typically invoke the web search tool to retrieve relevant, up-to-date information.
Some advanced workflows can also load more tool definitions during the interaction. For example, tool search can defer function definitions until the model decides they’re needed.
You can explicitly control or guide this behavior by setting the tool_choice parameter in the API request.
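As a sketch of that control (the function name, schema, and prompt here are illustrative, not part of the examples above), setting tool_choice to a specific function forces the model to call it rather than decide on its own:

```typescript
// Sketch: forcing a specific function call via tool_choice.
// The get_weather tool and the prompt are illustrative assumptions.
const request = {
  model: "gpt-5",
  input: "What is the weather like in Paris today?",
  tools: [
    {
      type: "function",
      name: "get_weather",
      description: "Get current temperature for a given location.",
      parameters: {
        type: "object",
        properties: { location: { type: "string" } },
        required: ["location"],
        additionalProperties: false,
      },
    },
  ],
  // Omitting tool_choice (or using "auto") lets the model decide;
  // naming a function here forces that specific call.
  tool_choice: { type: "function", name: "get_weather" },
};

// Pass the same object to client.responses.create(request).
```

With tool_choice set to "none", the model is instead prevented from calling any tool for that request.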
Usage in the Agents SDK
In the Agents SDK, the tool semantics stay the same, but the wiring moves into the agent definition and workflow design rather than a single Responses API request.
- Attach hosted tools, function tools, or hosted MCP tools directly on the agent when one specialist should call them itself.
- Expose a specialist as a tool when a manager should stay in control of the user-facing reply.
- Keep shell, apply patch, and computer-use harnesses in your runtime even when the SDK models the tool decision.
import { tool } from "@openai/agents";
import { z } from "zod";

const getWeatherTool = tool({
  name: "get_weather",
  description: "Get the weather for a given city.",
  parameters: z.object({ city: z.string() }),
  async execute({ city }) {
    return `The weather in ${city} is sunny.`;
  },
});
import { Agent } from "@openai/agents";

const summarizer = new Agent({
  name: "Summarizer",
  instructions: "Generate a concise summary of the supplied text.",
});

const mainAgent = new Agent({
  name: "Research assistant",
  tools: [
    summarizer.asTool({
      toolName: "summarize_text",
      toolDescription: "Generate a concise summary of the supplied text.",
    }),
  ],
});

Use Agent definitions when you are shaping a single specialist, Orchestration and handoffs when tools affect ownership, Guardrails and human review when tools affect approvals, and Integrations and observability when the capability comes from MCP.