The apply_patch tool lets GPT-5.1 create, update, and delete files in your codebase using structured diffs. Instead of just suggesting edits, the model emits patch operations that your application applies and then reports back on, enabling iterative, multi-step code editing workflows.
When to use
Some common scenarios where you would use apply_patch:
- Multi-file refactors – Rename symbols, extract helpers, or reorganize modules across many files at once.
- Bug fixes – Have the model both diagnose issues and emit precise patches.
- Tests & docs generation – Create new test files, fixtures, and documentation alongside code changes.
- Migrations & mechanical edits – Apply repetitive, structured updates (API migrations, type annotations, formatting fixes, etc.).
If you can describe your repo and desired change in text, apply_patch can usually generate the corresponding diffs.
Use the apply patch tool with the Responses API
At a high level, using apply_patch with the Responses API looks like this:
1. Call the Responses API with the `apply_patch` tool
   - Provide the model with context about available files (or a summary) in your `input`, or give the model tools for exploring your file system.
   - Enable the tool with `tools=[{"type": "apply_patch"}]`.
2. Let the model return one or more patch operations
   - The Response output includes one or more `apply_patch_call` objects.
   - Each call describes a single file operation: create, update, or delete.
3. Apply patches in your environment
   - Run a patch harness or script that:
     - Interprets the `operation` diff for each `apply_patch_call`.
     - Applies the patch to your working directory or repo.
     - Records whether each patch succeeded and any logs or error messages.
4. Report patch results back to the model
   - Call the Responses API again, either with `previous_response_id` or by passing back your conversation items into `input`.
   - Include an `apply_patch_call_output` event for each `call_id`, with a `status` and optional `output` string.
   - Keep `tools=[{"type": "apply_patch"}]` so the model can continue editing if needed.
5. Let the model continue or explain changes
   - The model may issue more `apply_patch_call` operations, or
   - provide a human-facing explanation of what it changed and why.
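The steps above can be sketched as a small driver function. This is a minimal sketch, not an official SDK pattern: `client` is an OpenAI SDK client, `apply_operation` stands in for your own patch harness, and output items are accessed dict-style as in the examples on this page; adapt the access style to your SDK's object types.

```python
def run_patch_loop(client, user_input, apply_operation, model="gpt-5.1"):
    """Keep calling the Responses API until the model stops emitting patches."""
    tools = [{"type": "apply_patch"}]
    response = client.responses.create(model=model, input=user_input, tools=tools)
    while True:
        calls = [item for item in response.output
                 if item["type"] == "apply_patch_call"]
        if not calls:
            return response  # no more patches: the model is done (or explaining)
        outputs = []
        for call in calls:
            ok, log = apply_operation(call["operation"])
            outputs.append({
                "type": "apply_patch_call_output",
                "call_id": call["call_id"],
                "status": "completed" if ok else "failed",
                "output": log,
            })
        response = client.responses.create(
            model=model,
            previous_response_id=response.id,
            input=outputs,
            tools=tools,
        )
```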
Example: Renaming a function with the apply patch tool
Step 1: Ask the model to plan and emit patches
```python
from openai import OpenAI

client = OpenAI()

# For brevity, we are including file context in the example input.
# Most agentic use cases should instead equip the model with tools
# for exploring file system state.
RESPONSE_INPUT = """
The user has the following files:

<BEGIN_FILES>
===== lib/fib.py
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

===== run.py
from lib.fib import fib

def main():
    print(fib(42))
<END_FILES>

You are a helpful coding assistant that should assist the user with whatever they
ask.

User query:
Help me rename the fib() function to fibonacci()
"""

response = client.responses.create(
    model="gpt-5.1",
    input=RESPONSE_INPUT,
    tools=[{"type": "apply_patch"}],
)

# response.output may contain multiple apply_patch_call entries, e.g.:
# - update lib/fib.py
# - update run.py
patch_calls = [
    item for item in response.output
    if item["type"] == "apply_patch_call"
]
```

Example apply_patch_call object
```json
{
  "id": "apc_08f3d96c87a585390069118b594f7481a088b16cda7d9415fe",
  "type": "apply_patch_call",
  "status": "completed",
  "call_id": "call_Rjsqzz96C5xzPb0jUWJFRTNW",
  "operation": {
    "type": "update_file",
    "diff": "@@\n-def fib(n):\n+def fibonacci(n):\n     if n <= 1:\n         return n\n-    return fib(n-1) + fib(n-2)\n+    return fibonacci(n-1) + fibonacci(n-2)\n",
    "path": "lib/fib.py"
  }
}
```

Step 2: Apply the patch and send results back
```python
from apply_patch_harness import apply_operation  # your implementation

results = []
for call in patch_calls:
    op = call["operation"]
    success, maybe_log_output = apply_operation(op)
    results.append({
        "type": "apply_patch_call_output",
        "call_id": call["call_id"],
        "status": "completed" if success else "failed",
        "output": maybe_log_output,
    })

followup = client.responses.create(
    model="gpt-5.1",
    previous_response_id=response.id,
    input=results,
    tools=[{"type": "apply_patch"}],
)
```

If a patch fails (for example, file not found), set `status: "failed"` and include a helpful `output` string so the model can recover:
```json
{
  "type": "apply_patch_call_output",
  "call_id": "call_cNWm41dB3RyQcLNOVTIPBWZU",
  "status": "failed",
  "output": "Could not apply patch to lib/foo.py — file not found on disk"
}
```

Apply patch operations
| Operation type | Purpose | Payload |
|---|---|---|
| `create_file` | Create a new file at `path`. | `diff` is a V4A diff representing the full file contents. |
| `update_file` | Modify an existing file at `path`. | `diff` is a V4A diff with additions, deletions, or replacements. |
| `delete_file` | Remove the file at `path`. | No `diff`; the file is deleted entirely. |
Your patch harness is responsible for interpreting the V4A diff format and applying changes. For reference implementations, see the Python Agents SDK or TypeScript Agents SDK code.
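To make the harness's job concrete, here is a deliberately simplified sketch of applying an `update_file` diff. It only handles `@@` hunk markers followed by space-prefixed context lines, `-` removals, and `+` additions, and it raises when a hunk's context cannot be located; a production harness should use the SDK implementations referenced above, which cover more of the V4A format.

```python
def apply_update_diff(original: str, diff: str) -> str:
    """Minimal sketch of an update_file V4A applier (not the full format)."""
    src = original.splitlines()
    # Split the diff into hunks, each introduced by an "@@" line
    hunks, current = [], None
    for line in diff.splitlines():
        if line.startswith("@@"):
            current = []
            hunks.append(current)
        elif current is not None and line:
            current.append(line)
    out, pos = [], 0
    for hunk in hunks:
        # Lines the hunk expects to find (context + removals)
        old = [l[1:] for l in hunk if l[0] in " -"]
        # Lines the hunk produces (context + additions)
        new = [l[1:] for l in hunk if l[0] in " +"]
        for i in range(pos, len(src) - len(old) + 1):
            if src[i:i + len(old)] == old:
                out.extend(src[pos:i])  # unchanged lines before the hunk
                out.extend(new)         # removals dropped, additions kept
                pos = i + len(old)
                break
        else:
            raise ValueError(f"Invalid Context: could not locate hunk {hunk[:2]}")
    out.extend(src[pos:])
    return "\n".join(out) + "\n"
```

Running this on the `fib` example above renames both the definition and the recursive calls while leaving the context lines untouched.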
Implementing the patch harness
When using the apply_patch tool, you don’t provide an input schema; the model knows how to construct operation objects. Your job is to:
1. Parse operations from the Response
   - Scan the Response output for items with `type: "apply_patch_call"`.
   - For each call, inspect `operation.type`, `operation.path`, and any `diff`.
2. Apply file operations
   - For `create_file` and `update_file`, apply the V4A diff to the file system or in-memory workspace.
   - For `delete_file`, remove the file at `path`.
   - Record whether each operation succeeded and any logs or error messages.
3. Return `apply_patch_call_output` events
   - For each `call_id`, emit exactly one `apply_patch_call_output` event with:
     - `status: "completed"` if the operation was applied successfully.
     - `status: "failed"` if you encountered an error (include a short, human-readable `output` string).
Safety and robustness
- Path validation: Prevent directory traversal and restrict edits to allowed directories.
- Backups: Consider backing up files (or working in a scratch copy) before applying patches.
- Error handling: Always return a `failed` status with an informative `output` string when patches cannot be applied.
- Atomicity: Decide whether you want "all-or-nothing" semantics (roll back if any patch fails) or per-file success/failure.
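For the all-or-nothing option, one approach is to apply every patch to a scratch copy of the workspace and only swap it in if all of them succeed. The sketch below assumes your harness exposes an `apply_operation(op, root)` function returning `(success, log)`; all names are illustrative.

```python
import os
import shutil
import tempfile

def apply_all_or_nothing(workspace, operations, apply_operation):
    """Apply all patches to a scratch copy; swap it in only on full success."""
    scratch = tempfile.mkdtemp(prefix="apply-patch-")
    shutil.copytree(workspace, scratch, dirs_exist_ok=True)
    results = []
    for op in operations:
        ok, log = apply_operation(op, scratch)
        results.append((ok, log))
        if not ok:
            # Roll back: discard the scratch copy, workspace is untouched
            shutil.rmtree(scratch)
            return False, results
    # Every patch applied cleanly: replace the workspace with the scratch copy
    backup = workspace + ".bak"
    os.rename(workspace, backup)
    shutil.move(scratch, workspace)
    shutil.rmtree(backup)
    return True, results
```

Report the per-operation results back as `apply_patch_call_output` events either way; on rollback, every operation after the failing one can be reported as skipped.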
Use the apply patch tool with the Agents SDK
Alternatively, you can use the Agents SDK to use the apply patch tool. You'll still have to implement the harness that handles the actual file operations, but you can use the applyDiff function to handle the diff processing.
```typescript
import { promises as fs } from "node:fs";
import { applyDiff, Agent, run, applyPatchTool, Editor } from "@openai/agents";

class WorkspaceEditor implements Editor {
  async createFile(operation) {
    // Convert the diff to the file content
    const content = applyDiff("", operation.diff, "create");
    // Write the file content to the file system
    await fs.writeFile(operation.path, content, "utf8");
    return { status: "completed", output: `Created ${operation.path}` };
  }

  async updateFile(operation) {
    // Read the current file content from the file system
    const current = await fs.readFile(operation.path, "utf8");
    // Convert the diff to the new file content
    const newContent = applyDiff(current, operation.diff);
    // Write the updated file content back to the file system
    await fs.writeFile(operation.path, newContent, "utf8");
    return { status: "completed", output: `Updated ${operation.path}` };
  }

  async deleteFile(operation) {
    // Delete the file from the file system
    await fs.unlink(operation.path);
    return { status: "completed", output: `Deleted ${operation.path}` };
  }
}

const editor = new WorkspaceEditor();

const agent = new Agent({
  name: "Patch Assistant",
  model: "gpt-5.1",
  instructions:
    "You can edit files inside the /tmp directory using the apply_patch tool.",
  tools: [
    applyPatchTool({
      editor,
      // Could also be a function for you to determine if approval is needed
      needsApproval: true,
      onApproval: async (_ctx, _approvalItem) => {
        // Create your own approval logic
        return { approve: true };
      },
    }),
  ],
});

const result = await run(
  agent,
  "Create tasks.md with a shopping checklist of 5 entries."
);

console.log(`\nFinal response:\n${result.finalOutput}`);
```

You can find full working examples on GitHub:
- Example of how to use the apply patch tool with the Agents SDK in TypeScript
- Example of how to use the apply patch tool with the Agents SDK in Python
Handling common errors
Use `status: "failed"` plus a clear `output` message to help the model recover.
```json
{
  "type": "apply_patch_call_output",
  "call_id": "call_abc",
  "status": "failed",
  "output": "Error: File not found at path 'lib/baz.py'"
}
```
```json
{
  "type": "apply_patch_call_output",
  "call_id": "call_abc",
  "status": "failed",
  "output": "Error: Invalid Context:\n@@ def fib(n):"
}
```

The model can then adjust future diffs (for example, by re-reading a file in your prompt or simplifying a change) based on these error messages.
Best practices
- Give clear file context
  - When you call the Responses API, include either an inline snapshot of your files (as in the example) or give the model tools for exploring your filesystem (like the `shell` tool).
- Consider pairing apply_patch with the `shell` tool
  - When used in conjunction with the `shell` tool, the model can explore file system directories, read files, and grep for keywords, enabling agentic file discovery and editing.
- Encourage small, focused diffs
  - In your system instructions, nudge the model toward minimal, targeted edits rather than huge rewrites.
- Make sure changes apply cleanly
  - After a series of patches, run your tests or linters and share failures back in the next `input` so the model can fix them.
Usage notes
| API Availability | Supported models |
|---|---|
| Responses API | GPT-5.1 |