
My Favourite Prompt Rule for AI Coding Agents

2026.03.08

“STOPPPP!!” … ahh the joys of working with agents.

One thing that has helped my workflow a lot is introducing what I call finish criteria for a task.

Finish criteria for a task

  • Explain what changed and where.
  • Mention any commands run.
  • Confirm the blast radius (files touched).

Here’s what this looks like in my AGENTS.md file:

## Finish criteria for a task
- Explain what changed and where.
- Mention any commands run.
- Confirm the blast radius (files touched).

It’s really three small rules bundled together under one umbrella: how an agent should finish a loop.

These constraints help a lot because they force the model to pause and summarize its work before continuing.

Instead of silently editing files and moving on, the agent has to stop and explain what it just did.

The rule I care about most: blast radius

Out of the three, the line I care about most is this one:

Confirm the blast radius (files touched).

This creates a simple boundary around the change.

It pushes the model to answer a straightforward question:

What files did you actually change?

That answer helps me quickly understand two things:

  1. What was done
  2. Whether the change stayed within the scope I had in mind

Here's a real example of this kind of report from one of my agent runs:

```
What changed:
Added a markdown code block example showing the three finish criteria rules formatted as they would appear in an AGENTS.md or similar prompt rule file

Blast radius:
1 file modified: src/content/posts/my-favourite-prompt-rule-for-ai-coding-agents.md
```

If the response is “two files,” that usually feels reasonable to me. If it comes back with “fifteen files across multiple directories,” that is usually a signal that I should pause and take a closer look.
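When a reported number looks off, I can cross-check it against git myself. Here's a minimal Python sketch of that check, assuming the agent's edits are still uncommitted in the working tree:

```python
# Sketch: cross-check an agent's reported blast radius against git.
# Assumes you're in a git repo and the agent's edits are uncommitted.
import subprocess


def changed_files(repo: str = ".") -> list[str]:
    """Tracked files modified since HEAD in the given repo.

    Note: brand-new untracked files won't show up here;
    `git status --porcelain` would catch those as well.
    """
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]


# Usage (inside a repo with uncommitted agent edits):
#   touched = changed_files()
#   print(len(touched), "file(s) touched:", touched)
```

If the count disagrees with what the agent reported, that's another reason to slow down and read the diff.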

Preventing agents from running wild

Something else I have found useful is combining the blast radius rule with a soft constraint:

If more than three files need to be edited, stop and ask before continuing.

For me, that single rule prevents a lot of unexpected changes. Instead of letting the agent wander through the repository, it pauses and asks whether the larger change is intentional.
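In AGENTS.md form, the combined version looks roughly like this (the exact wording is mine; tune the threshold to taste):

```
## Finish criteria for a task
- Explain what changed and where.
- Mention any commands run.
- Confirm the blast radius (files touched).
- If more than three files need to be edited, stop and ask before continuing.
```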

That keeps the interaction more conversational.

I can ask for a change, let the model propose an edit, see the blast radius, and then decide whether to proceed.

The workflow starts to feel closer to a code review than an automated rewrite.

Why conversational loops work better (for me)

I have personally had better results when interacting with models like Codex in smaller loops rather than one large prompt.

Instead of trying to specify everything up front, I usually prefer to:

  • ask for a small change
  • inspect the blast radius
  • iterate

Once the model reports what it changed, it becomes much easier for me to guide the next step.

The interaction feels less like launching a script and more like collaborating with a fast engineer who still needs boundaries.

Where this idea came from

My thinking here was influenced a lot by Peter Steinberger’s post:

https://steipete.me/posts/just-talk-to-it

One idea that resonated with me from that post is that working with AI tools often feels more effective when it resembles a conversation rather than a complicated prompt.

Instead of trying to perfectly engineer the initial instruction, it can be more productive to guide the system step by step.

The finish criteria rule is one small structure I have adopted to support that style of interaction.

A simple structure that works well for me

The three finish criteria rules work together:

  • Explain what changed and where gives me a quick summary.
  • Mention any commands run helps me reproduce the work if needed.
  • Confirm the blast radius helps me understand the scope of the change.

Individually they are simple. Together they create a clear stopping point for each AI task.
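As a toy illustration (my own sketch, not part of any agent framework), you could even lint an agent's final message for that stopping point:

```python
# Toy sketch: check that an agent's final message covers all three
# finish criteria. The section names are my own convention.
REQUIRED = ("what changed", "commands", "blast radius")


def missing_sections(message: str) -> list[str]:
    """Return the finish-criteria sections absent from the message."""
    text = message.lower()
    return [name for name in REQUIRED if name not in text]


report = """\
What changed: tightened the retry logic in the HTTP client.
Commands run: none.
Blast radius: 1 file modified."""

print(missing_sections(report))  # → []
```

In practice I just read the report myself, but the function makes the idea concrete: a task isn't done until all three sections are present.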

For me, that stopping point makes AI-assisted development feel a lot more manageable… and uh doesn’t cause me to curse as much, so that’s a good thing.