Building Effective Agents
OPERATOR NOTE · 8 APRIL 2026

The day our evaluator optimised itself into a loop

14 iterations. Iteration 14 was no better than iteration 3. The marginal-improvement detector is the fix.

By Oliver Wakefield-Smith, Digital Signet
Last verified 8 April 2026

What happened

On one of our sites, the evaluator-optimiser pattern decided the optimiser's output was always not quite good enough and kept asking for revisions. The agent ran 14 iterations before hitting the budget cap. The output of iteration 14 was no better than the output of iteration 3.

Why it happened

The evaluator was running on the same model as the optimiser, with a similar prompt context. It had nothing genuinely new to say after iteration 3. It said something anyway, because we asked it to, and the loop continued.

The fix, two layers

One: a hard cap on iterations (we now use 5). Two: a marginal-improvement detector. If iterations N and N+1 differ by less than X tokens of edit distance, accept iteration N+1 and stop. The detector is the more important of the two; the cap is the safety net.
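
A minimal sketch of how the two layers might compose, not our production code: the run_optimiser and run_evaluator callables, the feedback.accepted field, the whitespace tokenisation, and the 20-token threshold are all illustrative assumptions standing in for the real pipeline.

from difflib import SequenceMatcher

MAX_ITERATIONS = 5        # layer one: the hard cap (safety net)
MIN_EDIT_DISTANCE = 20    # layer two: "X tokens" threshold; value is illustrative

def token_edit_distance(a: str, b: str) -> int:
    # Approximate token-level edit distance between two drafts:
    # tokens in either draft that fall outside the matching blocks.
    a_tokens, b_tokens = a.split(), b.split()
    matched = sum(block.size for block in
                  SequenceMatcher(a=a_tokens, b=b_tokens).get_matching_blocks())
    return (len(a_tokens) - matched) + (len(b_tokens) - matched)

def evaluate_and_optimise(task, run_optimiser, run_evaluator):
    draft = run_optimiser(task, feedback=None)           # iteration 1
    for _ in range(MAX_ITERATIONS - 1):                  # layer one: hard cap
        feedback = run_evaluator(task, draft)
        if feedback.accepted:                            # evaluator is satisfied
            return draft
        revised = run_optimiser(task, feedback=feedback)
        # Layer two: marginal-improvement detector. If the new draft barely
        # differs from the previous one, accept it and stop looping.
        if token_edit_distance(draft, revised) < MIN_EDIT_DISTANCE:
            return revised
        draft = revised
    return draft  # cap reached: return the latest draft rather than loop on

The detector does the real work; in practice the cap should almost never fire once the distance check is in place.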

The numbers

Median iterations before the fix: 3. Median iterations after the fix: 2. P99 before: hitting the cap (which was 10 at the time). P99 after: 4. The cost reduction was material on this task class, but the more useful change was that the cap stopped firing.

The lesson

An evaluator-optimiser pattern with no marginal-improvement detector will, given enough runs, find a task on which it loops. The Loop is named on the Failure Pyramid and the fix lives in the evaluator-optimiser pattern essay. If you are running this pattern without a detector, you are running it incompletely.

Read next

Evaluator-Optimiser pattern

The pattern this Note touches.

Failure Pyramid

The Loop, named.

ABOUT THE AUTHOR
Oliver Wakefield-Smith
Founder, Digital Signet

Oliver runs Digital Signet, a research and product studio that operates ~500 production sites with AI agents as the engineering layer. The Digital Signet portfolio is built using a continuous AI-agent build pipeline, one of the largest agent-operated publishing operations on the open web. The handbook draws directly from those deployments: real cost data, real failure modes, real recovery patterns.