What "production" means here
The dataset behind this site is the Digital Signet portfolio: roughly 500 production sites that run continuously with AI agents as the engineering layer. Every site has live traffic, real users, real failure modes, and a real cost line. Build, deploy, monitoring, content updates, and a portion of editorial work are performed by AI agents inside our pipeline.
What we count: agent activity inside the pipeline. Build runs. Deploy runs. Monitoring runs. Content-update runs. Cost telemetry per run. Failure incidents per run. What we do not count: an engineer's personal Cursor session outside the pipeline. A one-off prompt against a vendor API. A weekend prototype on a laptop. Those are interesting; they are not the dataset.
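For concreteness, a per-run telemetry record could look like the TypeScript sketch below. The type and field names are hypothetical, chosen to mirror the categories above rather than our actual schema.

```typescript
// Hypothetical shape of a per-run telemetry record. Field names are
// illustrative; they mirror the categories above, not our real schema.
type RunKind = "build" | "deploy" | "monitoring" | "content-update";

interface PipelineRun {
  runId: string;            // unique identifier for this agent run
  siteId: string;           // anonymised before any figure is published
  kind: RunKind;            // which pipeline stage the agent performed
  startedAt: string;        // ISO-8601 timestamp
  costUsd: number;          // cost telemetry per run
  failureIncidents: number; // incidents recorded during the run
  outcomeQuality: number;   // 0-1 score from our review pass
}
```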
What "operator-credentialed" means
The phrase is precise. It does not mean we have used every tool we review across every workflow. It means we have run the tool in production, observed it across a meaningful number of runs, and have telemetry on cost, failure rate, and outcome quality.
Where we have not run a tool at scale, we say so explicitly inside the review: the review section is shorter and the conclusion is hedged. We do not pretend to have data we do not have. The clearest signal we can give is "observed across N tasks in our pipeline"; a review either carries that signal or states plainly that it cannot.
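Expressed as a predicate, the bar reads something like the sketch below. Every name is illustrative, and the run threshold is an assumption; "a meaningful number of runs" is deliberately not a fixed figure in our editorial policy.

```typescript
// Hypothetical predicate for the "operator-credentialed" bar: the tool
// ran in production, across enough runs, with telemetry on cost,
// failure rate, and outcome quality. The threshold is illustrative.
interface ToolEvidence {
  tool: string;
  productionRuns: number;    // runs observed inside the pipeline
  hasCostTelemetry: boolean;
  hasFailureTelemetry: boolean;
  hasOutcomeScores: boolean;
}

const MIN_RUNS = 100; // illustrative stand-in for "a meaningful number"

function isOperatorCredentialed(e: ToolEvidence): boolean {
  return (
    e.productionRuns >= MIN_RUNS &&
    e.hasCostTelemetry &&
    e.hasFailureTelemetry &&
    e.hasOutcomeScores
  );
}
```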
Sources
The site draws on five source classes:
- Internal pipeline telemetry, the primary dataset. Cost, failure rate, and outcome quality across our portfolio. Anonymised before publication.
- Anthropic's "Building Effective Agents" post (Schluntz & Zhang, December 2024), the source of the five-pattern taxonomy.
- Russell & Norvig, fourth edition, for definitional grounding on agency and autonomy.
- Per-tool vendor docs, linked inline, for verifying pricing, version numbers, and feature claims.
- Reader corrections and citations, linked inline when they materially change a claim.
Last-verified discipline
Every page carries a Last-verified date in its header. Reviews are refreshed at least quarterly. Pattern essays are reviewed annually. Comparison pages are refreshed quarterly because tool capability and pricing change quickly. Operator Notes are dated at publication and never edited afterwards; if a Note becomes stale or wrong, we publish a correcting Note rather than silently editing the original.
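As an illustration of how that cadence can be checked mechanically, here is a sketch under assumed page-type labels; `refreshIntervalDays` and `isDue` are hypothetical names, not part of any published tooling.

```typescript
// Hypothetical staleness check for the cadences described above.
type PageType = "review" | "pattern-essay" | "comparison" | "operator-note";

// Maximum age in days before a page is due for re-verification.
// Operator Notes are never refreshed; a correcting Note is published instead.
const refreshIntervalDays: Record<PageType, number | null> = {
  review: 90,            // quarterly minimum
  comparison: 90,        // quarterly
  "pattern-essay": 365,  // annual review
  "operator-note": null, // dated at publication, never edited
};

function isDue(page: PageType, lastVerified: Date, now = new Date()): boolean {
  const interval = refreshIntervalDays[page];
  if (interval === null) return false; // never refreshed in place
  const ageDays = (now.getTime() - lastVerified.getTime()) / 86_400_000;
  return ageDays > interval;
}
```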
AI-assisted writing disclosure
We use AI tools (Claude, internally) to draft, summarise, and quality-check content. Every page is reviewed and edited by Oliver before publication. AI is a writing assistant, not an editor. No page on this site is published without a human read-through, including a fact-check pass on every numerical claim.
Corrections
Email oliver@digitalsignet.com with the claim in question and citations to better sources. We respond within 48 hours. If we agree with the correction, we update the page, date the change, and acknowledge the contributor by initials unless they prefer otherwise.
What we will not publish
We will not publish a review of a tool we have not run, except as part of the open-source round-up where the format is explicitly summary-with-caveats. We will not publish "sponsored content" under any framing. We will not publish exact production cost figures where they could identify a specific Digital Signet customer. We will publish anonymised distributions, percentile data, and comparative cost ratios; we will not publish absolute totals attached to identifiable sites.
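To make the last distinction concrete, here is a sketch of the transformation we mean: per-site costs go in, and only portfolio-level percentiles and ratios come out. The function names are illustrative.

```typescript
// Illustrative sketch of the publication rule: per-site costs come in,
// only portfolio-level percentiles and ratios come out. No figure in
// the output is attached to an identifiable site.
function percentile(sorted: number[], p: number): number {
  const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[idx];
}

function publishableCostSummary(costsUsd: number[]) {
  const sorted = [...costsUsd].sort((a, b) => a - b);
  return {
    p10: percentile(sorted, 10),
    p50: percentile(sorted, 50),
    p90: percentile(sorted, 90),
    p90OverP10: percentile(sorted, 90) / percentile(sorted, 10), // comparative ratio
  };
}
```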

Oliver runs Digital Signet, a research and product studio that operates roughly 500 production sites with AI agents as the engineering layer. The portfolio runs on a continuous agent pipeline, one of the largest agent-operated publishing operations on the open web, and this handbook draws directly from those deployments: real cost data, real failure modes, real recovery patterns.