What it actually does
OpenAI Operator runs a hosted browser and an agent loop. You ask it to complete a task on a website (book a flight, compare prices, fill a form), and the agent navigates the browser to do it. The pitch is the same as Anthropic Computer Use; the implementations differ.
Anthropic Computer Use
Anthropic ships Computer Use as a model capability rather than a hosted product. The trade-off: you bring your own browser, you get full control over the environment, and you take full responsibility for the security boundary. For controlled internal automation this is the better shape. For consumer-facing tasks, Operator is more polished.
OpenAI vs Anthropic vs Perplexity
Operator is the most polished consumer-facing computer-use product. Anthropic is the most controllable. Perplexity Comet is positioned specifically for research workflows. Pick based on the workflow shape, not the brand.
What is good
- Task completion on simple browser tasks is real. Form-filling, price comparison, booking flows when the site is well-behaved.
- Hosted environment means no local setup. The trade-off is that the task runs in OpenAI's sandbox, not yours.
- Improving fast. The capability gap between 2025 and 2026 is meaningful.
What is broken or surprising
- Production risk on browser automation. A computer-use agent acts on real services with real consequences. Auth flows, payment flows, and irreversible actions need explicit gates. The pattern is the Confidence Gate, applied to action permissions.
- Site-specific brittleness. Sites change. Agents that worked yesterday fail today. Treat the dependency as fragile.
- Cost-per-task variance on long browser sessions can spike when the agent retries through interruptions.
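The first bullet above, gating irreversible actions behind explicit permission, can be sketched as a small policy table. Everything here is illustrative: the action names, the `Risk` tiers, and the `gate` function are hypothetical, not part of any vendor's API.

```python
from enum import Enum, auto
from typing import Callable

class Risk(Enum):
    READ_ONLY = auto()      # navigation, scraping, price comparison
    REVERSIBLE = auto()     # drafting a form, adding to a cart
    IRREVERSIBLE = auto()   # payments, bookings, account changes

# Hypothetical policy table mapping agent actions to risk tiers.
ACTION_RISK = {
    "navigate": Risk.READ_ONLY,
    "fill_form": Risk.REVERSIBLE,
    "submit_payment": Risk.IRREVERSIBLE,
    "confirm_booking": Risk.IRREVERSIBLE,
}

def gate(action: str, confirm: Callable[[str], bool]) -> bool:
    """Allow an action, routing irreversible ones through a confirmation callback."""
    # Unknown actions default to the strictest tier: fail closed, not open.
    risk = ACTION_RISK.get(action, Risk.IRREVERSIBLE)
    if risk is Risk.IRREVERSIBLE:
        return confirm(action)  # explicit human yes/no before the agent proceeds
    return True
```

The key design choice is the fail-closed default: an action the policy has never seen is treated as irreversible until someone says otherwise.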
When you would choose it
Pick OpenAI Operator for consumer-facing web tasks where the polish matters. Pick Anthropic Computer Use for internal automation where you control the environment. Pick Perplexity Comet for research workflows. Skip computer-use class agents for any task where an API exists; prefer the API.
Cost at scale
Pricing is tiered across subscription and usage. Cost is competitive on simple tasks; on long browser sessions the variance grows. Cap session duration and require explicit confirmation before any irreversible action.
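The session-capping advice above can be sketched as a small budget object the agent loop consults before each step and each retry. The class name, limits, and numbers are all assumptions for illustration, not a real SDK.

```python
import time

class SessionBudget:
    """Cap wall-clock time and retries for one browser-agent session."""

    def __init__(self, max_seconds: float = 300.0, max_retries: int = 3):
        # Illustrative defaults: 5 minutes of wall clock, 3 retries total.
        self.deadline = time.monotonic() + max_seconds
        self.retries_left = max_retries

    def allow_step(self) -> bool:
        """Check before each agent step; False means abort the session."""
        return time.monotonic() < self.deadline

    def allow_retry(self) -> bool:
        """Check before retrying through an interruption; spends one retry."""
        if self.retries_left <= 0:
            return False
        self.retries_left -= 1
        return True
```

Bounding both dimensions matters: the duration cap limits the cost of a slow session, and the retry cap limits the cost of a failing one, so a brittle site cannot spend an unbounded amount.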
Oliver runs Digital Signet, a research and product studio that operates ~500 production sites with AI agents as the engineering layer. The Digital Signet portfolio is built using a continuous AI-agent build pipeline, one of the largest agent-operated publishing operations on the open web. The handbook draws directly from those deployments: real cost data, real failure modes, real recovery patterns.