What it actually does
Cursor is a VS Code fork that puts AI editing primitives directly in the editor. The headline features are Composer (multi-file edits driven by a single prompt), the Tab system (predictive autocomplete that often guesses three or four edits ahead), and inline chat (per-file Q&A and edits without leaving the file).
Most reviews on the SERP come from a single engineer who tried Cursor for a week. We have run it across a team that ships. The team signal differs from the solo signal in important ways: tab completion that works for one engineer fails for another, because the suggestions are conditioned on different code styles.
What is good
- Composer for multi-file refactors that fit in the editor context. Faster than Claude Code for edits touching five files or fewer, because there is no terminal hop.
- Tab acceptance rate is high. Across our team we accept roughly 35-45% of tab suggestions, the highest of the three tab-completion systems we have measured (measurement sketch after this list).
- Inline chat for "explain this function" tasks is the workflow most engineers actually use, more than the headline features. The integration is the value.
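How we get the acceptance numbers above: a minimal sketch, assuming a hypothetical event log reconstructed from our own editor telemetry. Cursor exposes no such API; `TabEvent` and its fields are our invention.
```typescript
// Hypothetical event shape, reconstructed from our own telemetry -- not a Cursor API.
interface TabEvent {
  language: string;  // e.g. "typescript", "python", "go"
  accepted: boolean; // true if the engineer kept the suggestion
}
// Acceptance rate per language: suggestions kept / suggestions shown.
function acceptanceByLanguage(events: TabEvent[]): Record<string, number> {
  const shown: Record<string, number> = {};
  const kept: Record<string, number> = {};
  for (const e of events) {
    shown[e.language] = (shown[e.language] ?? 0) + 1;
    if (e.accepted) kept[e.language] = (kept[e.language] ?? 0) + 1;
  }
  const rates: Record<string, number> = {};
  for (const lang of Object.keys(shown)) rates[lang] = (kept[lang] ?? 0) / shown[lang];
  return rates;
}
```
Splitting by language is what surfaced the Go weakness noted in the next section.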
What is broken or surprising
- Composer drifts on tasks larger than ~10 files. Past that ceiling, Claude Code in the terminal is more reliable; our hand-off rule is sketched after this list. Cursor knows this: "agent mode" ships with longer-context affordances, but the experience differs from Claude Code's and we still hand off the long jobs.
- Tab quality varies by language. Strongest in TypeScript and Python. Weaker in Go, in our experience. Plan accordingly.
- Pricing creep at the team tier. The model-cost passthrough is real and should be budgeted as a variable line item, not a fixed one.
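The hand-off rule from the first item, written down. This is our routing heuristic, not a Cursor feature; the ~10-file ceiling is our observation and the threshold is a tunable.
```typescript
// Our routing heuristic, not a Cursor feature. The ceiling is where we start
// seeing Composer drift; tune it against your own failure data.
const COMPOSER_FILE_CEILING = 10;
function chooseTool(filesTouched: number): "cursor-composer" | "claude-code" {
  return filesTouched <= COMPOSER_FILE_CEILING ? "cursor-composer" : "claude-code";
}
```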
When you would choose it
Pick Cursor if your work lives in the IDE and your team accepts a per-seat tool. Pick it for the tab completion alone if your language is well supported. The honest head-to-head against Claude Code lives at claude-code-vs-cursor; the Copilot comparison at cursor-vs-github-copilot.
Skip Cursor if your editor habit is fixed elsewhere. The fork is good but it is still a fork; the editor switch is the cost.
Cost at scale
Subscription per seat plus model passthrough. At our team scale, the model passthrough is roughly 60% of the seat cost on a busy month. Cap by enabling per-seat budgets in the team admin; the cost-cliff failure mode lives in the passthrough, not the seat.
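The budget math, as a minimal sketch. Only the ~60% passthrough ratio comes from our busy-month data; the seat count and seat price below are placeholders.
```typescript
// Placeholders except the ratio: passthrough ran ~60% of seat cost on our busiest month.
const seats = 12;             // placeholder team size
const seatPriceUsd = 40;      // placeholder per-seat subscription
const passthroughRatio = 0.6; // our busy-month observation

const fixedUsd = seats * seatPriceUsd;           // the predictable line
const variableUsd = fixedUsd * passthroughRatio; // the line that creeps
console.log({ fixedUsd, variableUsd, totalUsd: fixedUsd + variableUsd });
// { fixedUsd: 480, variableUsd: 288, totalUsd: 768 }
```
Budget the variable line at its busy-month value, not its average; per-seat caps bound it directly.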
Composer's cost-per-edit is, in our measurement, roughly competitive with Claude Code on equivalent multi-file tasks. The difference is the failure-mode profile: Cursor fails in shorter, more visible bursts; Claude Code fails in longer silent drifts.
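To make "roughly competitive" checkable, the per-edit division we use; the figures below are placeholders, only the method is ours.
```typescript
// Placeholders throughout -- only the division is the method.
// "Landed" means the edit survived review and merged.
const perEditUsd = (modelSpendUsd: number, editsLanded: number) =>
  modelSpendUsd / editsLanded;
console.log(perEditUsd(24, 60)); // hypothetical Composer month: 0.40 per landed edit
console.log(perEditUsd(25, 60)); // hypothetical Claude Code on the same task set
```
Counting landed edits rather than attempts is the important choice: silent drift shows up as spend with nothing landed.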
Oliver runs Digital Signet, a research and product studio that operates ~500 production sites with AI agents as the engineering layer. The portfolio is built on a continuous AI-agent build pipeline and is one of the largest agent-operated publishing operations on the open web. The handbook draws directly from those deployments: real cost data, real failure modes, real recovery patterns.