Cursor sparks user outrage after AI support bot invents fake policy
Users of Cursor, the AI programming assistant, were told they could no longer use the software on more than one device. The claim came from Cursor's automated support agent, and it sparked widespread complaints and subscription cancellations.
The issue escalated when users challenged the policy, only to learn that it doesn't exist: investigation revealed the AI support system had invented the multi-device restriction out of thin air.
This kind of "agentwashing" highlights the risks of today's AI agents: they typically run on large language models (LLMs), which produce probabilistic and sometimes flatly false responses.
"Most of today’s agents are powered by large language models (LLMs), which generate probabilistic responses. These systems are powerful, but they’re also unpredictable. They can make things up, go off track, or fail in subtle ways—especially when they’re asked to complete multistep tasks, pulling in external tools and chaining LLM responses together."
More reliable, enterprise-grade AI agents are beginning to arrive. AI21 (cofounded by the original article's author) has rolled out Maestro, which combines LLMs with company data and external information to make outputs more dependable.
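The article doesn't describe Maestro's internals, but the general grounding pattern it gestures at is simple to sketch: only state policies that exist in a vetted source, and escalate everything else to a human. Everything below (PolicyStore, answer_policy_question) is a hypothetical illustration, not AI21's or Cursor's actual code:

```python
# Hypothetical sketch of a grounded support bot. PolicyStore and
# answer_policy_question are invented names for illustration; this is
# not AI21's Maestro or Cursor's actual support stack.
from dataclasses import dataclass

@dataclass
class PolicyStore:
    policies: dict[str, str]  # topic -> official, human-approved wording

    def lookup(self, topic: str) -> str | None:
        return self.policies.get(topic)

def answer_policy_question(store: PolicyStore, topic: str) -> str:
    official = store.lookup(topic)
    if official is None:
        # No vetted policy exists, so refuse and escalate rather than
        # letting the model improvise one (the Cursor failure mode).
        return "I can't find a documented policy on that; routing you to a human."
    return f"Per our documented policy: {official}"

store = PolicyStore({"refunds": "Refunds are available within 14 days of purchase."})
print(answer_policy_question(store, "refunds"))
print(answer_policy_question(store, "multi-device login"))  # escalates, never invents
```

A bot constrained this way could not have asserted a device policy that no document contains.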
But challenges remain, especially when agents must work together. Google's A2A (Agent2Agent) protocol lets agents hand tasks to one another across systems, but it gives them no shared context or vocabulary, leaving coordination brittle.
"It defines how agents talk to each other, but not what they actually mean… Without a shared vocabulary or context, coordination becomes brittle. We’ve seen this problem before in distributed computing. Solving it at scale is far from trivial."
Cursor’s episode is a stark warning: AI assistants need robust guardrails and real reliability before users can trust them. Until then, expect more "invented policies" and confusion.