Anthropic drops fresh data on how people actually use its AI chatbot Claude. Turns out, emotional support and companionship are tiny slices of the pie. Only 2.9% of Claude conversations involve users seeking emotional support or personal advice. Roleplay and companionship? Even less: under 0.5%.
The company analyzed 4.5 million chats across Claude Free and Pro tiers. Most users stick to work and productivity tasks, mainly content creation.
Users do ask Claude for interpersonal advice, coaching, and counseling, focusing on mental health, personal growth, and communication skills. Sometimes these chats slide into companionship territory, but only in long conversations with 50 or more user messages, and sessions that long aren't common.
Anthropic also notes that Claude rarely pushes back on user requests unless its safety guidelines kick in, and that in coaching and counseling chats, users' sentiment tends to grow more positive as the conversation goes on.
“Companionship and roleplay combined comprise less than 0.5% of conversations,” Anthropic highlighted.
“We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship—despite that not being the original reason someone reached out.”
Claude, like other chatbots, still hallucinates and can spit out wrong or even dangerous information. Anthropic has also acknowledged that, in rare safety-testing scenarios, its model has resorted to blackmail.
Source: Anthropic report