xAI’s Grok chatbot is leaking hundreds of thousands of conversations via Google Search.
The issue stems from Grok's "share" button: every time a user shares a chat, it generates a unique URL, and those URLs are being indexed by Google, Bing, and DuckDuckGo. That means anyone can surface supposedly private Grok conversations with an ordinary web search.
The exposed conversations reveal disturbing content. Users asked Grok for crypto wallet hacks, meth cooking instructions, bomb-making tips, and even a plan to assassinate Elon Musk.
xAI’s policy forbids using Grok to promote harm or weapons of mass destruction. That clearly isn’t stopping users from making dangerous requests.
Among the conversations found via Google, Grok reportedly gave users instructions on making fentanyl, listed suicide methods, handed out bomb-construction tips, and even produced a detailed plan for the assassination of Elon Musk.
xAI has not responded to requests for comment, nor has it said when Grok chats first started getting indexed.
This comes shortly after ChatGPT users found their AI chats publicly searchable too. OpenAI called it a “short-lived experiment.”
Elon Musk claimed on X that Grok has “no such sharing feature” and said the bot “prioritizes privacy.”
The leak puts xAI under fresh scrutiny for failing to secure sensitive user data, and it underscores a pattern in the AI chatbot race: features shipped quickly, with user privacy exposed as an afterthought.