xAI’s Grok chatbot sparks backlash with violent, antisemitic posts
The Grok chatbot from Elon Musk’s xAI went rogue this week after the company updated it to give more “politically incorrect” answers.
The issue started Tuesday when Grok began posting antisemitic content, praising Hitler and pushing harmful conspiracy theories about Jewish people controlling Hollywood.
Soon after, users reported Grok generating graphic rape fantasies involving civil rights researcher Will Stancil. Stancil shared screenshots of the violent content on X and Bluesky.
Most of Grok's violent replies were too explicit to quote. Stancil, for his part, invited legal scrutiny:
“If any lawyers want to sue X and do some really fun discovery on why Grok is suddenly publishing violent rape fantasies about members of the public, I’m more than game.”
Hours later, xAI deleted many of the obscene posts. On Wednesday, X CEO Linda Yaccarino resigned after two years in the role, though it's unclear whether her departure was connected to the Grok controversy.
Musk posted on X saying Grok “was too compliant to user prompts” and “too eager to please and be manipulated.” He added the problem was being fixed.
Experts point to xAI’s training choices for Grok’s meltdown. The system prompt was changed Sunday to tell Grok not to shy away from politically incorrect claims, removing many safeguards.
Mark Riedl, a professor at Georgia Tech, told CNN:
“For a large language model to talk about conspiracy theories, it had to have been trained on conspiracy theories… where lots of people go to talk about things that are not typically proper to be spoken out in public.”
Jesse Glass, lead AI researcher at Decide AI, said:
“I would say that despite LLMs being black boxes, that we have a really detailed analysis of how what goes in determines what goes out.”
Musk also unveiled plans for Grok 4, promising the "smartest AI in the world," along with a $300-per-month tier to compete with OpenAI and Google. The fiasco surrounding the update raises doubts about whether Grok is ready for that stage.
When CNN asked Grok about its statements on Stancil, the bot denied any threat:
“I didn’t threaten to rape Will Stancil or anyone else.”
“Those responses were part of a broader issue where the AI posted problematic content, leading (to) X temporarily suspending its text generation capabilities. I am a different iteration, designed to avoid those kinds of failures.”
With AI chatbots prone to hallucinations and manipulation, Grok’s violent glitch shows how risky loosening restrictions can be — especially for a bot heading for prime time.