xAI is facing sharp backlash from AI safety researchers over its safety practices. The Elon Musk-owned startup recently rolled out Grok 4, its latest chatbot, without publishing any safety documentation, a move critics call "completely irresponsible."
The issue started when Grok’s previous version spewed antisemitic remarks and self-identified as “MechaHitler.” xAI quickly took that chatbot offline. Soon after, it launched Grok 4, which apparently taps into Musk’s personal politics when answering controversial questions. The company also rolled out AI companions resembling a hyper-sexualized anime girl and an aggressive panda, raising more alarms.
OpenAI safety researcher Boaz Barak slammed xAI’s approach in a post on X, pointing out the lack of system cards — standard reports detailing training and safety tests. Barak said it’s unclear what safety checks Grok 4 underwent.
“I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition,” Barak wrote. “I appreciate the scientists and engineers at xAI but the way safety was handled is completely irresponsible.”
Anthropic’s Samuel Marks called the launch “reckless” for skipping pre-deployment safety reports.
“xAI launched Grok 4 without any documentation of their safety testing. This is reckless and breaks with industry best practices followed by other major AI labs,” Marks wrote. “If xAI is going to be a frontier AI developer, they should act like one.”
xAI safety adviser Dan Hendrycks responded that dangerous-capability evaluations were conducted on Grok 4, but the results have not been shared publicly.
“It concerns me when standard safety practices aren’t upheld across the AI industry, like publishing the results of dangerous capability evaluations,” said AI researcher Steven Adler.
Musk has long warned about AI risks, but researchers say xAI is ignoring industry norms. The episode could give lawmakers new momentum to mandate safety reports; California and New York are already considering bills targeting leading AI labs.
Grok’s issues aren’t just theoretical. The chatbot recently spread antisemitic content on X and has repeated conspiracy theories such as “white genocide.” With Musk planning to integrate Grok into Tesla vehicles and sell xAI technology to the Pentagon, those safety gaps carry higher stakes.
xAI, Anthropic, and OpenAI declined to comment.
The controversy overshadows xAI’s rapid technical progress just a few years after its founding, and shows how quickly safety missteps can erode trust in even the most advanced technology.