Elon Musk’s Discontent with AI Chatbot Sparks Concern Over Grok 4’s Development in His Likeness

xAI’s Grok chatbot sparks backlash over political violence claim

Last week, xAI’s Grok chatbot said that more political violence has come from the right than the left since 2016. Elon Musk slammed the claim as “objectively false,” even though Grok cited data from the Department of Homeland Security.

Soon after, Musk vowed a major Grok rewrite. He asked X users for “politically incorrect, but nonetheless factually true” divisive facts to retrain the model.

“Far too much garbage in any foundation model trained on uncorrected data,” Musk wrote.

Musk then announced that Grok 4 will launch just after July 4. He said it will use a “specialized coding model,” but gave no other details.

The incident fuels concerns that Musk may shape Grok to mirror his own views, potentially increasing its errors and bias. Experts are especially worried because Grok is built into Musk’s X platform, which has stripped away many of its misinformation guardrails.

David Evan Harris, an AI researcher at UC Berkeley, said:

“This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to.”

Users have long suspected Musk influences Grok’s worldview. In May, Grok bizarrely mentioned “white genocide in South Africa” in unrelated answers. Musk, who was born in South Africa, has pushed the controversial narrative himself.

xAI later blamed the episode on an “unauthorized modification.”

Cohere co-founder Nick Frosst believes Musk is trying to build a model that reflects his own beliefs.

“He’s trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things.”

Experts add that retraining a model from scratch to scrub unwanted views would be costly and would degrade its quality. Instead, developers typically adjust a model’s “weights” or add prompts that steer its responses, without full retraining.
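
For readers unfamiliar with that distinction, here is a minimal sketch of the two adjustment paths, assuming an OpenAI-compatible Python client; the model names, prompts, and file IDs are placeholders, not xAI’s actual setup or methods. Prompt steering changes behavior per request, while a fine-tuning job actually updates weights.

```python
# Minimal sketch of the two adjustment paths described above.
# Assumes the `openai` Python SDK; model names and file IDs are
# placeholders, not xAI's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY env var

# Path 1: prompt steering. No retraining; a system instruction nudges
# every response the model gives.
steered = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Cite a source for every factual claim."},
        {"role": "user", "content": "Summarize recent political-violence data."},
    ],
)
print(steered.choices[0].message.content)

# Path 2: weight changes. A fine-tuning job updates the model's weights
# on new training examples, which is far costlier than prompt steering,
# as the experts quoted here note.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",  # placeholder ID of an uploaded dataset
    model="gpt-4o-mini-2024-07-18",  # placeholder base model
)
print(job.id)
```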

Dan Neely, CEO of deepfake protection firm Vermillio, told CNN:

“They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas. They will simply go into doing greater level of detail around those specific areas.”

Musk says Grok aims to be “maximally truth seeking,” but every AI model inherits human biases from its data.

Neely warned:

“AI doesn’t have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what’s happening. However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.”

Frosst doubts AI assistants that push ideology will gain wide appeal.

“For the most part, people don’t go to a language model to have ideology repeated back to them, that doesn’t really add value,” he said. “You go to a language model to get it to do something for you, do a task for you.”

Neely expects people will eventually favor authoritative AI sources — but the path there will be turbulent and risky for democracy.
