Doctors Shocked as Google’s Healthcare AI Invents Nonexistent Human Body Part

Google’s Med-Gemini AI flubs brain scan report with a fake “basilar ganglia” typo

Google’s healthcare AI model, Med-Gemini, slipped up big time. In a May 2024 research paper, it reported an “old left basilar ganglia infarct” — referencing a brain part that doesn’t exist. The real term is “basal ganglia.” The mix-up went unnoticed for over a year.

The error was flagged by neurologist Bryan Moore, who told The Verge that Google fixed the typo in its blog post but left the official research paper unchanged. Google says the misspelling was picked up from training data, but the blunder exposes serious risks of AI “hallucinations” in medicine.

The issue appears to stem from Med-Gemini conflating the “basal ganglia,” a brain region involved in motor control, with the “basilar artery,” a major blood vessel at the base of the brain, producing the nonexistent “basilar ganglia.” That’s a crucial distinction. Experts warn these kinds of mistakes could lead to dangerous misdiagnoses if AI tools are used in hospitals without proper oversight.

Maulin Shah, Providence’s chief medical information officer, told The Verge:

“What you’re talking about is super dangerous. Two letters, but it’s a big deal.”

Google has promoted Med-Gemini’s potential to detect conditions from X-rays and CT scans. But this slip-up reveals the model’s persistent problem: confidently spouting false information without admitting uncertainty.

Judy Gichoya, Emory University associate professor of radiology and informatics, said:

“Their nature is that [they] tend to make up things, and it doesn’t say ‘I don’t know,’ which is a big, big problem for high-stakes domains like medicine.”

Google’s broader AI healthcare efforts also face trouble. Its newer model, MedGemma, delivers inconsistent answers depending on how questions are phrased. Still, Google is pushing ahead. In March, the company announced that its shaky AI Overviews search feature would start giving health advice, alongside a research AI aimed at drug discovery.

Experts say the risks are real and growing. AI hallucinations in clinical settings could confuse or endanger patients. Human oversight is mandatory — but may slow workflows.

Shah added:

“The problem with these typos or other hallucinations is I don’t trust our humans to review them, or certainly not at every level.”

“In my mind, AI has to have a way higher bar of error than a human. Maybe other people are like, ‘If we can get as high as a human, we’re good enough.’ I don’t buy that for a second.”

Google’s Med-Gemini stumble is a stark warning: AI is not ready to replace doctors anytime soon. Hallucinations aren’t harmless typos — they could cost lives.
