FDA’s Artificial Intelligence Claims to Transform Drug Approvals but Fabricates Studies

FDA’s AI tool Elsa sparks internal doubts over reliability

The FDA’s AI assistant, Elsa, meant to speed drug and device approvals, is raising alarms inside the agency. Launched in June, Elsa was hyped by FDA Commissioner Marty Makary and Health and Human Services Secretary Robert F. Kennedy Jr. as a game-changer for faster reviews and cutting red tape. But FDA insiders tell CNN the tool often hallucinates — inventing fake studies and misrepresenting data.

Problems surfaced within weeks of the rollout. Six current and former FDA officials told CNN that Elsa is useful only for meeting notes and basic summaries. For critical reviews, it is unreliable and requires constant fact-checking.

“Anything that you don’t have time to double-check is unreliable. It hallucinates confidently,” one employee said.

“AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have,” added another.

Elsa can’t access many of the key documents needed for drug safety assessments, limiting its usefulness in reviews. The FDA’s AI head, Jeremy Walsh, admitted Elsa “could potentially hallucinate” and said upcoming updates will let users upload documents to improve accuracy.

Makary downplayed concerns, emphasizing that Elsa is optional and staff don’t have to use it.

“I have not heard those specific concerns, but it’s optional,” Makary said.

Walsh showed CNN Elsa’s plain interface, which answers queries by pulling internal FDA papers — but it sometimes miscounted products or got drug classifications wrong. It even admitted mistakes when challenged.

“Some of those responses don’t surprise me at all. But what’s important is … how we address those gaps in the capability,” Walsh said.

Elsa was built on an AI model begun during the Biden administration and fast-tracked under Trump. About half of FDA staff have tried Elsa, but adoption remains limited and feedback mixed.

Meanwhile, the US lacks federal AI regulations. Congress has held hearings but passed no major AI oversight laws. The European Union already has the AI Act, setting strict rules for healthcare AI.

“AI does a lot of stuff, but it’s not magic,” said Stanford medicine professor Dr. Jonathan Chen. “It’s really kind of the Wild West right now.”

The FDA faces pressure to fix Elsa’s flaws fast. The agency plans continued updates and training but admits full AI integration into critical reviews remains a work in progress.


Watch FDA leaders discuss Elsa and AI in healthcare:

FDA Commissioner Dr. Marty Makary says most scientists use Elsa for “organization abilities,” not critical review. The AI still can’t fully access all required documents. The timeline for a fix? Weeks away, says AI lead Jeremy Walsh. Meanwhile, staff are urged to double-check Elsa’s outputs.

The rollout soundbite:

“They don’t have to use Elsa if they don’t find it to have value.” – Marty Makary

The FDA is betting on Elsa to shorten review cycles, but it will have to solve the hallucination problem first. AI hype meets hard reality on the front lines of US drug regulation.
