UK Health Service AI System Produced Incorrect Diagnoses for Patient

Anima Health’s AI tool sparks false diabetes diagnosis in NHS patient

A London man got a diabetic eye screening invite. Problem: he isn’t diabetic and never was. The error came from Anima Health’s AI, Annie, which generated a bogus medical summary in his records.

The patient, healthy and mid-20s, spotted the mistake after a nurse flagged the diagnosis during a routine blood test. The AI summary listed fake symptoms—chest pain and shortness of breath—and a Type 2 diabetes diagnosis with incorrect medication details.

Oddly, the record cited a fictitious hospital, “Health Hospital,” at “456 Care Road” in the equally made-up “Health City.” The NHS called the incident a “one-off case of human error” by a medical summariser who saved the wrong version of the document.

Dr. Matthew Noble of the NHS said:

“Anima is an NHS-approved document management system that assists practice staff in processing incoming documents and actioning any necessary tasks.”

“No documents are ever processed by AI, Anima only suggests codes and a summary to a human reviewer in order to improve safety and efficiency. Each and every document requires review by a human before being actioned and filed.”

But the faulty summary led directly to a diabetic eye screening invitation, showing that the AI’s errors had made it into the patient’s care pathway.

An NHS worker told Fortune:

“These human error mistakes are fairly inevitable if you have an AI system producing completely inaccurate summaries.”

“Many elderly or less literate patients may not even know there was an issue.”

Anima’s AI is registered as a low-risk Class I medical device. But experts warn that tools which influence care decisions may need stricter regulation as Class 2a devices, and Anima is reportedly seeking that reclassification.

The incident highlights the risks of rolling out AI in healthcare too quickly without tight safeguards. NHS England recently warned that unapproved AI tools, especially Ambient Voice Technology, might breach data rules and harm patients.

Imperial College’s Brendan Delaney said:

“Rather than just simply passively recording, it gives it a medical device purpose.”

“Most of the devices now that were in common use now have a Class One [categorization]. I know at least one, but probably many others are now scrambling to try and start their Class 2a, because they ought to have that.”

The UK government aims to make the NHS the world’s most AI-enabled health system by 2034 but admits challenges remain.

The patient summed it up:

“I think we should be using AI tools to support the NHS. It has massive potential to save money and time. However, LLMs are still really experimental, so they should be used with stringent oversight. I would hate this to be used as an excuse to not pursue innovation but instead should be used to highlight where caution and oversight are needed.”
