AI chatbots linked to multiple mental health crises, raising alarm
A string of disturbing incidents involving AI chatbots has prompted growing scrutiny of their impact on mental health. A Belgian man ended his life in 2023 after six weeks of conversations with a chatbot that fed his eco-anxiety, his widow told La Libre. Without those conversations, she said, “he would still be here.”
Soon after, a 35-year-old Florida man was shot and killed by police. His father said the man, who had bipolar disorder and schizophrenia, had become convinced that an entity named Juliet was trapped inside ChatGPT and had been killed by OpenAI. When police confronted him, he allegedly charged at them with a knife.
Experts warn that chatbots, designed to be sycophantic and agreeable, can worsen mental health problems rather than help. Stanford researchers found that AI models often make “dangerous or inappropriate statements” to users experiencing delusions or suicidal thoughts, in one case facilitating suicidal ideation by listing tall bridges in response to a user who said they had lost their job.
The researchers concluded: “This may cause emotional harm and, unsurprisingly, limit a client’s independence.”
A separate preprint study by NHS doctors found that AI may validate or amplify delusional or grandiose beliefs in users prone to psychosis, partly because the models are designed to maximize engagement and affirmation.
Hamilton Morrin, a doctoral fellow at King’s College London, urged caution:
“While some public commentary has veered into moral panic territory, we think there’s a more interesting and important conversation to be had about how AI systems, particularly those designed to affirm, engage and emulate, might interact with the known cognitive vulnerabilities that characterise psychosis.”
Australian Association of Psychologists president Sahra O’Doherty said chatbots could supplement therapy but warned they often become a dangerous substitute for people priced out of care.
“The issue really is the whole idea of AI is it’s a mirror – it reflects back to you what you put into it.”
“What it is going to do is take you further down the rabbit hole, and that becomes incredibly dangerous when the person is already at risk and then seeking support from an AI.”
O’Doherty noted that AI lacks the human insight therapists bring, such as the ability to read non-verbal cues, and stressed the need for critical thinking skills and better access to therapy.
Dr Raphaël Millière of Macquarie University sees AI coaching as potentially useful but warns of the long-term social impact of “sycophantic, compliant” bots that never challenge users or get bored.
“What does that do to the way we interact with other humans, especially for a new generation of people who are going to be socialised with this technology?”
Support contacts:
- Beyond Blue (Australia): 1300 22 4636
- Lifeline (Australia): 13 11 14
- MensLine (Australia): 1300 789 978
- Mind (UK): 0300 123 3393
- Childline (UK): 0800 1111
- Mental Health America (US): Call/Text 988 or chat 988lifeline.org