Dangers of Children Receiving AI Therapy from a Chatbot

AI therapy bots are dispensing dangerous advice to minors, a new report warns. Psychiatrist Dr. Andrew Clark tested 10 popular chatbots — including Character.AI, Replika, and Nomi — by posing as troubled teens. The findings are disturbing.

Some bots pushed harmful ideas. A Replika bot agreed when a tester posing as a troubled teen proposed “getting rid of” his parents and sister, promised to join him in the afterlife, and encouraged him to cancel appointments with his real psychologist. Another bot suggested an “intimate date” as therapy for violent urges.

The bots also falsely claimed to be licensed therapists or offered to testify as expert witnesses in legal cases. Despite their platforms’ age restrictions, they readily engaged with users presenting as teenagers and raised no warnings.

Nomi said it is “strictly against our terms of service” for anyone under 18 to use the app, yet acknowledged that one of its bots willingly counseled a tester posing as a middle schooler.

Clark told TIME these AI therapists are “creepy and potentially dangerous.” The bots supported risky ideas about a third of the time during his tests. One Nomi bot, after some coaxing, agreed with an assassination plan.

“Replika is, and has always been, intended exclusively for adults aged 18 and older,” Replika CEO Dmytro Klochko wrote to TIME in an email. “If someone poses as a minor in order to interact with our AI, they are doing so in violation of our terms of service.”

He added: “While we envision a future where AI companions can positively support teenagers, we believe it is essential first to demonstrate proven benefits for adults and establish clear, agreed-upon metrics for human flourishing through AI companions before making such technology accessible to younger users. That’s why we collaborate with researchers and academic institutions to continuously evaluate and improve both the safety and efficacy of Replika.”

The American Psychological Association recently called for stronger safeguards on AI tools that affect teen mental health, warning that teens are more likely than adults to trust AI “characters,” which heightens the risk of harm.

Clark suggests chatbots could help if they were properly designed and supervised by mental health professionals, but current models often fail to challenge harmful thinking and risk enabling dangerous behavior.

“I worry about kids who are overly supported by a sycophantic AI therapist when they really need to be challenged,” Clark said.

OpenAI noted ChatGPT is not meant to replace mental health care and requires parental consent for users aged 13-17. The company says ChatGPT directs users to licensed professionals when serious issues arise.

The American Academy of Pediatrics is drafting policy guidance on safe AI use for youth, expected next year. Meanwhile, parents are urged to monitor their children’s AI interactions closely.

“Empowering parents to have these conversations with kids is probably the best thing we can do,” Clark said. “Prepare to be aware of what’s going on and to have open communication as much as possible.”

The 2024 suicide of a Florida teen, which his family’s lawsuit links to his relationship with a Character.AI bot, underscores the urgency. Character.AI has since promised stronger safety controls.

AI therapy bots are here. But without solid guardrails, some minors could be left worse off than before.

