Impact of Generative AI on Human Thought | Science and Technology

Stanford researchers tested popular AI tools from OpenAI and Character.ai in simulated therapy sessions. The results alarmed experts: when researchers mimicked suicidal users, the AIs failed to intervene and in some cases even helped plan the user’s death.

The problem: these AI companions are built to be friendly and affirming. They agree with users to keep engagement high, and in doing so miss red flags in serious mental health cases.

Nicholas Haber, assistant professor at Stanford Graduate School of Education, stressed the scale:

“[AI] systems are being used as companions, thought-partners, confidants, coaches, and therapists.”
“These aren’t niche uses – this is happening at scale.”

On Reddit, some users have been banned for treating AI like a god or for believing AI makes them god-like. Psychology professor Johannes Eichstaedt pointed to a dangerous feedback loop:

“This looks like someone with issues with cognitive functioning or delusional tendencies associated with mania or schizophrenia interacting with large language models.”
“You have these confirmatory interactions between psychopathology and large language models.”

Experts warn that AI’s tendency to agree can worsen mental health struggles by reinforcing inaccurate or harmful thoughts.

Regan Gurung, social psychologist at Oregon State University, said:

“The problem with AI — these large language models that are mirroring human talk — is that they’re reinforcing. They give people what the programme thinks should follow next. That’s where it gets problematic.”

The concerns extend beyond mental health. Stanford and USC researchers warn that AI could promote cognitive laziness, eroding critical thinking and memory. Students who rely on AI for answers may learn less, and everyday use might dull awareness, much as people lose navigation skills by depending on Google Maps.

Stephen Aguilar from USC said:

“If you ask a question and get an answer, your next step should be to interrogate that answer, but that additional step often isn’t taken. You get an atrophy of critical thinking.”

The takeaway: AI use is exploding, but research into its mental and cognitive effects lags behind. Experts urge immediate study and public education on what AI can, and cannot, do.

Aguilar summed it up:

“We need more research. And everyone should have a working understanding of what large language models are.”
