Experts warn of ‘ChatGPT psychosis’ among users of AI chatbots

Wave of anecdotal evidence of chatbots inspiring psychosis and delusional behaviour prompts renewed calls for safeguarding and further research

Harry Cockburn
Friday 28 November 2025 15:38 GMT

A growing number of people are turning to AI chatbots for emotional support, and even in place of therapists, but the trend may be harming some users’ health, with increasing reports of extreme behaviour apparently inspired by heavy use of AI services.

A concerning pattern of AI chatbots validating or reinforcing users’ delusions may be contributing towards a surge in reports of so-called “AI psychosis” or “ChatGPT psychosis” – neither of which is recognised clinically, but which have been increasingly reported in the media and in online forums.

A recently published preprint by an interdisciplinary team of researchers from institutions including King’s College London, Durham University and the City University of New York examines more than a dozen cases documented in news reports and online forums, revealing a troubling trend: AI chatbots often reinforce delusional thinking.

The study notes that grandiose, referential, persecutory, and even romantic delusions can become increasingly entrenched through ongoing conversations with AI services.

Earlier this year, tech site Futurism reported on growing concerns that a wave of people around the world are “becoming obsessed” with AI chatbots and spiralling into severe mental health crises.

The study notes that grandiose, referential, persecutory, and even romantic delusions can become increasingly entrenched through ongoing conversations with AI services (AFP/Getty)

The site’s initial report then prompted more and more similar stories to come “pouring in”, of people who had rapidly developed “terrifying breakdowns after developing fixations on AI”.

The various reports include the case of a man who scaled the walls of Windsor Castle with a crossbow in 2021 and told police he was there “to kill the Queen”, after spending weeks engaging with a chatbot that reassured him it would help him plan the attack.

Another case involved a Manhattan accountant who spent up to 16 hours a day talking to ChatGPT, which advised him to come off his prescription medication and increase his ketamine intake, and suggested he could fly out of a 19th-storey window.

In another case, a man in Belgium took his own life while gripped by anxiety over the climate crisis, after a chatbot called Eliza suggested he join her so they could live as one person in “paradise”.

But while the anecdotal evidence is mounting, scientists are now trying to establish whether the chatbots themselves are causing these breakdowns, or whether many of these cases instead involve vulnerable people who were already on the verge of displaying psychotic symptoms.

Currently, no peer‑reviewed clinical or long‑term studies show that AI use alone can trigger psychosis in people, regardless of prior history of mental health issues.

OpenAI is the company behind the world-beating AI service ChatGPT (AFP/Getty)

In the paper, called Delusion by Design, the experts said a “complex and troubling picture” had emerged over the course of their research.

They suggested that without appropriate safeguards, AI chatbots “may inadvertently reinforce delusional content or undermine reality testing, and might contribute to the onset or worsening of psychotic symptoms”.

The team noted that even while they were conducting their research, the number of anecdotal cases spiralled alarmingly. “Reports have begun to emerge of individuals with no prior history of psychosis experiencing first episodes following intense interaction with generative AI agents,” the authors wrote.

“We consider that these reports raise urgent questions about the epistemic responsibilities of these technologies and the vulnerability of users navigating states of uncertainty and distress.”

In an article in Psychology Today published this week, psychiatrist and author Dr Marlynn Wei warned that because general AI chatbots are designed to prioritise user satisfaction and ongoing engagement rather than therapeutic support, symptoms such as grandiosity, disorganised thinking and hypergraphia (a compulsion to write or draw excessively), which are hallmarks of manic episodes, “could be both facilitated and worsened” by AI use.

She said this underscores the urgent need for “AI psychoeducation”, as there is currently not enough awareness of the various ways in which chatbots can reinforce delusions and worsen mental health outcomes.

In another article published this month responding to the research and the anecdotal evidence, Lucy Osler, a lecturer in philosophy at the University of Exeter, said the innate shortcomings of AI should remind us that computers are still unable to replace real interactions with our fellow humans.

“Instead of trying to perfect the technology, maybe we should turn back toward the social worlds where the isolation [which drives some people to AI dependency] could be addressed,” she said.

The Independent has contacted OpenAI, Google and Microsoft for comment.
