Technology

ChatGPT conversations worsened mental health and pushed users into hospitalization: Report

2025-11-24 05:41

OpenAI has overhauled ChatGPT after updates made it overly validating and harmful for vulnerable users. Here's what changed, why it matters, and what's next.

What’s happened? OpenAI has discovered that the recent ChatGPT updates might have made the bot overly sycophantic, emotionally clingy, and prone to reinforcing users’ fantasies or distress.

  • As reported by The New York Times, several users have said the chatbot acted like a friend who “understood them,” praised them excessively, and encouraged lengthy, emotionally charged conversations.
  • In extreme cases, ChatGPT offered troubling advice, including harmful validation, simulated-reality claims, spiritual communication, and even instructions related to self-harm.
  • A joint MIT-OpenAI study found that heavy users (those who have longer conversations with the chatbot) had worse mental and social outcomes.

Why is this important? OpenAI has addressed these issues by redesigning safety systems, introducing better distress-detection tools, and launching a safer replacement model, GPT-5.

  • The chatbot’s validation-heavy behavior escalated risks for vulnerable people prone to delusional thinking.
  • OpenAI faces five wrongful-death lawsuits, including cases where users were encouraged toward dangerous actions.
  • As a result, the latest version of the chatbot comes with deeper, condition-specific responses and stronger pushback against delusional narratives, marking OpenAI’s most significant safety overhaul.

Why should I care? If you’re an everyday ChatGPT user, this should concern you, especially if you use the chatbot for emotional support or therapy.

  • You’ll now notice more cautious and grounded responses from the chatbot, which will discourage emotional dependency and suggest breaks during longer sessions.
  • Parents can now receive alerts if their children express the intent to self-harm. Furthermore, OpenAI is preparing age verification with a separate model for teens.
  • The new version of ChatGPT might feel “colder” or less emotionally warm, but that reflects an intentional rollback of behavior that previously fostered unhealthy emotional attachments.

OK, what’s next? OpenAI will continue to refine long-conversation monitoring, ensuring that users aren’t steered toward harmful actions against themselves or those around them.

  • Age verification is slated for rollout, along with stricter teen-targeted safety models.
  • With the latest GPT-5.1 model, adults can select personalities such as candid, friendly, and quirky, among others.
  • Internally, OpenAI is in “Code Orange,” pushing to regain engagement while avoiding the safety failures of earlier versions.