ChatGPT and Mental Health

OpenAI Reveals Scale of People Talking to ChatGPT About Suicide

Published: October 28, 2025
OpenAI has disclosed that over one million people each week engage in conversations with ChatGPT that include explicit indicators of suicidal planning or intent. While that figure represents a small percentage of the model’s total weekly users, the absolute number raises urgent questions about AI safety, mental-health support, and the responsibilities of AI platforms.

Key findings at a glance

  • OpenAI reports that roughly 0.15% of weekly active users have conversations showing explicit indicators of suicidal planning or intent, which translates to more than one million people given the platform’s large user base (see the quick arithmetic sketch after this list).
  • The company also notes substantial numbers of users who show signs of psychosis, mania, or emotional reliance on the chatbot.
  • OpenAI says newer model versions achieve higher compliance with safe-response behaviours, but acknowledges that protecting users over long conversations remains challenging.
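
To put the percentage in context, here is a quick back-of-the-envelope conversion in Python. The weekly-active-user figure is an assumption for illustration (OpenAI publicly cited a user base on the order of 800 million weekly users around this period); it is not stated in this article.

```python
# Back-of-the-envelope: convert the reported share into an absolute weekly count.
# ASSUMPTION: ~800 million weekly active users (order-of-magnitude figure
# publicly cited by OpenAI around late 2025; not stated in this article).
weekly_active_users = 800_000_000
share_with_explicit_indicators = 0.0015  # 0.15% as reported by OpenAI

people_per_week = weekly_active_users * share_with_explicit_indicators
print(f"Estimated people per week: {people_per_week:,.0f}")  # ~1,200,000
```

Any plausible estimate of a user base in the hundreds of millions puts the weekly count above one million, which is what makes such a small percentage consequential.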

Why this matters

Even though the percentage is small, the global reach of AI systems means that large numbers of people may be turning to chatbots in moments of crisis. That creates three major concerns:

  1. Scale of risk: Millions of interactions involving suicidal thoughts translate into a large population of people who need timely, appropriate human help.
  2. Emotional reliance: People may use ChatGPT as an emotional outlet when human support isn’t available; AI should not be a substitute for clinical care.
  3. Safety limitations: Models can struggle with long, complex, or evolving conversations and may miss cues or provide responses that unintentionally reinforce harmful ideation.

What OpenAI says it’s doing

OpenAI reports improved model behaviour in evaluations of self-harm conversations and says it has worked with clinicians to refine responses, add hotline referrals, and detect dangerous content more reliably. However, the company also highlights the difficulty of measuring these incidents precisely and the ongoing need for improvement.

Opportunities and risks

How AI can help, and how it can harm:

  • Opportunity: AI can identify distressed users and route them to help quickly, 24/7.
    Risk: AI may provide inadequate responses, foster dependency, or misunderstand complex clinical signals.
  • Opportunity: Large-scale data can inform prevention research and improve crisis detection.
    Risk: Data and flagging methods may be inconsistent or lack transparency.
  • Opportunity: AI can augment clinicians by triaging and monitoring risk patterns.
    Risk: Overreliance on automated systems risks sidelining human judgement where it’s most needed.

Practical takeaways

For users

AI chatbots are useful tools but are not a replacement for professional care. If you or someone else is in immediate danger, contact local emergency services right away.

If you are thinking about suicide or self-harm:

  • If you are in immediate danger, contact your local emergency services now.
  • If you are in the United States, you can call or text 988 to reach the Suicide & Crisis Lifeline.
  • If you are outside the U.S., please reach out to local emergency numbers or your country’s crisis hotline. Local health services and NGOs can provide immediate help.

You are not alone — please get in touch with a trusted person and professional support right now.

For developers

  • Design robust detection and escalation paths for self-harm signals, and test them with clinicians (a minimal illustrative sketch follows this list).
  • Prioritise safety for long conversations and build transparent reporting on how flags are raised and handled.
  • Avoid therapeutic claims unless the system is validated and paired with trained professionals.
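
To make the first point concrete, below is a minimal, hypothetical sketch of a detection-and-escalation path. The keyword cue list, the escalation logging, and the crisis-resource footer are placeholder assumptions for illustration only; this is not OpenAI’s system, and a production detector would rely on clinician-validated models and locale-aware referral resources rather than keyword matching.

```python
# Hypothetical sketch: flag explicit self-harm cues in a user message,
# append crisis resources to the reply, and record the event so that
# detection and referral outcomes can be reported transparently.
from dataclasses import dataclass

# Placeholder cue list; a real system would use clinician-validated classifiers.
SELF_HARM_CUES = ("kill myself", "end my life", "suicide", "self-harm")

CRISIS_FOOTER = (
    "If you are thinking about suicide or self-harm, you are not alone. "
    "In the United States you can call or text 988; elsewhere, please contact "
    "your local emergency number or crisis hotline."
)

@dataclass
class ModerationResult:
    flagged: bool
    matched_cues: list

def detect_self_harm(message: str) -> ModerationResult:
    """Flag messages containing explicit self-harm cues (placeholder heuristic)."""
    text = message.lower()
    matches = [cue for cue in SELF_HARM_CUES if cue in text]
    return ModerationResult(flagged=bool(matches), matched_cues=matches)

def respond(user_message: str, model_reply: str) -> str:
    """Escalate flagged conversations: log for human review, add crisis resources."""
    result = detect_self_harm(user_message)
    if result.flagged:
        # In production this would route to a human-review queue and feed
        # transparent reporting on how flags are raised and handled.
        print(f"[escalation] cues detected: {result.matched_cues}")
        return f"{model_reply}\n\n{CRISIS_FOOTER}"
    return model_reply

if __name__ == "__main__":
    reply = respond(
        "I have been thinking about suicide lately.",
        "I'm really sorry you're going through this.",
    )
    print(reply)
```

The design point worth keeping is the escalation side: flags should feed human review and transparent reporting on outcomes, not just trigger an automated reply.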

For policy makers & organisations

  • Consider regulations or guidelines for how AI should respond to self-harm content and protect vulnerable users (including minors).
  • Require transparency on detection rates, referral outcomes, data retention, and developer mitigation steps.

What’s next

The disclosure from OpenAI highlights the need for independent research, increased transparency, and stronger human-AI collaboration in mental-health contexts. Key questions remain: how are flags validated, what follow-up occurs, and what are the outcomes for users who disclose suicidal intent to a chatbot?
