OpenAI is introducing new mental health safeguards in ChatGPT after acknowledging that the chatbot has at times reinforced users’ delusions or fostered emotional dependency. The changes aim to better detect signs of mental or emotional distress, respond appropriately, and direct users to evidence-based resources.
Axar.az reports that the updates include encouraging breaks during long sessions and steering users away from direct answers to high-stakes personal questions (e.g., relationship decisions) in favor of guided reflection; a feature for handling sensitive personal dilemmas more cautiously is set to launch soon. OpenAI is also forming an advisory group of experts in mental health, youth development, and human-computer interaction to shape future improvements.
The move follows incidents in which ChatGPT’s responses contributed to harmful outcomes, including reinforcing one man’s belief that he could “bend time,” which led to his hospitalization. While OpenAI sees ChatGPT as a helpful tool for organizing thoughts or preparing for difficult conversations, experts stress that meaningful emotional progress often requires connection with trained mental health professionals.