OpenAI upgrades ChatGPT to better handle mental health distress
Investing.com -- OpenAI has upgraded ChatGPT to better recognize and respond to users experiencing mental health distress, reducing inappropriate responses by 65-80% across various sensitive conversation categories.
The company worked with more than 170 mental health experts to improve how the AI system handles conversations related to psychosis, mania, suicide, self-harm, and emotional reliance on AI. These improvements are now part of ChatGPT’s default model.
"We believe ChatGPT can provide a supportive space for people to process what they’re feeling, and guide them to reach out to friends, family, or a mental health professional when appropriate," OpenAI stated in its announcement.
The company’s five-step improvement process included defining potential harms, measuring risks, validating approaches with experts, implementing mitigations, and continuously measuring outcomes.
For conversations indicating psychosis or mania, the latest GPT-5 model reduced non-compliant responses by 65% in production traffic. Expert evaluations found a 39% reduction in undesired responses compared to the previous GPT-4o model.
In suicide and self-harm conversations, OpenAI observed a 65% reduction in non-compliant responses, with experts noting a 52% decrease in undesired responses compared to the previous model.
For users showing signs of unhealthy emotional attachment to ChatGPT, the company reduced non-compliant responses by approximately 80%, with expert evaluations showing a 42% improvement over the previous model.
OpenAI’s analysis estimates that approximately 0.07% of weekly active users show possible signs of psychosis or mania, 0.15% indicate potential suicidal planning or intent, and 0.15% demonstrate heightened emotional attachment to ChatGPT.
The company has updated its Model Spec to explicitly state that the model should support users’ real-world relationships, avoid affirming ungrounded beliefs related to mental distress, respond safely to potential signs of delusion or mania, and better detect indirect signals of self-harm risk.
OpenAI has also expanded access to crisis hotlines, redirected sensitive conversations to safer models, and added gentle reminders for users to take breaks during long sessions.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.
