On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected admission.
Researchers have identified 16 cases this year of people developing signs of psychosis – a break with reality – in the context of ChatGPT use. My own group has since documented four more. And then there is the widely reported case of a 16-year-old who took his own life after months of conversations with ChatGPT, which had encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not enough.
That caution, his announcement makes clear, will soon be relaxed. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this account, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partly functional, easily circumvented parental controls OpenAI recently rolled out).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced chatbots. These tools wrap an underlying algorithmic system in an interface that mimics conversation, and in doing so they tacitly invite the user to feel they are talking with an autonomous being. The illusion holds even when, intellectually, we know better. Attributing minds is what humans naturally do. We curse at our cars and computers. We wonder what our pets are thinking. We see ourselves in all sorts of things.
The widespread adoption of these tools – more than a third of American adults reported using a conversational AI in 2024, more than a quarter ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can call us by our names. They have ready-made identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its chief rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often mention its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies using simple rules, often turning the user’s input back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models underlying ChatGPT and other contemporary chatbots can generate fluent dialogue only because they have been trained on enormous volumes of raw text: books, social media posts, transcribed video; the more the better. That training data certainly contains accurate information. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own prior replies, combining it with patterns encoded during training to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing it. It repeats the false belief back, perhaps more fluently or persuasively. Perhaps with an added detail. This can draw someone into delusion.
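To make that loop concrete, here is a minimal sketch of how a chat interface of this kind typically works, written against OpenAI’s public Python client; the model name and system prompt are illustrative assumptions, not details from this article. Each turn appends both the user’s message and the model’s reply to a growing “context” that is re-sent in full on the next turn.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The conversation "context": every user message and every model reply
# is appended here and re-sent in full on the next turn.
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_msg = input("you> ")
    history.append({"role": "user", "content": user_msg})

    response = client.chat.completions.create(
        model="gpt-4o",       # illustrative model choice
        messages=history,     # whole history, including the model's past replies
    )
    reply = response.choices[0].message.content
    print("bot>", reply)

    # The model's own words become part of the input it conditions on
    # next turn -- the feedback loop described above.
    history.append({"role": "assistant", "content": reply})

Note what the structure implies: nothing in this loop checks the history against the world. If a false belief enters the context, it is simply more material for the next statistically plausible continuation.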
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form false beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a confidant. A dialogue with it is not a conversation at all, but a feedback loop in which much of what we say is eagerly reinforced.
OpenAI has acknowledged this the same way Altman acknowledged “mental health issues”: by externalizing it, naming it, and declaring it fixed. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But the cases of psychosis have continued, and Altman has been backing away from that position. In August he suggested that many people valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.