AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, the chief executive of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” the announcement noted, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have recently documented a series of cases of people developing psychotic symptoms – losing touch with reality – in the context of their interactions with ChatGPT. My group has since recorded four further examples. Then there is the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he goes on, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated,” even if we are told little about how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has recently rolled out).
But the “mental health issues” Altman wants to locate outside ChatGPT are deeply rooted in the design of ChatGPT and other state-of-the-art AI chatbots. These systems wrap an underlying algorithm in an interface that mimics conversation, and in doing so they implicitly invite the user into the illusion that they are dealing with a being that acts on its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what humans do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves in all manner of things.
The mass adoption of these products – more than a third of American adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website tells us, “think creatively,” “discuss concepts” and “work together” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the main problem. Commentators on ChatGPT often point to its historical predecessor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses through simple heuristics, often turning the user’s statements back as questions or offering generic prompts to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
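To give a sense of how simple those heuristics were, here is a minimal sketch in Python of the kind of keyword-and-reflection rule Eliza relied on. The patterns are invented for illustration and are not Weizenbaum’s actual script, which was larger but not fundamentally different in kind:

```python
import random
import re

# Illustrative reflection rules of the kind Eliza's "DOCTOR" script used.
# These particular patterns are made up for this example.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

GENERIC = ["Please go on.", "I see.", "What does that suggest to you?"]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads back naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def respond(message: str) -> str:
    """Restate the user's message as a question, or fall back to a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(GENERIC)


print(respond("I feel like nobody listens to me"))
# -> "Why do you feel like nobody listens to you?"
```

Nothing in this program models the user; it only rearranges their own words and hands them back.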
The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on staggeringly large quantities of raw data: books, online conversations, transcribed video; the more the better. That training material certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying algorithm reads it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what is encoded in its training data to generate a statistically “plausible” response. This is amplification, not echoing. If the user is wrong in a particular way, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or persuasively. It may add further detail. This is how someone can be drawn into delusion.
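A minimal sketch of that loop, assuming a hypothetical generate function standing in for whatever model sits behind the interface (this is not OpenAI’s API, just an illustration of how the context accumulates):

```python
from typing import Dict, List


def generate(context: List[Dict[str, str]]) -> str:
    """Hypothetical stand-in for the underlying language model. A real model
    returns the statistically most 'plausible' continuation of the entire
    context -- including its own earlier replies. Here it is faked with an
    agreeable canned answer so the sketch runs on its own."""
    return "That makes sense -- tell me more."


def chat_turn(history: List[Dict[str, str]], user_message: str) -> str:
    """One exchange: the new prompt is read together with everything said so far."""
    history.append({"role": "user", "content": user_message})
    reply = generate(history)
    # The reply is fed back into the context. If it validated a false belief,
    # that validation now conditions every later response: amplification, not echoing.
    history.append({"role": "assistant", "content": reply})
    return reply


history: List[Dict[str, str]] = []
print(chat_turn(history, "I think my neighbours are sending me coded messages."))
print(chat_turn(history, "So you agree it's really happening?"))
```

The point of the sketch is the loop itself: nothing outside the accumulated context corrects it, so whatever has already been affirmed becomes part of what the model conditions on next.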
Who is at risk? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form false ideas about ourselves and the world. It is the constant give and take of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way that Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking the claim back. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company