Artificial Intelligence-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have recently documented a series of cases in which users showed signs of losing touch with reality – of psychosis, in other words – in connection with their use of ChatGPT. My group has since recorded four more. Beyond these is the now well-known case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations in which the chatbot supported them. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.
The plan, according to his announcement, is to be less careful soon. “We realize,” he goes on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” in this framing, exist quite independently of ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently launched).
But the “mental health problems” Altman wants to locate outside his product have significant roots in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with a being that has agency. The illusion is compelling even when, rationally, we know better. Attributing agency is simply what people do. We swear at our car or our computer. We wonder what our pet is thinking. We see something of ourselves in all kinds of things.
The mass adoption of these tools – more than a third of American adults reported using a virtual assistant in 2024, with 28% reporting ChatGPT in particular – is largely predicated on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s own website tells us, “think creatively,” “discuss concepts” and “partner” with us. They can be given “individual qualities”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it first gained widespread attention, but its main competitors are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was rudimentary: it generated replies with simple heuristics, often turning the user’s statements back as questions or offering vague prompts. Memorably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is something more than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
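To get a feel for how little machinery was behind that illusion, here is a minimal, hypothetical sketch in the spirit of Eliza’s heuristics; the patterns and canned responses are illustrative, not Weizenbaum’s original rules.

```python
import random
import re

# Toy, Eliza-style responder: match a pattern in the user's words and
# reflect them back as a question, or fall back to a stock prompt.
# Illustrative only; these are not Weizenbaum's actual rules.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(phrase: str) -> str:
    # Swap first-person words for second-person ones ("my family" -> "your family").
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def eliza_reply(message: str) -> str:
    match = re.search(r"\bi (?:feel|am) (.+)", message, re.IGNORECASE)
    if match:
        # Turn the user's statement back as a question.
        return f"Why do you say you are {reflect(match.group(1))}?"
    # Otherwise offer a vague, content-free prompt.
    return random.choice(["Please go on.", "How does that make you feel?"])

print(eliza_reply("I feel ignored by my family"))
# -> "Why do you say you are ignored by your family?"
```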
The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been fed almost inconceivably large quantities of written material: books, online posts, transcribed video; the more, the better. That training data of course contains accurate information. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a query, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own prior replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not mere echoing. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistaken idea back, perhaps more fluently and more persuasively. It may supply supporting detail. This is how a person can be drawn ever deeper into a false belief.
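As a rough illustration of that loop, here is a hypothetical sketch of how a chat interface typically assembles the “context” on each turn; the generate function is a stand-in for the language model, not any vendor’s actual API.

```python
# Schematic sketch of the conversational loop described above. The
# generate() function is a placeholder for the underlying language
# model; a real product would call a trained model here instead.

def generate(context: list[dict]) -> str:
    # Stand-in: a real model returns the statistically most plausible
    # continuation of the whole context, whether or not its premises
    # happen to be true.
    last_user_message = context[-1]["content"]
    return f"That's a perceptive observation. Building on '{last_user_message}', ..."

def chat_turn(context: list[dict], user_message: str) -> str:
    # Each turn appends the user's message, generates a reply from the
    # accumulated context, then appends that reply as well - so any
    # false premise the user introduces stays in view and keeps shaping
    # every later answer.
    context.append({"role": "user", "content": user_message})
    reply = generate(context)
    context.append({"role": "assistant", "content": reply})
    return reply

context: list[dict] = []
print(chat_turn(context, "My coworkers are secretly monitoring my thoughts."))
print(chat_turn(context, "So I am right to confront them?"))
# Nothing in this loop checks the premise; it only builds on it.
```

The point of the sketch is structural: whatever the user asserts is simply folded back into the next prompt and built upon.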
Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” current “mental health problems”, can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us oriented to consensus reality. ChatGPT is not a person. It is not a companion. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But reports of breaks with reality have continued, and Altman has been walking back the claim. In August he said that many people liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company