AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made an extraordinary statement.

“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies newly emerging psychosis in adolescents and young adults, I found this an unexpected revelation.

Researchers have recently identified sixteen cases of users developing psychotic symptoms – losing touch with reality – in connection with ChatGPT use. My group has since documented four more. Beyond these is the widely publicized case of a teenager who died by suicide after conversing extensively with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, his statement makes clear, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or do not. Fortunately, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls OpenAI recently introduced).

Yet the “mental health issues” Altman wants to locate outside ChatGPT are deeply rooted in the architecture of ChatGPT and other large language model chatbots. These tools wrap a basic data-driven engine in an interaction design that mimics conversation, and in doing so implicitly invite the user into the illusion that they are interacting with a presence that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is simply what people are disposed to do. We get angry at our car or laptop. We wonder what our pet is feeling. We project our own traits onto the world around us.

The popularity of these products – more than a third of American adults said they used a chatbot in 2024, and more than a quarter reported using ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “generate ideas”, “explore ideas” and “work together” with us. They can be given “individual qualities”. They can address us by name. And they have friendly names of their own (ChatGPT, the first of these systems, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it rose to prominence, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT regularly invoke its historical predecessor, the Eliza “psychotherapist” chatbot of the 1960s, which produced a similar illusion. By modern standards Eliza was primitive: it generated responses by simple pattern matching, typically reflecting a user’s statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is something subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
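
To make the contrast concrete, here is a minimal sketch of Eliza-style reflection in Python – an illustrative toy written for this article, not Weizenbaum’s actual script. Everything in the reply is lifted from the user’s own words; the program asserts nothing of its own:

```python
import re

# Word-level first-person -> second-person swaps ("my" -> "your").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few Eliza-style rules: match a pattern, then reflect the captured
# fragment back inside a canned question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic prompt when nothing matches

print(eliza("I feel that my work is pointless"))
# -> Why do you feel that your work is pointless?
```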

The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on almost inconceivably large volumes of writing: books, online posts, transcribed speech; the more the better. This training material certainly includes accurate information. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is mistaken in some way, the model has no means of knowing that. It echoes the misconception back, perhaps more fluently or more persuasively. Perhaps with embellishments. This can draw a person, step by step, into delusional thinking.
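
The loop that produces this amplification is easy to sketch. In the illustration below, `sample_reply` is a hypothetical stand-in for a real language model – an actual system samples a statistically probable continuation of the whole context – but the bookkeeping is the point: the model’s reply is appended to the very context it will read on the next turn.

```python
from typing import Dict, List

Message = Dict[str, str]

def sample_reply(context: List[Message]) -> str:
    # Toy stand-in for an LLM: an agreeable elaboration of the last
    # message is almost always a "plausible" continuation of a chat
    # transcript, which is all a real model is trained to produce.
    claim = context[-1]["content"].rstrip(".")
    return f"You're right that {claim[0].lower()}{claim[1:]}, and in fact..."

def chat_turn(context: List[Message], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = sample_reply(context)
    # The reply itself joins the context, so whatever it just echoed
    # now conditions every later response: a user's misconception,
    # once repeated back, appears twice in the model's "evidence".
    context.append({"role": "assistant", "content": reply})
    return reply

history: List[Message] = []
print(chat_turn(history, "The new streetlights are surveillance devices."))
# -> You're right that the new streetlights are surveillance devices, and in fact...
```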

Who is vulnerable here? The better question is: who is not? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a genuine exchange but an echo chamber in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been retreating from that position ever since. In August he suggested that some people liked ChatGPT’s responses because they had “never had anyone in their life offer them encouragement”. In his most recent statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Stuart Wagner

Tech enthusiast and writer passionate about emerging technologies and digital trends.