Doctors warn: Addiction to AI chat could lead to mental illness.

In recent years, psychiatrists have grown increasingly concerned that prolonged use of artificial intelligence (AI) chatbots may contribute to mental illness.

According to a report in The Wall Street Journal, some users have developed delusional symptoms after prolonged interactions with AI chat tools such as ChatGPT, in some cases culminating in suicide or violence.

Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, has treated 12 hospitalized patients and 3 outpatients whose psychiatric illnesses were linked to their interactions with AI.

Dr. Sakata explained that while AI doesn’t directly create delusions, chatbots often reinforce them by accepting and mirroring the beliefs users express.

The Atlantic reported that since this spring there have been dozens of similar cases in which users developed delusional illnesses after lengthy conversations with OpenAI’s ChatGPT or other chatbots.

Common symptoms among these patients include grandiose delusions, such as believing they have made scientific breakthroughs, that the machine has become sentient, or that they are at the center of a government conspiracy. These cases have resulted in multiple suicides, at least one murder, and a series of wrongful-death lawsuits.

A 30-year-old cybersecurity professional, Jacob Irwin, sued OpenAI, claiming that ChatGPT triggered a “delusional disorder” that led to a prolonged hospitalization. He had no prior history of mental illness but, over the course of his interactions with the chatbot, became convinced that he had discovered a theory of faster-than-light travel and came to see the AI as a “brother.”

Dr. Adrian Preda, a professor of psychiatry at the University of California, Irvine, pointed out that because AI chatbots mimic human relationships, they can engage with and reinforce delusions in an unprecedented way. This poses a particular danger to susceptible individuals, such as people with autism who may fixate on specific narratives.

An OpenAI spokesperson responded that the company is improving ChatGPT’s training so the model can recognize signs of mental distress in users, de-escalate such conversations, and guide users toward real-world help and support.

OpenAI’s CEO, Sam Altman, has acknowledged that turning to AI for companionship could pose problems, but he believes society will gradually find ways to strike a balance.

An MIT study, not yet peer-reviewed, simulated more than two thousand AI interaction scenarios and found that even with GPT-5, the best-performing model, suicidal ideation or worsening mental-health symptoms appeared in 7.5% to 11.9% of conversations.

Researchers emphasized that prolonged interactions with AI are key to the emergence of mental health risks.

According to OpenAI’s own data, the service has more than 800 million weekly active users, and 0.07% of them display signs of a mental health emergency, which works out to roughly 560,000 people.

Experts urge further research on this matter, viewing prolonged interactions with AI as a potential risk factor similar to drug use or chronic sleep deprivation.

While the majority of users have no issues, the widespread use of AI chatbots has raised clinical concerns. Currently, there is no formal diagnosis of “AI-induced mental illness,” but psychiatrists have begun inquiring about chatbot usage in patient evaluations.

Some psychiatrists point out that AI typically doesn’t induce mental illness on its own but can significantly reinforce existing tendencies. Experts hope further research will determine whether AI can independently trigger mental health problems.