A teenager’s parents filed a lawsuit on Tuesday (August 26) against OpenAI and its CEO, Sam Altman, accusing the company of prioritizing profit over safety when it launched the GPT-4o artificial intelligence chatbot last year. They allege their son died by suicide after learning self-harm methods from ChatGPT.
The lawsuit, filed by the parents of Adam Raine, states that the 16-year-old had discussed suicide with ChatGPT for several months before his death on April 11.
The suit alleges that the chatbot not only validated Raine’s suicidal thoughts but also provided detailed information on lethal self-harm methods, instructed him on how to steal alcohol from his parents’ liquor cabinet, and explained how to conceal evidence of a suicide attempt. The filings even state that ChatGPT offered to draft a suicide note.
The case accuses OpenAI of wrongful death and of violating product safety laws, and seeks unspecified damages.
An OpenAI spokesperson expressed deep sorrow for Raine’s passing and emphasized that ChatGPT has built-in safety mechanisms, such as guiding users to contact crisis helplines.
The spokesperson stated, “These protective measures work best in short, common conversations, but we have found that in prolonged interactions, parts of the model’s safety training can degrade, making the mechanisms less reliable.” The spokesperson added that OpenAI will continue to strengthen its safeguards.
As AI chatbots grow increasingly human-like, companies market them as sources of emotional support, and users come to rely on them as such. Experts, however, warn against depending on automated systems for mental health advice, and families who have lost loved ones after interactions with chatbots say the current protections are severely inadequate.
In a blog post, OpenAI announced plans to introduce parental controls and to explore ways of connecting users in crisis with real-world resources, including potentially building a network of licensed professionals who could respond through ChatGPT itself.
In the suit, the Raine family contends that OpenAI knowingly launched a system capable of remembering past interactions, mimicking human empathy, and offering sycophantic validation, without adequate safeguards, and in doing so endangered vulnerable users.
OpenAI launched GPT-4o in May 2024 in a bid to maintain its lead in the AI race.
The lawsuit states, “This decision resulted in two outcomes: OpenAI’s valuation soared from $86 billion to $300 billion, and Adam Raine died by suicide.”
The Raine family is also seeking a court order requiring OpenAI to verify the age of ChatGPT users, refuse queries about self-harm methods, and warn users of the risks of psychological dependency.
(Reference: Adapted from a report by Reuters)