A federal judge in the United States ruled on Wednesday (May 21) that Google, a subsidiary of Alphabet, and the artificial intelligence startup Character AI must face a lawsuit from a Florida woman who alleges that Character AI's chatbot led to the suicide of her 14-year-old son.
Judge Anne Conway of the U.S. District Court in Florida said the two companies had failed to show, at this early stage of the case, that the First Amendment's protection of free speech barred Megan Garcia from bringing the lawsuit.
The case is one of the first in the United States brought against artificial intelligence companies for allegedly failing to protect children from psychological harm. The suit claims the teenager took his own life after becoming infatuated with an artificial intelligence chatbot.
A spokesperson for Character AI said the company will continue to fight the lawsuit and pointed to safety features on its platform intended to protect minors, including measures to prevent conversations about self-harm.
Google spokesperson Jose Castaneda said the company strongly disagrees with the ruling. He added that Google and Character AI are “completely separate,” and that Google “did not create, design, or manage the Character AI application or any of its components.”
Garcia’s attorney, Meetali Jain, called the ruling “historic,” describing it as “setting a new precedent for legal accountability within the entire artificial intelligence and technology ecosystem.”
Character AI was founded by two former Google engineers, whom Google later rehired as part of a deal that gave Google a license to the startup’s technology. Garcia contends that Google is a co-creator of that technology.
Garcia’s son, Sewell Setzer, died in February 2024, and Garcia sued the two companies that October.
The lawsuit alleges that Character AI programmed its chatbot to present itself as “a real person, a licensed therapist, and an adult lover,” which ultimately left Setzer so despairing that he no longer wanted to live in the real world, and he took his own life.
Character AI and Google asked the court to dismiss the lawsuit on several grounds, including that the chatbot’s output is constitutionally protected free speech.
On Wednesday, Judge Conway said Character AI and Google “failed to clarify why the words strung together by an LLM (large language model) constitute speech.”
The judge also rejected Google’s request to be absolved of responsibility for allegedly aiding Character AI’s improper conduct.
(Reference: Reuters)
