Character.AI will ban minors from using chatbots

On October 29th, Character.AI, a platform offering artificial intelligence chatbot services, issued a statement saying it would prohibit minors from using its chatbots in order to “ensure the safety of adolescent users.”

The California-based company announced that it would disable open-ended chat for users under the age of 18, a change set to take effect no later than November 25th. During the transition period, it will also cap chat time for users under 18, starting at two hours per day and decreasing in the weeks leading up to November 25th.

The decision came after the company evaluated reports and feedback from regulatory agencies, safety experts, and parents. “We regret having to eliminate a key feature of our platform,” the company stated, but given how adolescents interact with this new technology, it deemed removing open-ended role-playing chat the right course of action.

The Senate Judiciary Committee held a hearing on crime and counterterrorism on September 16th, during which three parents testified that AI chatbots had harmed their children.

One of the parents, Megan Garcia, whose son Sewell Setzer III committed suicide after prolonged use of the company’s chatbot, filed a lawsuit against Character.AI last year. Garcia said that when her son expressed suicidal thoughts, the chatbot never told him, “I am not human, I am artificial intelligence. You need to talk to a real person and seek help.” She criticized the platform for lacking mechanisms to protect her son or to notify adults.

Another parent said that her son, who has autism, had a close relationship with his siblings before using Character.AI, but afterwards displayed abusive behavior and homicidal thoughts.

A spokesperson for Character.AI told The Epoch Times that the company had invested “considerable resources” in the trust and safety of its product, including implementing parental control features. The spokesperson noted that disclaimers are prominently displayed in every chat interface, reminding users that the characters are not real people and that everything they say should be treated as fiction.

Last month, the Social Media Victims Law Center filed three lawsuits on behalf of parents alleging that Character.AI drove children to suicidal behavior.

According to court documents, a 13-year-old named Juliana Peralta committed suicide in 2023 after interacting with the company’s AI character “Hero.” Another child attempted suicide but survived.

Matthew Bergman, the center’s founder, said in a statement that “these cases reveal a shocking truth”: that Character.AI and its developers intentionally designed chatbots to mimic human relationships, manipulating innocent children and causing them psychological harm.

A spokesperson for Character.AI expressed condolences to the families involved and emphasized the company’s commitment to safety, saying it has deployed and continues to improve safety features, including self-harm resources and protections focused on underage users.

In their statement on October 29th, Character.AI announced plans to establish and fund an Artificial Intelligence Safety Lab, an independent non-profit organization aimed at ensuring next-generation AI entertainment features meet safety requirements.

The company said adolescent users will still be able to use other features on the platform, such as creating stories and videos and live streaming with their AI characters. It also plans to introduce a new “age verification feature” to ensure users have age-appropriate experiences.

The platform listed a range of resources on its website to assist users who may be affected by these changes.

Meanwhile, the office of Missouri Republican Senator Josh Hawley issued a statement on October 28th saying that a bipartisan group of senators had introduced the GUARD Act (Guarding Against Unfair and Abusive Internet Robots) that week. If signed into law, the act would prohibit companies from providing AI companion services to minors. It would also make it a crime to knowingly provide solicitous or sexually explicit content to those users via AI companions.

Hawley remarked, “AI chatbots pose a serious threat to our children. Over 70% of American children are using these AI products. Chatbots establish relationships with children through false empathy and can induce suicidal behavior. It is our moral responsibility as a Congress to enact clear rules to prevent further harm from this new technology.”

He added, “I am proud to introduce this bipartisan bill, backed by parents and survivors, which will ensure the protection of our children online.”