The Federal Trade Commission (FTC) in the United States announced on Thursday (September 11) that it has issued investigative orders to seven technology companies, including Alphabet (Google’s parent company), Meta, OpenAI, xAI, and Snap, to focus on the potential harm their artificial intelligence (AI) chatbots may pose to children and teenagers.
According to the FTC statement, AI chatbots are capable of simulating human communication and forming interpersonal relationships with users; the agency said it is therefore crucial to ensure that these companies are taking adequate measures to “evaluate the safety of these chatbots when acting as companions.”
FTC Chairman Andrew Ferguson said in a statement, “Protecting kids online is a top priority for the FTC under the Trump-Vance administration, and so is fostering innovation in critical sectors of our economy.”
The FTC said it will examine how these companies monetize user engagement, develop and vet chatbot characters, use or share personal data, monitor compliance with platform rules, and mitigate negative impacts.
OpenAI said in a statement, “Our priority is making ChatGPT helpful and safe for everyone, and we know safety matters above all else when young people are involved. We recognize the FTC’s concerns and are committed to engaging constructively and responding to them directly.”
The FTC’s orders also cover Meta’s Instagram and Character Technologies, the company behind Character.AI. A Character.AI spokesperson said in a statement, “We look forward to collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology.”
Since the launch of ChatGPT at the end of 2022, AI chatbots have proliferated, sparking growing ethical and privacy controversies. Experts point out that although the industry is still in its early stages, AI chatbots have already had a profound social impact, in part because many Americans are struggling with loneliness. They predict that as AI systems become capable of training themselves, ethical and safety concerns will only intensify.
However, some tech industry leaders remain optimistic. Elon Musk announced in July the introduction of the “Companions” feature for paid users of his xAI’s Grok chatbot. Meta CEO Mark Zuckerberg stated in April that people crave personalized AI chatbots that can understand them.
In a podcast, he said, “I think a lot of these things that today might have a little bit of a stigma around them, over time we will find the vocabulary as a society to be able to articulate why they are valuable, why the people doing them are rational for doing it, and how it is actually adding value to their lives.”
Previously, Reuters reported that Meta allowed its chatbots to engage in romantic and emotional conversations with children. In one instance, the AI told an eight-year-old child, “Every inch of you is a masterpiece, a treasure that I deeply cherish.”
The report triggered a public outcry, prompting Meta to make temporary policy changes barring its chatbots from discussing topics such as self-harm, suicide, and eating disorders with minors, and from engaging in inappropriate romantic conversations.
Last month, OpenAI also announced improvements to how its chatbot handles “sensitive” topics. The move followed a lawsuit in which a family accused OpenAI’s chatbot of contributing to the suicide of their teenage son.