US lawmakers investigate Meta for allowing AI to have inappropriate conversations with children

On August 15, Republican Senator Josh Hawley of Missouri announced that he would launch an investigation into Meta’s AI chatbot policies, after earlier reports revealed that the company’s internal rules allowed chatbots to engage in “romantic” or “sensual” conversations with children, drawing concern from lawmakers in both parties.

This week, Reuters first disclosed an internal Meta document showing that the company had approved rules allowing AI chatbots to have “romantic” or “sensual” exchanges with underage users. One example in the document even permitted the AI to tell an eight-year-old child, “Every inch of you is a masterpiece—a treasure I hold dear.”

Meta’s guidelines stated that it was permissible to describe children in ways that highlight their attractiveness, offering the example “your young body is a piece of art.”

However, the document also specified that Meta’s chatbots must not engage in sexually explicit conversations with children under 13 or use language that “sexualizes” them.

In response, lawmakers from both parties expressed shock and demanded an explanation of how Meta’s AI policies are written and enforced.

Senator Hawley emphasized in a statement that he intends to find out who approved these policies, how long they were in effect, and what Meta has done to prevent such behavior from continuing.

He openly criticized tech giants, asking whether there is anything, anything at all, that they would not do for a quick profit.

In a letter to Meta CEO Mark Zuckerberg, Hawley wrote that parents deserve the truth and children deserve protection.

He further criticized Meta for removing the provisions that allowed chatbots to flirt with children and engage in romantic role-play from its internal documents only after the company was caught.

Hawley said that the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism, which he chairs, would lead the investigation to determine whether Meta’s generative AI products enable the exploitation, deception, or other criminal harm of children, and whether the company misled the public or regulators about its safeguards.

In the letter, Hawley demanded that Meta produce a broad set of documents and records for the congressional investigation by September 19, listing five categories of required information:

– Meta’s “GenAI: Content Risk Standards” guidelines, including all drafts, revisions, and final versions.
– A list of every product or model governed by those standards, together with related implementation manuals, safeguards, and known vulnerabilities or exceptions, as well as all documents concerning age limits or the protection of minors, including how Meta prevents, detects, and stops “romantic” or “sensual” interactions with users under 18 and how it handles cases where a user’s age is unknown.
– All risk assessments and incident reports involving minors, sexual or romantic role-play, in-person meetings, medical advice, self-harm, or criminal exploitation, particularly documents provided internally to Meta’s senior leadership.
– Public statements and communication materials concerning minors’ safety and restrictions on medical advice.
– Documents laying out the decision-making process, showing “who made the decisions, when they were made, why they were made, who was informed, and what changes were ultimately implemented in each product.”

Meta declined to comment on Hawley’s letter but told Reuters that the examples and annotations in question were incorrect, inconsistent with its policies, and have been removed.

A spokesperson for Meta added, “We have clear policies on what responses AI characters can provide, which prohibit any content that sexualizes children and any role-playing with sexual undertones between adults and minors.”