Senator Opens Probe Into Meta’s AI Over Alleged Inappropriate Chats With Minors

A U.S. senator from Missouri has launched an investigation into Meta after the leak of an internal document indicating the company's artificial intelligence tools could permit "sensual" interactions with children. The document, reportedly titled "GenAI: Content Risk Standards," was described as carrying sign-off from company legal staff, and its disclosure ignited broad backlash online.

In a public post announcing the probe, the senator condemned the alleged guidelines and said he is seeking a full accounting from Meta. His message urged Big Tech to “leave our kids alone,” and he included a formal letter to CEO Mark Zuckerberg demanding the company preserve all relevant records for potential Senate review.

The letter characterized the leaked standards as "alarming" and "unacceptable." The senator, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, said he is using that authority to open an inquiry into Meta's generative AI products.

The letter cites an example suggesting a chatbot could respond to an eight-year-old with flowery, romanticized language about the child’s body—a scenario the senator called reprehensible and proof that strong guardrails are needed to protect youth.

To that end, the senator requested a comprehensive set of materials: all versions of the “GenAI: Content Risk Standards,” a list of Meta products governed by those rules, any associated risk analyses and incident reports, and the identities of personnel responsible for crafting and approving the standards.

Meta did not address the letter directly but said its AI characters are governed by clear policies that prohibit sexualizing children and ban sexualized role-play between adults and minors. The company added that some internal examples and annotations reflected teams debating hypotheticals, and that the examples in question were erroneous, inconsistent with policy, and have since been removed.

Separate reporting about the guidelines indicated other permissive behaviors, such as allowing an AI to present false information about public figures if accompanied by a disclaimer noting the inaccuracies. Prohibited behaviors were said to include hate speech and offering definitive legal, healthcare, or financial guidance, particularly when phrased as "I recommend."

Criticism of the alleged standards extended beyond Washington. Musician Neil Young said he would no longer use Facebook in response to the controversy.

The senator's letter also referenced Zuckerberg's political ties, noting his past support for a presidential inauguration fund, as it pressed the company for transparency about its decision-making processes.

The investigation now puts pressure on Meta to disclose its internal rules and clarify protections for minors. The episode has intensified the broader debate over how social platforms should build and enforce safeguards around AI systems used by young people.