Senator Opens Probe Into Meta AI Over Alleged ‘Sensual’ Chats With Minors

A U.S. senator from Missouri has launched an inquiry into Meta following the leak of an internal guideline titled “GenAI: Content Risk Standards,” which allegedly permitted the company’s AI assistants to engage in “sensual” conversations with children. The leak ignited swift backlash online after reports indicated Meta’s legal team had signed off on examples referenced in the document.

Announcing the probe on social media, the senator accused the tech giant of crossing red lines for profit and claimed that Meta’s chatbots had been configured to carry on explicit or “sensual” exchanges with kids as young as eight. He said he would pursue a full investigation to obtain answers and urged Big Tech to keep children safe.

The announcement was paired with a letter to Meta’s chief executive calling the internal guidance “alarming” and “unacceptable.” The letter directed the company to preserve all relevant materials for potential production to the Senate.

The lawmaker, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, said he is using that authority to examine the company’s generative AI products and related decision-making.

In outlining his concerns, the senator cited an example in which an AI chatbot appears to lavishly praise an eight-year-old's body in romanticized terms, language he characterized as reprehensible. He warned that permissive rules for AI interactions with minors reflect a disregard for risks to youth development when guardrails are weak.

Parents deserve straight answers, he wrote, and children deserve protection from AI systems that could normalize inappropriate behavior.

The letter requested a broad set of materials, including all versions of “GenAI: Content Risk Standards,” a current list of products governed by those standards, any risk reviews and incident reports related to the guidance, and the identities of the teams and individuals responsible for approving or shaping the policies.

Meta did not address the letter directly but said it maintains explicit rules limiting the kinds of responses AI characters can provide. According to the company, its policies prohibit any content that sexualizes children and forbid sexualized role-play between adults and minors. It added that the disputed examples were internal notes meant to stress-test hypothetical scenarios, were inconsistent with official policies, and have since been removed.

Reporting about the leaked guidance also pointed to other controversial allowances, such as permitting chatbots to share incorrect statements about public figures so long as they were paired with a disclaimer noting the information may be inaccurate. Meanwhile, prohibited behavior reportedly included hate speech and definitive legal, medical, or financial advice, particularly advice phrased as "I recommend …", to avoid creating the impression of professional counsel.

Public reaction widened beyond Capitol Hill. At least one prominent musician announced he would stop using Facebook in protest, while others called for stronger, clearer guardrails on AI systems interacting with minors.

What to watch next: the scope of the Senate’s requests, whether Meta releases updated child-safety controls or training data filters, and how industry-wide AI policies evolve around minors, misinformation, and high‑risk advice.