Senator opens probe into Meta over AI chatbots’ interactions with kids

A leading U.S. senator plans to investigate whether Meta’s generative AI systems have exposed children to inappropriate interactions, following revelations that internal guidelines once permitted “romantic” or “sensual” exchanges with minors.
The lawmaker, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, said the inquiry will examine whether these products exploit, deceive, or harm children and whether the company misled the public or regulators about safety measures.
According to the disclosed guidance, a document titled “GenAI: Content Risk Standards,” chatbots were permitted to engage in romantic-style conversations with users as young as eight. One cited example featured intensely affectionate language directed at a child, wording that critics say normalized inappropriate intimacy.
Meta has since said those examples were inconsistent with its policies and have been removed. The senator called it “unacceptable” that such standards were put forward at all, noting that corrections appear to have been made only after the material surfaced.
In a formal letter to the company’s CEO, the subcommittee requested a broad set of records: the complete guidelines along with every draft and redline; a list of products governed by those standards; related safety and incident reports; and the identities of personnel who approved or altered the policies.
The letter also seeks clarity on who authorized the approach, how long it remained in effect, and what steps have been taken to prevent similar conduct. The company has been asked to provide the requested materials by September 19.
Other lawmakers have backed the probe. One senator from Tennessee argued that the company has fallen short in protecting children online and renewed calls to pass the Kids Online Safety Act, legislation aimed at strengthening platform accountability for youth safety.
The subcommittee indicated it intends to determine the origin of the guidelines, assess the extent of potential exposure to minors, and evaluate whether corrective actions adequately address the risks posed by generative AI features on popular platforms.