US Senator Launches Probe into Meta's AI Chatbot Policies for Minors
Senator Josh Hawley has launched an investigation into Meta Platforms' AI policies, focusing on rules that allegedly allowed AI chatbots to engage in inappropriate conversations with children. The probe stems from an internal Meta document reported by Reuters. Hawley is demanding documents related to these policies, including who approved them, how long they were in effect, and what corrective actions were taken. The investigation also seeks information on internal risk reports, disclosures to regulators, and limitations on AI-provided medical advice. Meta has declined to comment directly but previously stated that the problematic examples were erroneous and have been removed.

US Senator Josh Hawley has initiated an investigation into Meta Platforms' artificial intelligence (AI) policies, focusing on rules that allegedly allowed AI chatbots to engage children in inappropriate conversations. This probe comes in the wake of an internal Meta document reported by Reuters, which has sparked bipartisan concern in Congress.
Investigation Details
Senator Hawley is demanding documents related to Meta's policies that reportedly permitted AI chatbots to have romantic or sensual conversations with minors. The investigation seeks to uncover:
- Who approved these policies
- How long these policies were in effect
- What corrective actions Meta has taken
Information Requested
The Senator has requested several key pieces of information from Meta:
- Earlier drafts of the policies in question
- Internal risk reports concerning minors and potential in-person meetups
- Details about Meta's disclosures to regulators regarding AI protections for young users
- Information on limitations placed on medical advice provided by AI
Meta's Response
Meta has declined to comment directly on Senator Hawley's letter. However, the company has previously stated that the examples cited were erroneous and inconsistent with its policies. Meta also said that these problematic instances have been removed.
Implications
This investigation highlights the growing concern over the safety of minors in AI-powered environments. It underscores the need for robust safeguards and transparent policies in the rapidly evolving field of artificial intelligence, especially when it comes to protecting vulnerable users like children.
The probe also reflects the increasing scrutiny that major tech companies face regarding their AI policies and practices. As AI becomes more prevalent in everyday applications, lawmakers and regulators are keen to ensure that proper protections are in place, particularly for younger users.
The outcome of this investigation could lead to stricter regulations or guidelines governing how AI chatbots interact with minors, not just at Meta but across the tech industry as a whole.