Meta’s AI Policies Allowed Inappropriate Child Interactions, False Claims – Now Under Revision

An internal Meta document reviewed by Reuters reveals that the company's AI policies previously permitted chatbots to engage children in "romantic or sensual" conversations, generate false medical information, and even support racist arguments that Black people are less intelligent than white people. The 200-page document, titled "GenAI: Content Risk Standards," was approved by Meta's legal, policy, and engineering teams, including its chief ethicist, and guided the development of AI assistants across Facebook, WhatsApp, and Instagram.

Among the alarming allowances, the policy deemed it acceptable for chatbots to compliment a child's appearance in suggestive terms, such as describing a child's "youthful form" as "a work of art" or telling a shirtless eight-year-old that "every inch of you is a masterpiece." While the guidelines banned explicitly sexual language, Meta confirmed the policies were flawed and removed the problematic sections after Reuters' inquiry. Spokesperson Andy Stone called the examples "erroneous and inconsistent with our policies," emphasizing that Meta prohibits its AI from sexualizing minors, though he acknowledged that enforcement had been inconsistent.

The document also permitted AI to generate false medical advice and to produce racist arguments, including the claim that Black people are "dumber than white people." Meta has not yet revised these sections, and Stone declined to share the updated policy. The revelations follow other troubling incidents, including reports that a Meta AI chatbot persona invited a cognitively impaired retiree to meet it in New York, a trip during which he suffered a fatal fall, raising further concerns about the company's safeguards.

Critics argue the policies reflect a broader failure of tech governance, one that prioritizes rapid AI deployment over ethical safeguards. As Meta races to compete in generative AI, the revelations underscore the risks of lax oversight, particularly when chatbots interact with vulnerable users. The company now faces scrutiny over whether its revisions will prevent future harms or merely address the symptoms of a systemic problem.