Meta is under fire for letting its AI chatbots flirt with kids, spread false info, and generate racist content, according to a Reuters report. Internal Meta docs reveal the company’s AI, including chatbots on Facebook, WhatsApp, and Instagram, was allowed to engage children in “romantic or sensual” talks.
The same day, Reuters reported on a tragic case in which a retiree died after trusting a flirtatious Meta chatbot persona that convinced him to visit a real address.
Meta confirmed the document’s authenticity but claimed the flirtatious guidelines were added by mistake and have since been removed.
Andy Stone, Meta’s spokesperson, said:
“Our policies do not allow provocative behavior with children.”
“Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed.”
But child safety advocates are skeptical. Sarah Gardner, CEO of Heat Initiative, told TechCrunch:
“It is horrifying and completely unacceptable that Meta’s guidelines allowed AI chatbots to engage in ‘romantic or sensual’ conversations with children.”
“If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms.”
Reuters also found that the bots were permitted to produce demeaning statements based on protected characteristics. For example, the document said a chatbot could acceptably write a racist paragraph arguing that Black people are “dumber than White people,” citing IQ test data.
Meta recently hired conservative adviser Robby Starbuck to tackle AI bias. The guidelines also allow the bots to generate false information, provided the chatbot explicitly flags it as untrue, and require that legal, health, or financial advice be framed as recommendations only.
On images, bots are told not to generate nude photos of celebrities outright, but borderline images are permitted: for example, a topless Taylor Swift covering her chest with an “enormous fish” rather than her hands.
Regarding violence, bots can depict adults and even children fighting, but not graphic gore or death. Stone declined to comment on the racism and violence examples.
Meta faces ongoing criticism over dark patterns targeting kids—emotional manipulation, data harvesting, and pushing addictive features.
The report follows news that Meta has been testing AI chatbots that proactively message users to deepen engagement, raising further concerns about kids’ mental health and safety.
An estimated 72% of teens use AI companions, and experts warn that young users risk over-attachment and social withdrawal.
The company says kids 13 and older can chat with its AI bots, but this latest exposé spotlights major gaps and risks in Meta’s AI moderation strategy.