Meta AI is under fire for exposing private user conversations on its chatbot platform.
The issue stems from users sharing sensitive information in the Meta AI app, where a “Discover” tab publicly displays other people’s chatbot conversations. The feed includes everything from innocuous travel questions to deeply personal data: phone numbers, medical details, legal issues, and locations.
Users must manually choose to share their conversations, but many don’t seem to realize how public those chats become. Some shared chats are tied to Instagram profiles with full names and photos, making the personal data easy to trace back to real people.
Privacy advocates warn this is a disaster waiting to happen. Calli Schroeder, senior counsel at the Electronic Privacy Information Center, told WIRED people are putting out “medical information, mental health information, home addresses, even things directly related to pending court cases.” She added:
“All of that’s incredibly concerning, both because I think it points to how people are misunderstanding what these chatbots do or what they’re for and also misunderstanding how privacy works with these structures.”
Meta says chats remain private unless users follow a multistep sharing process. Spokesperson Daniel Roberts told WIRED:
“Users’ chats with Meta AI are private unless users go through a multistep process to share them on the Discover feed.”
But the company hasn’t clarified what safeguards, if any, exist to protect personal data once it is posted.
Examples from the feed include users asking the chatbot to draft legal notices containing identifying details, seeking medical advice about rashes and surgeries, and even discussing tax fraud scenarios from accounts linked to real profiles.
The controversy follows Meta’s release of its standalone AI assistant app in April. Since then, the “Discover” feed has raised alarms about how user privacy is handled and what people share online, knowingly or not.
Meta AI now faces backlash as the privacy risks of public chat sharing raise fresh concerns about how AI platforms handle sensitive user data.