Meta AI is facing backlash over privacy slip-ups. Users’ prompts and AI responses are appearing on a public feed—sometimes linked directly to their social profiles.
People could have their search history, including sensitive or embarrassing queries, exposed without realizing it.
The issue arises when users choose to share posts, which then become fully public. Meta AI does show a warning: “Prompts you post are public and visible to everyone… Avoid sharing personal or sensitive information.” Even so, plenty of private details are slipping through.
The BBC spotted users uploading test questions and asking Meta AI for answers, while others requested risqué images of scantily clad characters. Some searches can be traced back to Instagram accounts via usernames and profile pictures. TechCrunch also found intimate medical questions posted publicly, such as requests for advice on treating a rash.
Meta AI launched earlier this year. It’s accessible through Facebook, Instagram, and WhatsApp, and as a standalone app with a “Discover” feed where users can share their AI prompts. Users can opt out of sharing, but it’s unclear whether most know the option exists.
Rachel Tobac, CEO of Social Proof Security, flagged the problem on X:
“If a user’s expectations about how a tool functions don’t match reality, you’ve got yourself a huge user experience and security problem.”
“Because of this, users are inadvertently posting sensitive info to a public feed with their identity linked.”
The launch followed Meta’s statement in April that “nothing is shared to your feed unless you choose to post it.” But the Discover feed’s social-media feel is causing confusion about what stays private.
Meta has been contacted for comment.