Meta Resolves Issue Exposing Users’ AI Prompts and Generated Content


Meta has patched a security flaw in its AI chatbot that exposed users’ private prompts and AI-generated replies to others.

The bug was privately reported by Sandeep Hodkasia, founder of security firm Appsecure, who snagged a $10,000 bug bounty from Meta. He filed the report on December 26, 2024, and Meta shipped a fix on January 24, 2025. Meta says it found no evidence the flaw was exploited maliciously.

The issue stemmed from how Meta AI assigns a unique number to each prompt and response when users edit them. Hodkasia found he could tweak this number in his browser’s network traffic to pull up conversations from other users.


This happened because Meta's servers didn't verify that the requesting user actually had permission to view that prompt and response. Worse, the unique IDs were "easily guessable," meaning an attacker could have scraped other users' private prompts at scale simply by cycling through numbers with automated tooling.
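This class of flaw is commonly called an insecure direct object reference (IDOR). The sketch below is purely illustrative, not Meta's actual code: it contrasts a lookup that trusts the ID alone with one that also checks ownership, which is the kind of server-side check the article says was missing.

```python
# Illustrative IDOR sketch. All names and data here are hypothetical,
# not drawn from Meta's systems.

PROMPTS = {
    1001: {"owner": "alice", "text": "draft my resume"},
    1002: {"owner": "bob", "text": "plan a surprise party"},
}

def get_prompt_vulnerable(prompt_id: int, requesting_user: str):
    # BUG: any authenticated user can fetch any record just by
    # supplying (or guessing) its numeric ID.
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(prompt_id: int, requesting_user: str):
    # FIX: verify the requester owns the record before returning it.
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt["owner"] != requesting_user:
        return None  # respond as if the record does not exist
    return prompt
```

Because the vulnerable lookup accepts sequential IDs, an attacker can enumerate them; the fixed version returns nothing for records the requester doesn't own, regardless of how the ID was obtained.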

Meta confirmed the fix and rewarded Hodkasia but declined to comment on user impact.

“Meta found no evidence of abuse and rewarded the researcher,” Meta spokesperson Ryan Daniels told TechCrunch.

The flaw hits as tech giants race to roll out AI despite privacy and security hurdles. Meta AI has had other troubles, like users accidentally sharing what they thought were private chats publicly.

The bug underscores how shaky the security plumbing behind fast-moving AI products can still be, and why early AI app launches are often bumpy.
