Description: A security researcher reported a vulnerability in Meta AI's deployed chatbot service that, under certain conditions, could allow an unauthorized user to view another user's prompts and AI-generated responses. The flaw reportedly involved guessable prompt IDs and insufficient server-side authorization checks. Meta reportedly fixed the issue in January 2025 and found no evidence of malicious exploitation, awarding the researcher a bug bounty.
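The description characterizes the flaw as an insecure direct object reference (IDOR): record identifiers that can be enumerated, combined with a missing server-side ownership check. The following is a minimal, hypothetical sketch of that pattern and its fix; all names and the in-memory datastore are illustrative assumptions, not Meta's actual code.

# Hypothetical illustration of the reported flaw class (IDOR):
# sequential, guessable prompt IDs plus a missing server-side
# ownership check. Names and data are illustrative only.

from dataclasses import dataclass

@dataclass
class Prompt:
    prompt_id: int        # sequential IDs are guessable by enumeration
    owner_user_id: str
    text: str
    ai_response: str

# In-memory stand-in for the server's datastore.
PROMPTS = {
    1001: Prompt(1001, "alice", "Draft my resignation letter", "..."),
    1002: Prompt(1002, "bob", "Summarize my meeting notes", "..."),
}

def get_prompt_vulnerable(prompt_id: int) -> Prompt:
    # Vulnerable pattern: returns whatever record the caller asks for,
    # with no check that the caller actually owns it.
    return PROMPTS[prompt_id]

def get_prompt_fixed(prompt_id: int, requesting_user_id: str) -> Prompt:
    # Fixed pattern: server-side authorization check before returning data.
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt.owner_user_id != requesting_user_id:
        # Same error for "not found" and "not yours", so attackers
        # cannot enumerate IDs to confirm which records exist.
        raise PermissionError("prompt not found")
    return prompt

if __name__ == "__main__":
    # A user logged in as "bob" enumerates IDs:
    print(get_prompt_vulnerable(1001).text)  # leaks alice's prompt
    try:
        get_prompt_fixed(1001, "bob")        # now rejected
    except PermissionError as e:
        print("blocked:", e)

The core of the reported fix, as described, is the second pattern: authorization enforced on the server for every lookup, rather than trusting the identifier supplied by the client.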
Editor Notes: Timeline notes: The bug report was filed on 12/26/2024. Meta reportedly paid Sandeep Hodkasia, the security researcher who discovered the vulnerability, a $10,000 bug bounty and implemented the fix on 01/24/2025. Public reporting on the incident emerged in mid-July 2025, and it was ingested as a new incident ID on 08/15/2025.
Entities
Alleged: Meta and Meta AI developed and deployed an AI system, which harmed Meta users and General public.
Alleged implicated AI system: Meta AI
Incident Stats
Incident ID
1172
Report Count
1
Incident Date
2024-12-26
Editors
Daniel Atherton
Incident Reports
Reports Timeline
Meta has fixed a security bug that allowed Meta AI chatbot users to access and view the private prompts and AI-generated responses of other users.
Sandeep Hodkasia, the founder of security testing firm AppSecure, exclusively told TechCrunch…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.