Description: A security researcher reported a vulnerability in Meta AI's deployed chatbot service that, under certain conditions, could allow an unauthorized user to view another user's prompts and AI-generated responses. The flaw reportedly involved guessable prompt IDs and insufficient server-side authorization checks. Meta reportedly fixed the issue in January 2025 and found no evidence of malicious exploitation, awarding the researcher a bug bounty.
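The described flaw matches the pattern commonly called an insecure direct object reference (IDOR): sequential or otherwise guessable resource IDs combined with a handler that never verifies ownership on the server side. The sketch below is purely illustrative; public reporting did not disclose implementation details, and all names here (Prompt, PROMPTS, get_prompt_vulnerable, get_prompt_fixed) are hypothetical stand-ins, not Meta's code.

```python
# Hypothetical sketch of an IDOR flaw of the kind described: guessable
# prompt IDs plus a missing server-side ownership check. Illustrative only.
from dataclasses import dataclass

@dataclass
class Prompt:
    prompt_id: int   # sequential, hence guessable
    owner_id: str
    text: str
    response: str

# In-memory stand-in for the server-side prompt store.
PROMPTS = {
    1001: Prompt(1001, "alice", "Draft my resignation letter", "..."),
    1002: Prompt(1002, "bob", "Summarize my meeting notes", "..."),
}

def get_prompt_vulnerable(requesting_user: str, prompt_id: int) -> Prompt:
    """Flawed handler: trusts the client-supplied ID and never checks
    ownership, so any user who guesses a valid ID gets another user's
    prompt and AI-generated response back."""
    return PROMPTS[prompt_id]

def get_prompt_fixed(requesting_user: str, prompt_id: int) -> Prompt:
    """Patched handler: server-side authorization check before returning."""
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt.owner_id != requesting_user:
        raise PermissionError("prompt not found or not owned by requester")
    return prompt

# "bob" guessing a neighboring ID succeeds against the vulnerable handler...
leaked = get_prompt_vulnerable("bob", 1001)  # returns alice's prompt
# ...but is rejected once the ownership check is in place.
try:
    get_prompt_fixed("bob", 1001)
except PermissionError:
    pass  # access correctly denied
```

As the sketch suggests, the essential fix is checking that the authenticated requester owns the record rather than trusting the client-supplied ID; making IDs non-sequential raises the guessing cost but is not a substitute for the server-side authorization check.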
Editor Notes: Timeline notes: The bug report was filed on 12/26/2024. Meta implemented the fix on 01/24/2025 and reportedly paid the security researcher who discovered the vulnerability, Sandeep Hodkasia, a $10,000 bug bounty. Reporting on the incident emerged in mid-July 2025, and it was ingested as a new incident ID on 08/15/2025.
Entities
Alleged: Meta and Meta AI developed and deployed an AI system, which harmed Meta users and General public.
Alleged implicated AI system: Meta AI
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
2.1. Compromise of privacy by obtaining, leaking or correctly inferring sensitive information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Privacy & Security
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Meta has fixed a security bug that allowed Meta AI chatbot users to access and view the private prompts and AI-generated responses of other users.
Sandeep Hodkasia, the founder of security testing firm AppSecure, exclusively told TechCrunch…

Meta has reportedly fixed a significant security flaw in its AI chatbot platform that could have exposed users' private chats and AI-generated content to hackers. The issue was flagged by ethical hacker Sandeep Hodkasia, founder of security…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.