AI Incident Database

Report 6681

Associated Incidents

Incident 1172 · 2 Reports
Meta AI Bug in Deployed Service Reportedly Allowed Potential Access to Other Users' Prompts and Responses

Meta AI had a privacy flaw that let users see other people's chats, hacker gets Rs 8.5 lakh for reporting it
indiatoday.in · 2025

Meta has reportedly fixed a significant security flaw in its AI chatbot platform that could have exposed users' private chats and AI-generated content to hackers. The issue was flagged by ethical hacker Sandeep Hodkasia, founder of security firm AppSecure. Hodkasia reported the vulnerability to Meta on 26 December 2024 and was awarded a $10,000 bug bounty (approximately Rs 8.5 lakh) for privately disclosing the bug.

According to TechCrunch, Hodkasia discovered a bug in Meta's AI platform related to how it handled the prompt editing feature. When users interact with Meta AI, they can edit or regenerate their previous prompts. Each prompt and its AI-generated response are assigned a unique identification number (ID) by Meta's servers. Hodkasia found that these IDs were not only visible through browser tools but were also easily guessable.
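To illustrate why sequential, predictable identifiers are dangerous, here is a minimal sketch; the ID values and scheme below are hypothetical illustrations, not Meta's actual implementation:

```python
import secrets

# With sequential numeric IDs, knowing your own ID makes
# your neighbours' IDs trivially guessable.
my_prompt_id = 4821775
likely_other_ids = [my_prompt_id - 1, my_prompt_id + 1]

# A common mitigation: assign each resource a random,
# high-entropy token that cannot be enumerated.
unguessable_id = secrets.token_urlsafe(16)  # ~128 bits of entropy
```

Random tokens alone are not a substitute for authorization checks, but they prevent the cheap "increment the ID" enumeration described here.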

Hodkasia explained that by manually changing the ID in his browser's network activity panel, he was able to access other users' private prompts and the responses generated by the AI. The real issue, he highlighted, was that Meta's system did not verify whether the person requesting to view the content was actually the one who had created it. This meant that any hacker could have written a simple script to automatically cycle through IDs and collect large amounts of sensitive content from other users without their authorisation.
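The missing server-side check is the classic insecure-direct-object-reference (IDOR) pattern. A minimal sketch of the ownership verification that such an endpoint should perform (all names and data here are hypothetical, not Meta's actual code):

```python
# Hypothetical in-memory store mapping prompt IDs to owner and content.
PROMPTS = {
    101: {"owner": "alice", "text": "draft my resume"},
    102: {"owner": "bob", "text": "plan a surprise party"},
}

def get_prompt(requesting_user: str, prompt_id: int) -> str:
    record = PROMPTS.get(prompt_id)
    if record is None:
        raise KeyError("prompt not found")
    # The crucial check the flawed endpoint reportedly skipped:
    # verify the requester actually owns the resource.
    if record["owner"] != requesting_user:
        raise PermissionError("caller does not own this prompt")
    return record["text"]
```

Without the ownership comparison, any authenticated caller who supplies a valid `prompt_id` receives another user's content, which is exactly the behavior Hodkasia reported.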

Hodkasia noted that the simplicity of the ID structure made it dangerously easy for anyone with basic technical skills to exploit the flaw. The vulnerability effectively bypassed all user-specific access checks, exposing private AI interactions to malicious actors.

Following Hodkasia's discovery, Meta addressed the issue by rolling out a fix on 24 January 2025 and confirmed to TechCrunch that their internal investigation found no evidence that the bug had been misused or exploited.

While the issue has been fixed, this incident has also raised concerns around the security and privacy of AI chatbots, especially as companies rush to build and launch AI-powered products to compete in the space. Meta also launched its AI assistant and dedicated app earlier this year to challenge rivals like ChatGPT. However, in the past few months, the AI platform has come under fire for several other privacy-related missteps. Some users previously reported that their AI conversations were publicly viewable, despite assuming they were private.

Many users reported incidents where their own posts or the private conversations of others appeared in Meta AI's public Discovery feed, raising serious privacy concerns. While Meta says that chats are private by default and only become public if users explicitly share them, users noted that the app's confusing settings and vague warnings have left many people unaware that their personal photos or prompts to Meta AI could end up visible to others.
