AI Incident Database

Incident 1471: Meta Internal AI Agent Reportedly Gave Advice That Allegedly Exposed Sensitive Data to Unauthorized Employees

Description: Reporting alleged that a Meta internal AI agent, purportedly similar to OpenClaw, posted inaccurate technical advice to an internal forum without approval. An employee reportedly followed the advice, allegedly causing an SEV1 incident in which sensitive company and user data became accessible to unauthorized employees for nearly two hours.


Entities

Alleged: Meta, Meta internal AI agent, and Agentic AI system developed and deployed an AI system, which harmed Meta and Meta users.
Alleged implicated AI systems: Meta internal AI agent and Agentic AI system

Incident Stats

Incident ID: 1471
Report Count: 1
Incident Date: 2026-03-18
Editors: Daniel Atherton

Incident Reports

Reports Timeline

A rogue AI led to a serious security incident at Meta
theverge.com · 2026

For almost two hours last week, Meta employees had unauthorized access to company and user data thanks to an AI agent that gave an employee inaccurate technical advice, as previously reported by The Information. Meta spokesperson Tracy Clay…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Uber Autonomous Cars Running Red Lights
Aug 2014 · 9 reports

Uber AV Killed Pedestrian in Arizona
Mar 2018 · 25 reports

A Collection of Tesla Autopilot-Involved Crashes
Jun 2016 · 22 reports


2026 - AI Incident Database
