AI Incident Database

Incident 1412: CodeWall's Autonomous Agent Reportedly Obtained Unauthorized Access to McKinsey’s Lilli AI Platform Database

Description: CodeWall reported that its autonomous agent exploited vulnerabilities in McKinsey's Lilli AI platform and obtained unauthorized read and write access to production systems, allegedly exposing internal chat messages, files, user accounts, and prompts. McKinsey confirmed the vulnerability and said it fixed the issue within hours, adding that it found no evidence that client data or confidential client information was accessed.
Editor Notes: Treated as an incident rather than an issue because the report alleges a realized unauthorized access event against McKinsey's live Lilli production system, with actual internal data and prompt-layer assets reportedly exposed, rather than a merely theoretical or unexploited vulnerability.

Entities

Alleged: McKinsey & Company, CodeWall, Retrieval-augmented generation (RAG) system, Lilli, CodeWall autonomous offensive agent, AI-powered enterprise search system, and AI document analysis system developed and deployed an AI system, which harmed McKinsey & Company, McKinsey & Company employees, McKinsey & Company consultants, and Lilli users.
Alleged implicated AI systems: Retrieval-augmented generation (RAG) system, Lilli, CodeWall autonomous offensive agent, AI-powered enterprise search system, and AI document analysis system

Incident Stats

Incident ID: 1412
Report Count: 1
Incident Date: 2026-02-28
Editors: Daniel Atherton

Incident Reports

Reports Timeline

How We Hacked McKinsey's AI Platform
codewall.ai · 2026
McKinsey & Company, the world's most prestigious consulting firm, built an internal AI platform called Lilli for its 43,000+ employees. Lilli is a purpose-built system: chat, document analysis, RAG over decades of proprietary research…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

Amazon’s Experimental Hiring Tool Allegedly Displayed Gender Bias in Candidate Rankings
Aug 2016 · 34 reports

Microsoft's TayBot Allegedly Posts Racist, Sexist, and Anti-Semitic Content to Twitter
Mar 2016 · 28 reports

Wikipedia Vandalism Prevention Bot Loop
Feb 2017 · 6 reports

2024 - AI Incident Database
