AI Incident Database

Incident 1174: Microsoft Copilot Reportedly Able to Access Cached Data from Since-Private GitHub Repositories

Description: Lasso Security reported that Microsoft Copilot could return content from GitHub repositories that had briefly been public but were later set to private or deleted. Lasso attributed this to Bing's caching system, which retained "zombie data" from more than 20,000 repositories. The cached content allegedly included sensitive information such as access keys, tokens, and internal packages. Microsoft reportedly classified the issue as low severity and applied only partial mitigations.
Editor Notes: Timeline notes: This incident is dated 02/26/2025 because the bulk of reporting on Lasso Security's investigation emerged at that time (Lasso's own report is dated 02/27/2025). However, Lasso cites an August 2024 LinkedIn post by Zachary Horton that identified the problem months before significant press coverage: https://www.linkedin.com/posts/zak-horton_github-ai-privacy-activity-7225764812117487616-YcGP. The incident ID was created 08/15/2025.
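
The reported exposure follows a familiar pattern: secrets committed to a repository that is public even briefly can be indexed and cached by third parties, after which making the repository private no longer contains them. Below is a minimal sketch of the kind of pre-publication secret scan that mitigates this class of leak. The regexes cover a few well-known token formats (AWS access key IDs, classic GitHub personal access tokens, PEM private-key headers) and are illustrative, not exhaustive; production tools such as gitleaks or GitHub's own secret scanning go much further.

```python
import re
from pathlib import Path

# Illustrative patterns for a few well-known credential formats.
# Real scanners (e.g., gitleaks, trufflehog) ship far larger rule sets.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "PEM private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Walk a working tree and return (path, line number, pattern name) hits."""
    hits = []
    for path in Path(root).rglob("*"):
        # Skip directories and anything under .git
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, name))
    return hits

if __name__ == "__main__":
    for path, lineno, name in scan_tree("."):
        print(f"{path}:{lineno}: possible {name}")
```

Note that a working-tree scan misses secrets buried in git history, and, as this incident shows, even deletion does not retract a secret that was ever cached; rotating the credential is the only reliable remediation.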

Entities

Alleged: Microsoft, GitHub, Microsoft Copilot, and Bing developed and deployed an AI system, which harmed GitHub users, GitHub repositories, and GitHub.
Alleged implicated AI systems: GitHub, Microsoft Copilot, and Bing

Incident Stats

Incident ID: 1174
Report Count: 2
Incident Date: 2025-02-26
Editors: Daniel Atherton

Incident Reports

Reports Timeline

  • Thousands of exposed GitHub repositories, now private, can still be accessed through Copilot (techcrunch.com)
  • Wayback Copilot: Using Microsoft's Copilot to Expose Thousands of Private GitHub Repositories (lasso.security)

Thousands of exposed GitHub repositories, now private, can still be accessed through Copilot
techcrunch.com · 2025

Security researchers are warning that data exposed to the internet, even for a moment, can linger in online generative AI chatbots like Microsoft Copilot long after the data is made private.

Thousands of once-public GitHub repositories from…
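
The persistence TechCrunch describes applies to conventional web archives as well as AI-adjacent caches. As a quick illustration, the sketch below checks whether a once-public URL still has a retrievable snapshot, using the Internet Archive's documented availability API. Bing's cache, the mechanism actually at issue here, lacks a comparably documented public API, so this is an analogous check, not Lasso's method; the repository URL is a placeholder.

```python
import json
import urllib.parse
import urllib.request

def wayback_snapshot(url: str) -> str | None:
    """Return the URL of the closest archived snapshot, if one exists.

    Uses the Internet Archive's availability API as an analogous
    persistence check; Bing's cache is not publicly queryable this way.
    """
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot else None

if __name__ == "__main__":
    # Placeholder repository URL.
    print(wayback_snapshot("https://github.com/microsoft/vscode"))
```

A non-None result means a copy of the page survives independently of the original repository's current visibility, which is exactly the failure mode the incident documents.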

Wayback Copilot: Using Microsoft's Copilot to Expose Thousands of Private GitHub Repositories
lasso.security · 2025

In August 2024, we encountered a LinkedIn post claiming that OpenAI was training on, and exposing, data from private GitHub repositories. Given the seriousness of this claim, our research team immediately set out to investigate.

A quick se…
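
Lasso's write-up describes checking, at scale, whether repositories surfaced by cached search results were still publicly reachable. Below is a minimal sketch of that first step, using only the documented unauthenticated GitHub REST endpoint GET /repos/{owner}/{repo}; note that the API returns 404 for both private and deleted repositories, so the two states are indistinguishable from outside. The example repository name is a placeholder.

```python
import json
import urllib.error
import urllib.request

def repo_visibility(owner: str, repo: str) -> str:
    """Classify a GitHub repository as seen by an unauthenticated client.

    GET /repos/{owner}/{repo} returns 404 for private *and* deleted
    repositories, so those two states cannot be told apart from outside.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}"
    # GitHub's API rejects requests that lack a User-Agent header.
    req = urllib.request.Request(url, headers={"User-Agent": "visibility-check"})
    try:
        with urllib.request.urlopen(req) as resp:
            json.load(resp)  # a 200 with a JSON body means the repo is public
            return "public"
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return "private-or-deleted"
        raise

if __name__ == "__main__":
    # Placeholder input; Lasso ran this kind of check at scale against
    # repository URLs recovered from cached search results.
    print(repo_visibility("microsoft", "vscode"))
```

Unauthenticated requests are rate-limited to 60 per hour, so a scan across thousands of repositories would need authentication or pacing; Lasso's actual tooling, including the comparison against Bing's cached copies, is not reproduced here.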

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

  • Fake LinkedIn Profiles Created Using GAN Photos (Feb 2022 · 4 reports)
  • Bug in Facebook’s Anti-Spam Filter Allegedly Blocked Legitimate Posts about COVID-19 (Mar 2020 · 1 report)
  • Biased Sentiment Analysis (Oct 2017 · 7 reports)

