AI Incident Database
Entities

Hugging Face

Incidents involved as Developer

Incident 1220 · 2 Reports
LAMEHUG Malware Reportedly Integrates Large Language Model for Real-Time Command Generation in a Purported APT28-Linked Cyberattack

2025-07-10

Ukraine's CERT-UA and Cato CTRL reported LAMEHUG, the first known malware to integrate a large language model (Qwen2.5-Coder-32B-Instruct via Hugging Face) for real-time command generation. Attributed with moderate confidence to APT28 (Fancy Bear), the malware reportedly targeted Ukrainian officials through phishing emails. The LLM is reported to have dynamically generated reconnaissance and data-exfiltration commands executed on infected systems.
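
The technically notable point in these reports is architectural: rather than carrying hard-coded commands, LAMEHUG is described as sending prompts at runtime to a publicly hosted model and executing what the model returns, so the on-host binary holds only prompts and an API token while the command logic effectively sits behind the Hugging Face inference endpoint. For reference, the sketch below shows the ordinary, publicly documented kind of hosted-inference call the reports describe; the model ID is the one named in the reports, while the endpoint usage, token placeholder, and benign prompt are illustrative assumptions, not artifacts recovered from the malware.

```python
# Minimal sketch (illustrative, not taken from the incident reports): a standard
# text-generation request to Hugging Face's serverless Inference API.
# The model ID matches the one named in the reporting; the token and prompt
# are placeholders/assumptions for illustration only.
import requests

API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"
HEADERS = {"Authorization": "Bearer hf_xxx"}  # any valid Hugging Face API token

def generate_text(prompt: str, max_new_tokens: int = 128) -> str:
    """Send a prompt to the hosted model and return its generated text."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=30)
    response.raise_for_status()
    # The serverless Inference API returns a list of {"generated_text": ...} objects.
    return response.json()[0]["generated_text"]

if __name__ == "__main__":
    print(generate_text("Summarize in one sentence what a system-inventory script does."))
```

Because such requests are ordinary authenticated HTTPS calls to a public API, the reported design blurs the line between legitimate LLM integrations and malicious ones, which is presumably why the Hugging Face API platform itself is listed among the implicated systems below rather than only the model.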

Incidents implicated systems

Incident 996 · 3 Reports
Meta Allegedly Used Books3, a Dataset of 191,000 Pirated Books, to Train LLaMA AI

2020-10-25

Meta and Bloomberg allegedly used Books3, a dataset containing 191,000 pirated books, to train their AI models, including LLaMA and BloombergGPT, without author consent. Lawsuits from authors such as Sarah Silverman and Michael Chabon claim this constitutes copyright infringement. Books3 includes works from major publishers like Penguin Random House and HarperCollins. Meta argues its AI outputs are not "substantially similar" to the original books, but legal challenges continue.

Incident 950 · 2 Reports
NullBulge's AI-Powered Malware Allegedly Compromises Disney Employee and Internal Data

2024-07-11

A Disney employee, Matthew Van Andel, reportedly downloaded AI-powered malware allegedly developed by the cybercriminal group NullBulge, resulting in a major cybersecurity breach. Hackers purportedly accessed Disney's Slack system, exposing 44 million internal messages, employee and customer data, and financial records. NullBulge also reportedly leaked Van Andel’s personal financial information, leading to identity theft and his eventual termination.

Related Entities
Other entities that are related to the same incidents. For example, if this entity is the developer of an incident and a different entity is the deployer, that deployer is listed here as a related entity.
 

Incident 950 (2 Reports): NullBulge's AI-Powered Malware Allegedly Compromises Disney Employee and Internal Data
  • Involved as both Developer and Deployer: NullBulge
  • Harmed by: Matthew Van Andel, Disney employees, Disney
  • Implicated systems: GitHub, Reddit, BeamNG, Slack, Discord, 1Password

Incident 996 (3 Reports): Meta Allegedly Used Books3, a Dataset of 191,000 Pirated Books, to Train LLaMA AI
  • Involved as both Developer and Deployer: Various generative AI developers, Meta, EleutherAI, Bloomberg
  • Involved as Developer: The Pile, Shawn Presser
  • Harmed by: Zadie Smith, Writers, Verso, Stephen King, Sarah Silverman, Richard Kadrey, Publishers found in Books3, Penguin Random House, Oxford University Press, Over 170,000 authors found in Books3, Michael Pollan, Margaret Atwood, Macmillan, HarperCollins, General public, Creative industries, Christopher Golden, Authors
  • Implicated systems: The Pile, LLaMA, GPT-J, Books3, BloombergGPT, Bibliotik

Incident 1220 (2 Reports): LAMEHUG Malware Reportedly Integrates Large Language Model for Real-Time Command Generation in a Purported APT28-Linked Cyberattack
  • Involved as Deployer: APT28, Fancy Bear
  • Involved as Developer: Alibaba
  • Harmed by: Government of Ukraine, Ukrainian government ministries, Ukrainian government officials, Public sector information systems, National cybersecurity infrastructure of Ukraine, State institutions targeted by espionage operations
  • Implicated systems: Qwen2.5-Coder-32B-Instruct, Hugging Face API platform, LAMEHUG malware family, PyInstaller-compiled Python executables, Flux AI image generation API, stayathomeclasses[.]com exfiltration endpoint, 144[.]126[.]202[.]227 SFTP server
