Various generative AI developers
Incidents involved as both Developer and Deployer
Incident 996 · 3 Reports
Meta Allegedly Used Books3, a Dataset of 191,000 Pirated Books, to Train LLaMA AI
2020-10-25
Meta and Bloomberg allegedly used Books3, a dataset containing 191,000 pirated books, to train their AI models, including LLaMA and BloombergGPT, without author consent. Lawsuits from authors such as Sarah Silverman and Michael Chabon claim this constitutes copyright infringement. Books3 includes works from major publishers like Penguin Random House and HarperCollins. Meta argues its AI outputs are not "substantially similar" to the original books, but legal challenges continue.
Incidents involved as Developer
Incident 994 · 7 Reports
AI-Enabled Organized Crime Expands Across Europe
2025-03-18
Europol’s EU Serious and Organised Crime Threat Assessment (EU-SOCTA) 2025 warns that AI is accelerating the growth of organized crime throughout Europe. Criminal networks are leveraging AI for cyber fraud, ransomware, money laundering, and child exploitation, while AI-powered social engineering and automation are making criminal operations more scalable and harder to detect.
Incident 1037 · 5 Reports
Microsoft Reportedly Blocks 1.6 Million Bot Signup Attempts Per Hour Amid Global AI-Driven Fraud Surge
2025-04-16
Between April 2024 and April 2025, Microsoft reportedly blocked 1.6 million bot signup attempts per hour and disrupted $4 billion in fraud attempts linked to AI-enhanced scams. The company's Cyber Signals report details how generative AI is being used to fabricate realistic e-commerce sites, job offers, customer service bots, and phishing lures. Fraud actors now automate mass deception campaigns with fake reviews, deepfakes, and cloned brand domains at unprecedented scale and speed.
Incident 1060 · 3 Reports
Institute for Strategic Dialogue Reports Russian-Aligned Operation Overload Using Purported AI-Generated Impersonations Across January to March 2025
2025-05-06
Researchers at the Institute for Strategic Dialogue (ISD) report that Operation Overload (also known as Matryoshka or Storm-1679) is a Russian-aligned campaign leveraging purported AI-generated voiceovers and visual impersonations to spread false or inflammatory content across platforms. The campaign reportedly involved at least 135 discrete posts analyzed by ISD in early 2025 targeting institutions and individuals, including one purported viral video claiming USAID funded celebrity trips to Ukraine (see Incident 1061).
Incident 1280 · 3 Reports
Reported Use of AI Voice and Identity Manipulation in the Ongoing 'Phantom Hacker' Fraud Scheme
2023-10-20
Reports allege that updated variants of the long-running "Phantom Hacker" scam use purported AI tools to enhance impersonation, including voice cloning, spoofed caller ID, and realistic digital artifacts. Fraudsters reportedly pose as tech support, bank staff, and government officials in a three-phase scheme that pressures mostly older adults to transfer funds to accounts controlled by scammers.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Truth
Affected by Incidents
- Incident 1060 · 3 Reports
Institute for Strategic Dialogue Reports Russian-Aligned Operation Overload Using Purported AI-Generated Impersonations Across January to March 2025
- Incident 1283 · 1 Report
Purported AI-Enabled Pro-Russian Influence Campaign Centered on Burkina Faso's Ibrahim Traoré and Disseminated Across African Media
Journalists
Affected by Incidents
- Incident 1060 · 3 Reports
Institute for Strategic Dialogue Reports Russian-Aligned Operation Overload Using Purported AI-Generated Impersonations Across January to March 2025
- Incident 1283 · 1 Report
Purported AI-Enabled Pro-Russian Influence Campaign Centered on Burkina Faso's Ibrahim Traoré and Disseminated Across African Media
Democracy
Affected by Incidents
- Incident 1060 · 3 Reports
Institute for Strategic Dialogue Reports Russian-Aligned Operation Overload Using Purported AI-Generated Impersonations Across January to March 2025
- Incident 1283 · 1 Report
Purported AI-Enabled Pro-Russian Influence Campaign Centered on Burkina Faso's Ibrahim Traoré and Disseminated Across African Media
National security and intelligence stakeholders
Affected by Incidents
- Incident 1060 · 3 Reports
Institute for Strategic Dialogue Reports Russian-Aligned Operation Overload Using Purported AI-Generated Impersonations Across January to March 2025
- Incident 1283 · 1 Report
Purported AI-Enabled Pro-Russian Influence Campaign Centered on Burkina Faso's Ibrahim Traoré and Disseminated Across African Media