Various generative AI developers
Incidents involved as developer and deployer
Incident 996 · 2 Reports
Meta Allegedly Used Books3, a Dataset of 191,000 Pirated Books, to Train LLaMA AI
2020-10-25
Meta and Bloomberg allegedly used Books3, a dataset containing 191,000 pirated books, to train their AI models, including LLaMA and BloombergGPT, without author consent. Lawsuits from authors such as Sarah Silverman and Michael Chabon claim this constitutes copyright infringement. Books3 includes works from major publishers like Penguin Random House and HarperCollins. Meta argues its AI outputs are not "substantially similar" to the original books, but legal challenges continue.
Incidents involved as Developer
Incident 994 · 7 Reports
AI-Enabled Organized Crime Expands Across Europe
2025-03-18
Europol’s EU Serious and Organised Crime Threat Assessment (EU-SOCTA) 2025 warns that AI is accelerating the growth of organized crime throughout Europe. Criminal networks are leveraging AI for cyber fraud, ransomware, money laundering, and child exploitation, while AI-powered social engineering and automation are making criminal operations more scalable and harder to detect.
Incident 1037 · 5 Reports
Microsoft Reportedly Blocks 1.6 Million Bot Signup Attempts Per Hour Amid Global AI-Driven Fraud Surge
2025-04-16
Between April 2024 and April 2025, Microsoft reportedly blocked 1.6 million bot signup attempts per hour and disrupted $4 billion in fraud attempts linked to AI-enhanced scams. The company's Cyber Signals report details how generative AI is being used to fabricate realistic e-commerce sites, job offers, customer service bots, and phishing lures. Fraud actors now automate deceptive campaigns, complete with fake reviews, deepfakes, and cloned brand domains, at unprecedented scale and speed.
Incident 1060 · 3 Reports
Institute for Strategic Dialogue Reports Russian-Aligned Operation Overload Using Purported AI-Generated Impersonations Across January to March 2025
2025-05-06
Researchers at the Institute for Strategic Dialogue (ISD) report that Operation Overload (also known as Matryoshka or Storm-1679) is a Russian-aligned campaign leveraging purported AI-generated voiceovers and visual impersonations to spread false or inflammatory content across platforms. The campaign reportedly involved at least 135 discrete posts, analyzed by ISD in early 2025, targeting institutions and individuals, including one purported viral video claiming USAID funded celebrity trips to Ukraine (see Incident 1061).