Journalists
Incidents Harmed By
Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 1069 · 14 Reports
Purported Graphite Spyware Linked to Paragon Solutions Allegedly Deployed Against Journalists and Civil Society Workers
2025-01-31
Researchers at Citizen Lab and Censys reportedly identified spyware infections involving Graphite, a tool attributed to Israeli firm Paragon Solutions. The spyware was allegedly deployed against civil society actors, including journalists and aid workers, through a zero-click WhatsApp exploit. WhatsApp notified over 90 targeted individuals. Evidence reportedly suggests deployments in multiple democratic countries.
Incident 997 · 4 Reports
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
2023-02-28
Court records reveal that Meta employees allegedly discussed pirating books to train LLaMA 3, citing cost and speed concerns with licensing. Internal messages suggest Meta accessed LibGen, a repository of over 7.5 million pirated books, with apparent approval from Mark Zuckerberg. Employees allegedly took steps to obscure the dataset’s origins. OpenAI has also been implicated in using LibGen.
Incident 1060 · 3 Reports
Institute for Strategic Dialogue Reports Russian-Aligned Operation Overload Using Purported AI-Generated Impersonations Across January to March 2025
2025-05-06
Researchers at the Institute for Strategic Dialogue (ISD) report that Operation Overload (also known as Matryoshka or Storm-1679) is a Russian-aligned campaign leveraging purported AI-generated voiceovers and visual impersonations to spread false or inflammatory content across platforms. The campaign reportedly involved at least 135 discrete posts analyzed by ISD in early 2025 targeting institutions and individuals, including one purported viral video claiming USAID funded celebrity trips to Ukraine (see Incident 1061).
Related Entities
Other entities that are related in the same incident. For example, if the developer of an incident is this entity and the deployer is another entity, they are marked as related entities.
OpenAI
Incidents involved in as both Developer and Deployer
- Incident 997 · 4 Reports
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
- Incident 995 · 2 Reports
The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content