Russian government
Incidents involved in as both Developer and Deployer
Incident 602 · 7 Reports
Russia Using Artificial Intelligence in Disinformation Campaigns to Erode Western Support for Ukraine
2023-10-02
The Russian government has been stepping up its foreign influence campaigns by using artificial intelligence and emerging technologies to spread disinformation and sow distrust in policies supportive of Ukraine. Part of the strategy involves influence laundering: disseminating its messages to the American public via allies inside nominally independent organizations, according to a recent declassified analysis. This incident is being tracked as an evolving situation.
Incidents involved as Developer
Incident 644 · 6 Reports
State-Sponsored Hackers Escalate Phishing Attacks Using Artificial Intelligence
2024-02-18
State-sponsored hackers from North Korea, Iran, Russia, and China are reportedly leveraging artificial intelligence to conduct sophisticated phishing and social engineering attacks. They target global defense, cybersecurity, and cryptocurrency sectors, aiming to steal sensitive information and, in the case of North Korea, cryptocurrencies to help fund its illicit nuclear program.
Incidents involved as Deployer
Incident 774 · 2 Reports
Covert AI Influence Operations Linked to Russia, China, Iran, and Israel, OpenAI Reveals
2024-05-30
In a report released by OpenAI, the company described how its generative AI tools were misused by state actors and private companies in Russia, China, Iran, and Israel to conduct covert influence campaigns aimed at manipulating public opinion and geopolitical narratives.
Incident 674 · 1 Report
Manipulated Media via AI Disinformation and Deepfakes in 2024 Elections Erode Trust Across More Than 50 Countries
2024-03-14
AI-driven election disinformation is escalating globally, leveraging easy-to-use generative AI tools to create convincing deepfakes that mislead voters. These tools have made it simple for individuals to generate fake content, eroding trust in elections and manipulating voter perceptions. Evidence has been documented in incidents across the U.S., Moldova, Slovakia, Bangladesh, and Taiwan.
Incident 702 · 1 Report
Disinformation Deepfake Circulates of State Department Spokesman Matthew Miller Suggesting Belgorod Can Be Attacked with U.S. Weapons
2024-05-31
A deepfake video of State Department spokesman Matthew Miller falsely suggested Belgorod was a legitimate target for Ukrainian strikes. The disinformation spread on Telegram and Russian media, misleading the public and inflaming tensions. U.S. officials condemned the deepfake. The incident illustrates the threat of AI-powered disinformation and hybrid attacks.