AI Incident Database

Report 4887

Associated Incidents

Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation

Russian disinformation has 'infected' all popular AI chatbots, study says
mezha.media · 2025

A Russian disinformation network is actively manipulating AI chatbots to spread fake news and propaganda online, according to a new report from NewsGuard. The research shows that pro-Kremlin narratives are distorting the output of artificial intelligence systems, affecting millions of users around the world, Axios reports.

The report identifies Pravda, a Moscow-based disinformation network, as the main force behind this campaign. Its goal is not to directly persuade people, but to influence the responses of AI chatbots by flooding search engines and web crawlers with a large amount of disinformation content.

According to NewsGuard:

  • In 2024, the Russian network Pravda published over 3.6 million articles, which made their way into popular AI chatbots from companies such as Microsoft, Google, OpenAI, xAI, Meta, Anthropic, Mistral, Perplexity, and You.com;
  • The network has spread at least 207 proven fakes, the majority of which are related to Ukraine;
  • A third of the responses from leading chatbots reproduced arguments originally published by the Pravda network, confirming the vulnerability of AI models to manipulation.

By dominating search results, the pro-Kremlin campaign distorts the training data used by large language models (LLMs), influencing how AI summarizes news and responds to user queries.

The network launched in April 2022, following Russia's full-scale invasion of Ukraine. Since then, it has expanded to 49 countries and operates in dozens of languages. According to NewsGuard:

  • 150 websites have been identified as part of the network, of which 40 have domain names associated with regions of Ukraine;
  • Some sites focus on specific topics rather than regional disinformation;
  • Pravda does not create its own content, but rather aggregates materials from Russian state media and pro-Kremlin bloggers, amplifying their influence.

The report confirms concerns about AI's role in spreading disinformation. With the rise of AI-generated content, attackers may be deliberately flooding the internet with propaganda in order to "train" models on it.

NewsGuard also cites a January 2025 speech by John Mark Dougan, an American who fled to Moscow and became a pro-Kremlin propagandist. At a conference of Russian government officials, he openly declared: "If we push Russian narratives from a Russian perspective, we can actually change the world's AI."

The study comes amid reports of a rollback of US efforts to counter Russian cyberattacks. Experts warn that disinformation and unintentional AI errors could have serious consequences, especially as chatbots become a primary source of information for millions of people.

As AI systems are increasingly used to shape public opinion, the Pravda network's strategy could have long-term implications for global information security, influencing political processes, public opinion, and international relations.
