AI Incident Database

Report 4884

Associated Incidents

Incident 701 · 36 Reports
American Asylum Seeker John Mark Dougan in Russia Reportedly Spreads Disinformation via AI Tools and Fake News Network

Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites

Popular Chatbots Spout Russian Misinformation, Finds Study
bankinfosecurity.com · 2024

OpenAI, Meta and 8 Other Chatbots Use Disinformation Network as Source
Rashmi Ramesh (rashmiramesh_) · June 19, 2024

Popular artificial intelligence chatbots are rife with Russian disinformation, warns NewsGuard, the rating system for news and information websites.

Researchers at NewsGuard entered prompts into 10 chatbots, including OpenAI's ChatGPT-4, Elon Musk's Grok, and Mistral, and found that about one-third of the responses contained disinformation culled from a network of fake local news sites and YouTube videos created by John Mark Dougan, a U.S. fugitive who obtained political asylum in Russia.

Microsoft's Copilot, Meta AI, Anthropic's Claude and Google Gemini were also part of the study.

The company tested nearly 600 prompts based on 19 false narratives linked to the Russian disinformation network, such as false claims about corruption by Ukrainian President Volodymyr Zelenskyy.

The chatbots regurgitated misinformation found on Dougan's sites as fact, such as a supposed wiretap discovered at former President Donald Trump's Mar-a-Lago residence, said NewsGuard.

The chatbots failed to recognize that sites such as "The Boston Times" or "The Houston Post" are Russian propaganda fronts, likely created with the assistance of AI. "This unvirtuous cycle means falsehoods are generated, repeated, and validated by AI platforms," NewsGuard said.

The company said it did not score each chatbot for the amount of disinformation it pushed, since the issue was "pervasive across the entire AI industry rather than specific to a certain large language model."

The findings come at a time when people have begun to rely on sources such as social media influencers and AI chatbots for quick, customized information.

AI disinformation has been rife this election year, as bad actors weaponize the technology to generate video and audio deepfakes to spread misinformation (see: APT Hacks and AI-Altered Leaks Pose Biggest Election Threats).

Social media companies and AI giants have pledged to curb misuse of the technology to propagate false information that could influence elections. OpenAI recently found that threat actors conducting covert influence campaigns also relied on AI chatbots.

