AI Incident Database

Report 5524

Associated Incidents

Incident 1141 · 23 Reports
Purported AI Voice Cloning Used to Impersonate Secretary of State Marco Rubio

Had to happen: Deepfake Marco Rubio calling U.S. officials
mindmatters.ai · 2025

From Fox News, we learn this morning that an imposter used AI technology to pretend to be U.S. Secretary of State Marco Rubio and communicate with elected and appointed officials. Greg Norman reports,

The mystery individual posing as one of President Donald Trump's Cabinet members reached out to foreign ministers, a U.S. governor and a member of Congress with AI-assisted voice and text messages that mimicked Rubio's voice and writing style, the Washington Post reported, citing a senior U.S. official and State Department cable...

The cable reportedly said the impersonation act started in mid-June when someone created a Signal account with the display name Marco.Rubio@state.gov --- which isn't Rubio's actual email address.

The July 3 cable reportedly added that the fake Rubio "contacted at least five non-Department individuals, including three foreign ministers, a U.S. governor, and a U.S. member of Congress." ("State Department investigating Rubio AI impersonator who contacted US, foreign officials," Fox News, July 9, 2025)

A senior U.S. official described the attempt as "not very sophisticated." However, if past experience is any guide, such attempts at deep faking will become more sophisticated over time.

Deep Media, a deepfake detection company, offers comparatively up-to-date information (February 17, 2025) on the rapid growth of deepfakes: "Misinformation Campaigns: Political deepfakes surged 300 percent in the last two years, impacting elections and public discourse."

Unfortunately, the pointers offered for detecting them without electronic assistance don't sound very reassuring. For example: "Deepfake voices often have slight robotic distortions or unnatural intonations" and "Watch for audio that does not perfectly align with lip movements."

Sure, and how long before the fakers correct those faults?

Fortunately, electronic methods can provide insight:

Reverse image and video searches can reveal whether the content has been altered or AI-generated.

AI detection tools analyze digital fingerprints for synthetic patterns. ("How to Detect Deepfakes in 2025: The Growing Challenge of AI-Generated Manipulation")

Such giveaways would be harder to simply "correct" because they are built into the nature of the technology. And, not surprisingly, Deep Media offers a detection tool, warning that "As deepfakes become more advanced, traditional verification methods will not be enough. In 2025, real-time AI detection will become the industry standard for authentication."
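To make the "digital fingerprints" idea a little more concrete: audio-forensics pipelines typically compute many low-level signal statistics and feed them to a trained classifier. The sketch below is purely illustrative (it is not Deep Media's method, nor a working detector): it computes spectral flatness, one such statistic, which distinguishes noise-like from strongly tonal audio. The function name and thresholds are this example's own.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray, eps: float = 1e-12) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum.

    Higher values mean a flatter, more noise-like spectrum; values
    near zero mean strongly tonal content. Statistics like this are
    one kind of weak feature a forensic classifier might combine
    with many others.
    """
    power = np.abs(np.fft.rfft(signal)) ** 2 + eps  # eps avoids log(0)
    geometric_mean = np.exp(np.mean(np.log(power)))
    arithmetic_mean = np.mean(power)
    return float(geometric_mean / arithmetic_mean)

rng = np.random.default_rng(0)
sr = 16_000                               # 1 second at 16 kHz
t = np.arange(sr) / sr
noise = rng.standard_normal(sr)           # noise-like signal
tone = np.sin(2 * np.pi * 440 * t)        # pure 440 Hz tone

print(spectral_flatness(noise))           # markedly higher than the tone's
print(spectral_flatness(tone))            # near zero
```

A single statistic like this proves nothing on its own; the point is only that synthetic audio leaves measurable traces in the signal that are harder to "correct" than a lip-sync glitch.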

That's likely a correct assessment, and Deep Media will soon have plenty of competitors in a world where seeing is no longer believing, as the proverb used to say.

Deepfake detection should be a standard offering in computer systems.

