AI Incident Database

Report 5520

Associated Incidents

Incident 1141 · 23 Reports
Purported AI Voice Cloning Used to Impersonate Secretary of State Marco Rubio

Deepfake criminals impersonate Marco Rubio to uncover government secrets
malwarebytes.com · 2025

Deepfake attacks aren't just for recruitment and banking fraud; they've now reached the highest levels of government. News emerged this week of an AI-powered attack that impersonated US Secretary of State Marco Rubio. Authorities don't know who was behind the incident.

A US State Department cable seen by the Washington Post warned that someone impersonated Rubio's voice and writing style in voice and text messages on the Signal messaging app. The attacker reportedly tried to gain access to information or accounts by contacting multiple government officials in Rubio's name. Their targets included three foreign ministers, a US governor, and a US member of Congress, the cable said.

The attacker created a Signal account with the display name 'Marco.Rubio@state.gov' and invited targets to communicate on Signal.

The AI factor in the attacks likely refers to deepfakes. These are a form of digital mimicry in which attackers use audio or visual footage of a person to create convincing fake audio or images of them. Attackers have even created fake videos of their targets, using them for deepfake pornography or to impersonate businesspeople.

The Rubio deepfake isn't the first time that impersonators have targeted government officials. In May, someone impersonated White House Chief of Staff Susie Wiles in calls and texts to her contacts. Several failed to spot the scam initially and interacted with the attacker as though the conversations were legitimate.

This incident wasn't Rubio's fault; attacks like these are becoming commonplace as scammers make use of popular messaging tools. Signal is apparently a widely used app in the executive branch, to the point that Director of National Intelligence Tulsi Gabbard said it came pre-installed on government devices.

This Signal usage culminated in then-national security advisor Mike Waltz accidentally adding a journalist to a group Signal chat containing discussions of plans to bomb Yemen. He is no longer the national security advisor. Misuse of the app extends back to the previous administration, when the Pentagon was forced to release a memo about it.

Why should you worry about such attacks on government high-ups? For one thing, it's scary to think that foreign states might actually make off with sensitive information this way. But it also shows how easy it can be to impersonate someone with a deepfake. Attackers can mount audio attacks with just a few snippets of a person's voice to train an algorithm on.

You'd be suspicious if Pamela Bondi entered your book club chat, but if someone called an elderly relative pretending to be you, claiming you'd been in an accident or begging for ransom money because you'd been kidnapped, would they fall for it? Several have.

Strange though it may seem, modern threats demand some old-school protections. We recommend sharing a family password with close members, who can then request it to confirm each other's identity. Never send this password anywhere; keep it to yourselves and agree on it in person.

But even family passwords won't stop your grandma being targeted in deepfake romance scams by fake Mark Ruffalos and Brad Pitts. A quiet chat to explain the threats might avert such disasters, along with a regular check-in to ensure your less tech-savvy loved ones are safe and sound.

