AI Incident Database

Report 5525

Associated Incidents

Incident 1141 · 23 Reports
Purported AI Voice Cloning Used to Impersonate Secretary of State Marco Rubio

Marco Rubio imposter: experts say AI deepfakes of voice and writing styles are 'dangerous escalation'
thenationalnews.com · 2025

Artificial intelligence was used to impersonate the voice of US Secretary of State Marco Rubio and place calls to foreign ministers and American officials, the State Department confirmed on Tuesday, raising further concerns over the growing use of deepfakes.

An unknown actor pretending to be Mr Rubio created an account on messaging app Signal in mid-June, using the display name "marco.rubio@state.gov" to contact at least three foreign ministers, a US senator and a governor, according to a memorandum first reported by The Washington Post. The imposter used AI to mimic Mr Rubio's voice and writing style.

Spokeswoman Tammy Bruce said the State Department was monitoring and addressing the situation.

"The department takes seriously its responsibility to safeguard its information and continuously take steps to improve the department's cyber-security posture to prevent future incidents for security reasons," Ms Bruce said.

The attempt to impersonate Mr Rubio highlights the rise of "deepfake" content, in which AI is used to manipulate media including video footage, images and audio recordings.

During the New Hampshire Democratic presidential primary last year, residents reported receiving deepfake robocalls imitating Joe Biden, who was running for re-election at the time. And in Slovakia's 2023 election, a viral deepfake audio recording purported to feature a candidate detailing how he had rigged the polls.

By making it easier to infiltrate governments, AI creates vulnerabilities that affect national security, and it is getting harder to distinguish truth from fiction.

A 2023 study by University College London found that human listeners failed to detect more than a quarter of deepfake audio speech samples.

The study was conducted in the early days of AI audio, and the tools have since improved dramatically. Some of the more recent AI apps require only a few snippets of audio to closely duplicate a voice and make it say just about anything, even mimicking the idiosyncrasies of the person speaking.
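Tools in this class are openly available. As a rough sketch only, assuming the open-source Coqui TTS library and its XTTS v2 model (neither of which is tied to this incident, and with placeholder file names throughout), few-shot cloning amounts to a handful of lines of Python:

    from TTS.api import TTS

    # Load a multilingual voice-cloning model (downloads weights on first run).
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # A single short reference clip is enough to condition the cloned voice.
    tts.tts_to_file(
        text="This output was generated from a few seconds of reference audio.",
        speaker_wav="reference_clip.wav",  # placeholder: a short sample of the target speaker
        language="en",
        file_path="cloned_output.wav",
    )

That so little reference audio is required is precisely what makes impersonation attempts like the one described above cheap to mount.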

As AI becomes more prevalent, however, efforts are under way to create technology that flags deepfake content.

In 2022, the UAE's Mohamed bin Zayed University of Artificial Intelligence was listed as an applicant on a US patent for a "video transformer for deepfake detection", which would consist of "a display device playing back the potential deepfake video and indicating whether the video is real or fake".

Hao Li, associate professor of computer vision at MBZUAI, told The National last year that tremendous progress was being made in deepfake detection technology.

"It's becoming more and more difficult to create an undetectable deepfake," he said.

Even so, efforts to impersonate government officials may become more widespread in the months and years ahead.

Leah Siskind, an AI research fellow with the Foundation for Defense of Democracies, said that deception fuelled by AI is a new frontier for influence operations.

"This is an urgent national security issue with serious diplomatic ramifications," she said.

"We've seen other instances of deepfakes of senior government officials used to gain access to personal accounts, but leveraging AI to influence diplomatic relationships and decision-making is a dangerous escalation."

