Associated Incidents
A threat actor's use of artificial intelligence (AI) software to pose as Secretary of State Marco Rubio in communications with foreign and domestic diplomats and politicians demonstrates the growing sophistication of deepfake technology and the heightened threat it poses to national security, security experts say.
The Washington Post reported yesterday that an imposter sent voice and text messages mimicking Rubio's voice and writing style using AI-powered software, citing a senior US official and a State Department cable obtained by the publication. The impersonator used standard mobile text messaging as well as the encrypted messaging app Signal in the campaign, which began in mid-June, according to the report.
The imposter created a Signal account with the display name Marco.Rubio@state.gov (not an email address associated with the secretary of state) and used it to contact foreign ministers, a US governor, and a member of Congress with the goal of gaining access to information and accounts. Other, unspecified State Department officials also were impersonated in the campaign, according to the report.
When questioned during a press briefing on July 8, State Department spokesperson Tammy Bruce said the department is aware of the incident and "is currently monitoring and addressing the matter," but declined to go into further detail "for security reasons." It's unclear where the attack originated, though one expert noted there is some speculation that Russian adversaries are behind it.
"The department takes seriously its responsibility to safeguard its information and continuously takes steps to improve the department's cybersecurity posture to prevent future incidents," Bruce said during the briefing. "We live in a technological age that we are well enmeshed in, and I'll leave it at that."
US Government in Cybersecurity Crisis?
The incident is at least the third deepfake scheme targeting a US government official. Previously, an attacker gained access to Sen. Ben Cardin (D-Md.) by posing as a Ukrainian official, and deepfake robocalls impersonating former President Joe Biden were used against him during a political campaign. Moreover, the FBI warned in May that malicious actors were using AI-generated voice messages impersonating senior US officials to target other senior government leaders and their contacts.
Given the growing threat from sophisticated impersonation campaigns, high-profile incidents such as this one should make cybersecurity a clear priority for the federal government, one expert said. These incidents represent a serious security breach that undermines public trust and highlights the challenges the feds face in securing their own official communications and infrastructure against outside threats, Aditya Sood, vice president of security engineering and AI strategy at Aryaka, said in an emailed statement.
"These scams outpace traditional detection methods, exploiting gaps in platform moderation and regulatory oversight," he said in an emailed statement. "The proliferation of AI has only exacerbated this issue, with advanced threat actors capitalizing on unprepared organizations."
The federal government under the Trump administration might be characterized as one such organization, as it already has faced criticism over a major operational security faux pas in March, when Secretary of Defense Pete Hegseth accidentally texted a journalist (Jeffrey Goldberg, editor-in-chief of The Atlantic) precise US plans, via Signal, to bomb Houthi targets in Yemen just hours before the attacks occurred.
Deepfake Advancements Require Immediate Action
The incident is also another reminder of how sophisticated deepfake technology has become, noted Steve Cobb, chief information security officer (CISO) at SecurityScorecard, who expects more incidents against government officials to follow.
"These campaigns typically employ a multipronged approach, starting with phishing attacks sent from seemingly legitimate email accounts and escalating to AI-generated deepfake voicemails," he said in an emailed statement. "This is not the first time threat actors have impersonated state officials, and it likely won't be the last."
The threat posed by these incidents, and the potential for attackers to succeed in gleaning classified or confidential information, should be a call to action for the government to implement AI-powered detection tools that identify manipulated media and impersonation attempts, a step social media platforms have already taken, Sood said.
Like all organizations, the government should enact a multipronged approach that combines proactive media literacy education to empower the public to assess content critically; robust technical solutions, such as real-time detection, content provenance standards, and cryptographic authentication, to verify media authenticity; and strong legal frameworks coupled with rapid platform action for the removal of malicious deepfakes.
"This collective effort, which combines public awareness with technological defenses and regulatory pressure, is essential to preserving truth and trust in our increasingly synthetic digital landscape," Sood said.
As a general rule, anyone who thinks they may be targeted by deepfake scams should take extra time to verify the authenticity of anyone reaching out to engage or meet with them by looking for some form of secondary authentication, Cobb advised.
"This could include calling a known, trusted phone number, messaging the person through a verified social media account, or contacting someone who has a personal affiliation with the individual you're trying to verify," he said. "We need to evolve toward a default mindset of healthy skepticism in these interactions and adopt a 'trust but verify' approach as our standard practice."