Incident 147: Hong Kong Bank Manager Swindled by Fraudsters Using Deepfaked Voice of Company Director

Description: In early 2020, fraudsters allegedly deepfaked the voice of a company's director, directing a bank manager in Hong Kong to authorize transfers totaling $35 million.
Alleged: An unknown party developed an AI system deployed by scammers, which harmed a Hong Kong bank manager.

Suggested citation format

Lam, Khoa. (2020-01-01) Incident Number 147. In McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID
147
Report Count
2
Incident Date
2020-01-01
Editors
Sean McGregor, Khoa Lam

Incident Reports

AI voice cloning was used in a huge heist now under investigation in Dubai, amid warnings about cybercriminal use of the new technology.

In early 2020, a bank manager in Hong Kong received a call from a man whose voice he recognized as that of a director at a company with whom he’d spoken before. The director had good news: his company was about to make an acquisition, so he needed the bank to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures, and the bank manager could see in his inbox emails from the director and Zelner confirming what money needed to move where. Believing everything was legitimate, the bank manager began making the transfers.

What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech, according to a court document unearthed by Forbes. In that document, the U.A.E. sought American investigators’ help in tracing $400,000 of stolen funds that went into U.S.-based accounts held by Centennial Bank. The U.A.E., which is investigating the heist as it affected entities within the country, believes it was an elaborate scheme involving at least 17 individuals, one that sent the pilfered money to bank accounts across the globe.

Little more detail was given in the document, and none of the victims’ names were provided. Neither the Dubai Public Prosecution Office, which is leading the investigation, nor Martin Zelner, the U.S.-based lawyer, had responded to requests for comment at the time of publication.

It’s only the second known case of fraudsters allegedly using voice-shaping tools to carry out a heist, but it appears to have been far more successful than the first, in which fraudsters used the technology in 2019 to impersonate the CEO of a U.K.-based energy firm in an attempt to steal $243,000, according to the Wall Street Journal.

The U.A.E. case shows how devastating such high-tech swindles can be, and it lands amidst warnings about the use of AI to create so-called deepfake images and voices in cybercrime.

“Audio and visual deep fakes represent the fascinating development of 21st century technology yet they are also potentially incredibly dangerous posing a huge threat to data, money and businesses,” says Jake Moore, a former police officer with the Dorset Police Department in the U.K. and now a cybersecurity expert at security company ESET. “We are currently on the cusp of malicious actors shifting expertise and resources into using the latest technology to manipulate people who are innocently unaware of the realms of deep fake technology and even their existence.

“Manipulating audio, which is easier to orchestrate than making deep fake videos, is only going to increase in volume and without the education and awareness of this new type of attack vector, along with better authentication methods, more businesses are likely to fall victim to very convincing conversations.”

Once a technology confined to the realm of fictional capers like Mission: Impossible, voice cloning is now widely available. Various tech startups are working on increasingly sophisticated AI voice technologies, from London’s Aflorithmic to Ukraine’s Respeecher and Canada’s Resemble.AI. The technology caused a stir in recent months with the revelation that the late Anthony Bourdain had his voice synthesized for a documentary on his life. Meanwhile, recognizing the potential for malicious use of the AI, a handful of companies, such as $900 million-valued security firm Pindrop, now claim they can detect synthesized voices and thereby prevent frauds.

If recordings of you speaking are available online, whether on social media, YouTube or on an employer’s website, there may well be a secret battle going on for control of your voice without you knowing.

UPDATE: After publication, the U.A.E. Ministry of Foreign Affairs & International Cooperation contacted Forbes to note that the affected bank was in Hong Kong, not within the U.A.E., though Dubai investigators were leading the probe. The article was updated on October 22, 2022 to reflect that.

In a statement, HE Hamid Al Zaabi, director general of the U.A.E. Executive Office of Anti-Money Laundering and Counter Terrorism Financing, added: “Even with incidents happening outside the U.A.E., we will work closely with law enforcement partners around the world to identify and detect those individuals who knowingly engage in deceptive practices such as imposter fraud. The U.A.E. will then pursue to the fullest extent of the law these individuals, ensuring they are held accountable and brought to justice quickly.”

Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find

Deep Voice

Deepfake technology has gotten incredibly sophisticated over the years, allowing us to resurrect the voices of the dead and fool Justin Bieber into challenging famous Scientologists to fights. Some folks are even using it for more criminal purposes.

Such was the case for a bank in Hong Kong that was fooled by criminals using AI voice cloning to steal $35 million. Court documents recently uncovered by Forbes revealed that fraudsters used a deepfaked voice of a company executive to fool a bank manager into transferring the millions into their accounts in early 2020.

The supposed executive said his company was about to make an acquisition and needed the money to do so. The bank manager recognized the executive’s voice from having worked with him before. So, believing everything was copacetic, he authorized the transfer — straight into the accounts of the criminals.

Wanted: AI Bank Robbers

The UAE has now requested the help of American investigators to trace roughly $400,000 that went into US-based bank accounts, according to the documents. Dubai investigators believe that the operation involved at least 17 people using “deep voice” tech and sent money to banks across the world.

While surprising, this is actually the second known case of criminals using a deepfaked voice to try to rob a bank. In 2019, criminals used AI to impersonate an executive’s voice in an attempt to steal $243,000, according to The Wall Street Journal.

Rest assured, as the technology behind deepfakes becomes more and more refined, we’ll see plenty more attempts at fraud emerge in the future.

Bank Robbers Steal $35 Million by Deepfaking Boss's Voice
