AI Incident Database

Incident 147: Reported AI-Cloned Voice Used to Deceive Hong Kong Bank Manager in Purported $35 Million Fraud Scheme

Description: In January 2020, a Hong Kong-based bank manager for a Japanese company reportedly authorized $35 million in transfers after receiving a call from someone whose voice matched the company director's. According to Emirati investigators, scammers used AI-based voice cloning to impersonate the executive. The fraud allegedly involved at least 17 individuals and reportedly led to global fund transfers that triggered a UAE investigation. U.S. authorities were reportedly later asked to help trace part of the funds sent to U.S. banks.


Entities

Alleged: Unknown voice cloning technology developer and Unknown deepfake technology developer developed an AI system deployed by Unknown transnational fraud ring, Unknown scammers, Unknown fraudsters, and Unknown cybercriminals, which harmed Unnamed Japanese firm, Unnamed Hong Kong-based branch manager of unnamed Japanese firm, General public of the United Arab Emirates, and Centennial Bank.
Alleged implicated AI systems: Unknown voice cloning technology, Unknown real-time speech synthesis technology, and Unknown deepfake technology

Incident Stats

Incident ID: 147
Report Count: 6
Incident Date: 2020-01-15
Editors: Khoa Lam, Sean McGregor, Daniel Atherton
Applied Taxonomies: CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.

147

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It is also not assessing whether an intangible harm occurred; it asks only whether a special interest intangible harm occurred.

No

Date of Incident Year

The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank. Enter in the format YYYY.

2020

Estimated Date

“Yes” if the date was estimated. “No” otherwise.

No

Multiple AI Interaction

“Yes” if two or more independently operating AI systems were involved. “No” otherwise.

No

Embedded

“Yes” if the AI is embedded in a physical system. “No” if it is not. “Maybe” if it is unclear.

No

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

4.3. Fraud, scams, and targeted manipulation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Malicious Actors & Misuse

Entity

Which, if any, entity is presented as the main cause of the risk.

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Intentional

Incident Reports


Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find
forbes.com · 2021

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized---the director of his parent business. The director had good news: the company was about to make an acquisition, so he n…

Deepfaked Voice Enabled $35 Million Bank Heist in 2020
unite.ai · 2021

An investigation into the defrauding of $35 million USD from a bank in the United Arab Emirates in January of 2020 has found that deepfake voice technology was used to imitate a company director known to a bank branch manager, who then auth…

Deep Voice Facilitated $35 Million Bank Heist in 2020
unite.ai · 2021
AI Translated

An investigation into the $35 million stolen from a bank in the United Arab Emirates in January 2020 found that fake voice technology was used to impersonate a company director known to a bank branch manager, who then authorized the transac…

Bank Robbers Steal $35 Million by Deepfaking Boss's Voice
futurism.com · 2021

Deep Voice

Deepfake technology has gotten incredibly sophisticated over the years, allowing us to resurrect the voices of the dead and fool Justin Bieber into challenging famous Scientologists to fights. Some folks are even using it for mor…

Deepfake Audio Nabs $35M in Corporate Heist
darkreading.com · 2021

A group of fraudsters made off with $35 million after using forged email messages and deepfake audio to convince an employee of a United Arab Emirates company that a director requested the money as part of an acquisition of another organiza…

The deepfake technology behind a $35 million bank heist in Hong Kong
proprivacy.com · 2022

UAE police are investigating a bank robbery in which criminals allegedly used deepfake artificial intelligence (AI) to clone the company director's voice and convince the bank manager to transfer $35 million.

As deepfake technology evolves …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

Security Robot Drowns Itself in a Fountain
Jul 2017 · 30 reports

Fraudsters Used AI to Mimic Voice of a UK-Based Firm's CEO's Boss
Mar 2019 · 1 report

Fake LinkedIn Profiles Created Using GAN Photos
Feb 2022 · 4 reports

2024 - AI Incident Database