Summary: In January 2020, the Hong Kong-based bank branch manager of a Japanese firm reportedly received a call from someone whose voice matched that of a company director and authorized transfers of $35 million. According to investigators in the United Arab Emirates, the fraudsters used AI voice-cloning technology to impersonate the director. The scheme reportedly involved at least 17 people and international fund transfers, prompting a UAE investigation. U.S. authorities were later reportedly asked to assist in tracing a portion of the funds sent to U.S. banks.
Alleged: Unknown voice cloning technology developer and Unknown deepfake technology developer developed an AI system deployed by Unknown transnational fraud ring, Unknown scammers, Unknown fraudsters, and Unknown cybercriminals, which harmed Unnamed Japanese firm, Unnamed Hong Kong-based branch manager of unnamed Japanese firm, General public of the United Arab Emirates, and Centennial Bank.
CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number
The number of the incident in the AI Incident Database.
147
Special Interest Intangible Harm
An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, if an AI was involved, or if there is characterizable class or subgroup of harmed entities. It is also not assessing if an intangible harm occurred. It is only asking if a special interest intangible harm occurred.
No
Date of Incident Year
The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank.
Enter in the format of YYYY
2020
Estimated Date
“Yes” if the data was estimated. “No” otherwise.
No
Multiple AI Interaction
“Yes” if two or more independently operating AI systems were involved. “No” otherwise.
No
Embedded
“Yes” if the AI is embedded in a physical system. “No” if it is not. “Maybe” if it is unclear.
No
Risk Subdomain
A further 23 subdomains provide an accessible and understandable classification of the hazards and harms associated with AI.
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional