Summary: Scammers (possibly multiple individuals) reportedly used AI-generated deepfake videos and documents to impersonate a fictitious person named "Alla Morgan," persuading 77-year-old Nikki MacLeod to send £17,000 via various payment methods. The deepfakes were allegedly used to build credibility in order to carry out the fraud under the guise of an online romantic relationship.
Editor Notes: Timeline: The fraud was reported to Police Scotland sometime in October 2024. The incident date of 12/19/2024 marks when the story became public.
Alleged: Unknown deepfake technology developer developed an AI system deployed by scammers and Alla Morgan, which harmed Nikki MacLeod.
Alleged implicated AI system: Unknown deepfake tool
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional