Description: Cryptocurrency analyst Mai Fujimoto reported losing access to her X, Telegram, and MetaMask accounts after a video call with a purported deepfake impersonating a trusted contact. According to Fujimoto, the attacker, who appeared on Zoom as her acquaintance, instructed her to install software under the guise of resolving an audio issue; she believes this enabled the installation of malware. In response, Binance founder Changpeng Zhao warned that such attacks render video call verification unreliable.
Entities
Alleged: Unknown deepfake technology developers and Unknown voice cloning technology developers developed an AI system deployed by Unknown scammers and Unknown scammer impersonating acquaintance of Mai Fujimoto, which harmed Unnamed acquaintance of Mai Fujimoto, Mai Fujimoto, and General public.
Alleged implicated AI systems: Zoom, X (Twitter), Unknown voice cloning technology, Unknown deepfake technology, Telegram, and MetaMask
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline
Binance founder Changpeng Zhao has cautioned that AI-powered deepfake technology has made video call verification unreliable for security purposes. He also warned users to avoid installing software from unofficial sources, even if the reque…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents


