Description: Two Houston women reported being targeted in an alleged scam involving AI-generated videos that appeared to depict trusted friends. The purported deepfake avatars were reportedly used via social media and messaging apps to solicit access codes and promote fraudulent sales. Victims are reported to have lost control of their accounts, and friends of the impersonated individuals sent money believing the videos were authentic.
Editor Notes: Timeline note: The earliest public report found appeared on 09/23/2024; the underlying scam likely occurred days or weeks earlier. This incident was added to the database on 05/17/2025.
Entities
Alleged: Unknown deepfake technology developers and unknown voice cloning technology developers developed an AI system deployed by unknown scammers, which harmed Stacey Svegliato, Sara Sandlin, the general public, and the general public in Houston.
Alleged implicated AI systems: Unknown deepfake technology, Unknown voice cloning technology, and Social media platforms
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline
HOUSTON - We've been warning you for a while now that scammers can use Artificial Intelligence to create fake videos and con people out of money. But could you spot a deepfake video if it looks just like one of your friends?
Two Houston women are…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents

Hackers Break Apple Face ID
· 24 reports
Security Robot Rolls Over Child in Mall
· 27 reports