AI Incident Database

Incident 1242: Purported AI-Generated Deepfake Videos Reportedly Used in Swedish Scam Campaign Impersonating Doctors Agnes Wold and Anders Tegnell

Responded
Description: A June 2025 Swedish police warning reported that fraudsters used purportedly AI-generated deepfake videos of well-known doctors, including Anders Tegnell and Agnes Wold, in social media ads promoting fake health products. Victims were allegedly lured to fraudulent sites where card details were stolen and large unauthorized charges were made. The scams, which reportedly began in late May 2025, prompted multiple complaints from both impersonated physicians and defrauded consumers.
Editor Notes: Timeline notes: The alleged scams began circulating in late May 2025. The Swedish police issued a warning on 06/09/2025, which is the date used for this incident ID. The incident was added to the AI Incident Database on 10/21/2025.


Entities

Alleged: Unknown deepfake technology developer and Unknown voice cloning technology developer developed an AI system deployed by Unknown scammers, Unknown fraudsters, Unknown scammers impersonating Agnes Wold, and Unknown scammers impersonating Anders Tegnell, which harmed Agnes Wold, Anders Tegnell, General public, General public of Sweden, and Consumers misled by fraudulent ads.
Alleged implicated AI systems: Unknown deepfake technology, Unknown voice cloning technology, Meta Platforms, Facebook, and Social media

Incident Stats

  • Incident ID: 1242
  • Report Count: 2
  • Incident Date: 2025-06-09
  • Editors: Daniel Atherton

Incident Reports

Flera anmälningar om AI-bedrägerier i sociala medier (Multiple reports of AI fraud on social media)
polisen.se · 2025
Polisen Stockholms län (Stockholm County Police) post-incident response

Fraudsters are using AI to imitate real people in videos on social media. In the next step, users' card details are stolen.

Since the end of May, the police in the Stockholm region have …

Bedragare utnyttjar kända läkare för att lura svenskar (Fraudsters exploit well-known doctors to deceive Swedes)
dn.se · 2025

It was in May that the emails started pouring in, both to her and to the SR program "Fråga Agnes Wold", in which the physician and professor dissects health myths.

"Is it true what you say about vascular cleansing? Can I stop taking my medici…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

  • YouTube’s Recommendation Algorithm Allegedly Promoted Climate Misinformation Content (Feb 2019 · 2 reports)
  • Thoughts App Allegedly Created Toxic Tweets (Jul 2020 · 1 report)
  • Bad AI-Written Christmas Carols (Dec 2017 · 1 report)
