AI Incident Database

Incident 1405: Purported AI-Generated Doctor Deepfakes Reportedly Used Guy's and St Thomas' Branding to Market Weight Loss Patches

Description: Guy's and St Thomas' NHS Foundation Trust warned that purported AI-generated videos circulated on Facebook and TikTok depicted its clinicians endorsing weight loss patches. The videos allegedly impersonated doctors and used misleading medical claims to market a product.

Entities

Alleged: Unknown voice cloning technology developers and Unknown deepfake technology developers developed an AI system deployed by Unknown scammers, which harmed People seeking medical advice, Guy's and St Thomas' NHS Foundation Trust clinicians, Guy's and St Thomas' NHS Foundation Trust, General public of the United Kingdom, General public, and Epistemic integrity.
Alleged implicated AI systems: Unknown voice cloning technology, Unknown deepfake technology, TikTok, Social media platforms, and Facebook

Incident Stats

Incident ID: 1405
Report Count: 1
Incident Date: 2026-01-09
Editors: Daniel Atherton

Incident Reports

Reports Timeline

Hospital alert after fake doctor-endorsed videos
bbc.co.uk · 2026

A hospital trust in south London has issued an alert after fraudulent videos were circulated online claiming its staff endorsed weight loss products.

Guy's and St Thomas' NHS Foundation Trust said that the videos, found on social media plat…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

Selected by our editors

Purported AI-Generated Deepfake Videos Reportedly Used in Swedish Scam Campaign Impersonating Doctors Agnes Wold and Anders Tegnell

Jun 2025 · 2 reports

Purported Unauthorized Deepfakes of Norman Swan and Others Circulated in Online Supplement Campaigns

May 2025 · 2 reports

Purported Deepfake Featuring Dr. Rinki Murphy and Jack Tame Reportedly Used to Promote Diabetes Scam in New Zealand

Apr 2025 · 2 reports
Alleged Deepfake of New Zealand Endocrinologist Reportedly Promotes Misleading Diabetes Claim

Jan 2025 · 2 reports

Scammers Reportedly Using Deepfakes of Health Experts and Public Figures in Australia to Sell Health Supplements and Give Harmful Advice

Dec 2024 · 1 report
By textual similarity

UK passport photo checker shows bias against dark-skinned women

Oct 2020 · 1 report

Opaque Fraud Detection Algorithm by the UK’s Department of Work and Pensions Allegedly Discriminated against People with Disabilities

Oct 2019 · 6 reports

YouTube's Algorithms Failed to Remove Violating Content Related to Suicide and Self-Harm

Feb 2019 · 3 reports
