AI Incident Database
Entities

People seeking medical advice

Incidents Harmed By

Incident 481 (6 Reports)
Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand

2023-02-12

A purported deepfake video in which podcast host Joe Rogan appears to endorse a "libido-boosting" supplement reportedly circulated on TikTok and other platforms before TikTok removed it, along with the account that posted it.

Incident 1408 (2 Reports)
Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements

2025-08-12

A Utah woman, Lisa Swearingen, reported paying more than $400 for weight loss supplements after seeing online ads featuring a purported Oprah Winfrey deepfake endorsement. When the product arrived, she said its primary ingredient was turmeric rather than the advertised formula.


Incident 685 (1 Report)
The WHO's S.A.R.A.H. Bot Reported to Provide Inconsistent and Inadequate Health Information

2024-04-24

The WHO's AI-powered health advisor, S.A.R.A.H. (Smart AI Resource Assistant for Health), is alleged to provide inconsistent and inadequate health information. The bot reportedly gives contradictory responses to the same queries, fails to offer specific contact details for healthcare providers, and inadequately handles severe mental health crises, often giving irrelevant or unhelpful advice.


Incident 838 (1 Report)
Microsoft Copilot Allegedly Provides Unsafe Medical Advice with High Risk of Severe Harm

2024-04-25

When asked medical questions, Microsoft Copilot reportedly provided accurate information only 54% of the time, according to European researchers (citation provided in editor's notes). The researchers further reported that 42% of Copilot's responses could cause moderate to severe harm, and that 22% posed a risk of death or severe injury.

Related Entities
Other entities that are related to the same incidents. For example, if this entity is the developer of an incident and another entity is its deployer, the two are marked as related entities.

@mikesmithtrainer

Incidents involved as Deployer
  • Incident 481 (6 Reports): Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand

Unknown deepfake technology developers

Incidents involved as Developer
  • Incident 481 (6 Reports): Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements

Unknown voice cloning technology developers

Incidents involved as Developer
  • Incident 481 (6 Reports): Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements
TikTok users

Incidents Harmed By
  • Incident 481 (6 Reports): Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand
  • Incident 1359 (1 Report): Reported Deepfake Influencers on TikTok Allegedly Used to Promote Fraudulent Wellness Products

Joe Rogan fans

Incidents Harmed By
  • Incident 481 (6 Reports): Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand

Joe Rogan

Incidents Harmed By
  • Incident 481 (6 Reports): Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand

Epistemic integrity

Incidents Harmed By
  • Incident 481 (6 Reports): Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements
General public

Incidents Harmed By
  • Incident 481 (6 Reports): Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand
  • Incident 685 (1 Report): The WHO's S.A.R.A.H. Bot Reported to Provide Inconsistent and Inadequate Health Information

Unknown deepfake technology

Incidents implicated systems
  • Incident 481 (6 Reports): Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements

Unknown voice cloning technology

Incidents implicated systems
  • Incident 481 (6 Reports): Purported Deepfake TikTok Video Reportedly Featured Joe Rogan Endorsing Supplement Brand
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements

WHO

Incidents involved as both Developer and Deployer
  • Incident 685 (1 Report): The WHO's S.A.R.A.H. Bot Reported to Provide Inconsistent and Inadequate Health Information
S.A.R.A.H. (Smart AI Resource Assistant for Health)

Incidents involved as Deployer
  • Incident 685 (1 Report): The WHO's S.A.R.A.H. Bot Reported to Provide Inconsistent and Inadequate Health Information

Microsoft Copilot

Incidents involved as Deployer
  • Incident 838 (1 Report): Microsoft Copilot Allegedly Provides Unsafe Medical Advice with High Risk of Severe Harm

Microsoft

Incidents involved as both Developer and Deployer
  • Incident 838 (1 Report): Microsoft Copilot Allegedly Provides Unsafe Medical Advice with High Risk of Severe Harm

Microsoft Copilot users

Incidents Harmed By
  • Incident 838 (1 Report): Microsoft Copilot Allegedly Provides Unsafe Medical Advice with High Risk of Severe Harm

Unknown scammers impersonating Elon Musk

Incidents involved as Deployer
  • Incident 1317 (1 Report): Purported Deepfake Impersonation of Elon Musk Used to Promote Fraudulent '17-Hour' Diabetes Treatment Claims
Unknown scammers

Incidents involved as Deployer
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements
  • Incident 1317 (1 Report): Purported Deepfake Impersonation of Elon Musk Used to Promote Fraudulent '17-Hour' Diabetes Treatment Claims

social media users

Incidents Harmed By
  • Incident 1317 (1 Report): Purported Deepfake Impersonation of Elon Musk Used to Promote Fraudulent '17-Hour' Diabetes Treatment Claims

People with diabetes

Incidents Harmed By
  • Incident 1317 (1 Report): Purported Deepfake Impersonation of Elon Musk Used to Promote Fraudulent '17-Hour' Diabetes Treatment Claims

Elon Musk

Incidents Harmed By
  • Incident 1317 (1 Report): Purported Deepfake Impersonation of Elon Musk Used to Promote Fraudulent '17-Hour' Diabetes Treatment Claims

Boosie Badazz

Incidents Harmed By
  • Incident 1317 (1 Report): Purported Deepfake Impersonation of Elon Musk Used to Promote Fraudulent '17-Hour' Diabetes Treatment Claims
Social media platforms

Incidents implicated systems
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements
  • Incident 1317 (1 Report): Purported Deepfake Impersonation of Elon Musk Used to Promote Fraudulent '17-Hour' Diabetes Treatment Claims

Unknown scammers impersonating Montenegrin public figures

Incidents involved as Deployer
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

Unknown scammers impersonating Bosnian public figures

Incidents involved as Deployer
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

Nearest Edge

Incidents involved as Deployer
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

Limited Charm

Incidents involved as Deployer
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

Digital Edge

Incidents involved as Deployer
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina
Unknown image generator developers

Incidents involved as Developer
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

Vladimir Dobričanin

Incidents Harmed By
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

Petar Ivanović

Incidents Harmed By
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

General public of Montenegro

Incidents Harmed By
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

General public of Bosnia and Herzegovina

Incidents Harmed By
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

Elderly individuals

Incidents Harmed By
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

Dražen Živković

Incidents Harmed By
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina
Unknown image generator technology

Incidents implicated systems
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

Instagram

Incidents implicated systems
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina

Facebook

Incidents implicated systems
  • Incident 1338 (1 Report): Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina
  • Incident 1405 (1 Report): Purported AI-Generated Doctor Deepfakes Reportedly Used Guy's and St Thomas' Branding to Market Weight Loss Patches

Unknown TikTok account operators

Incidents involved as Deployer
  • Incident 1359 (1 Report): Reported Deepfake Influencers on TikTok Allegedly Used to Promote Fraudulent Wellness Products

Unknown generative AI developers

Incidents involved as Developer
  • Incident 1359 (1 Report): Reported Deepfake Influencers on TikTok Allegedly Used to Promote Fraudulent Wellness Products

Consumers of wellness and beauty products

Incidents Harmed By
  • Incident 1359 (1 Report): Reported Deepfake Influencers on TikTok Allegedly Used to Promote Fraudulent Wellness Products

Unknown generative AI systems

Incidents implicated systems
  • Incident 1359 (1 Report): Reported Deepfake Influencers on TikTok Allegedly Used to Promote Fraudulent Wellness Products
TikTok

Incidents implicated systems
  • Incident 1359 (1 Report): Reported Deepfake Influencers on TikTok Allegedly Used to Promote Fraudulent Wellness Products
  • Incident 1397 (1 Report): Deepfakes Reportedly Impersonated David Taylor-Robinson and Other UK Health Experts to Promote Wellness Nest Supplements

Unknown scammers impersonating David Taylor-Robinson

Incidents involved as Deployer
  • Incident 1397 (1 Report): Deepfakes Reportedly Impersonated David Taylor-Robinson and Other UK Health Experts to Promote Wellness Nest Supplements

David Taylor-Robinson

Incidents Harmed By
  • Incident 1397 (1 Report): Deepfakes Reportedly Impersonated David Taylor-Robinson and Other UK Health Experts to Promote Wellness Nest Supplements

Women seeking menopause advice

Incidents Harmed By
  • Incident 1397 (1 Report): Deepfakes Reportedly Impersonated David Taylor-Robinson and Other UK Health Experts to Promote Wellness Nest Supplements

Guy's and St Thomas' NHS Foundation Trust clinicians

Incidents Harmed By
  • Incident 1405 (1 Report): Purported AI-Generated Doctor Deepfakes Reportedly Used Guy's and St Thomas' Branding to Market Weight Loss Patches

Guy's and St Thomas' NHS Foundation Trust

Incidents Harmed By
  • Incident 1405 (1 Report): Purported AI-Generated Doctor Deepfakes Reportedly Used Guy's and St Thomas' Branding to Market Weight Loss Patches
General public of the United Kingdom

Incidents Harmed By
  • Incident 1405 (1 Report): Purported AI-Generated Doctor Deepfakes Reportedly Used Guy's and St Thomas' Branding to Market Weight Loss Patches

Prozenith

Incidents involved as Deployer
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements

Unknown advertisers

Incidents involved as Deployer
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements

Lisa Swearingen

Incidents Harmed By
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements

Oprah Winfrey

Incidents Harmed By
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements

People seeking weight loss supplements

Incidents Harmed By
  • Incident 1408 (2 Reports): Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements
Perplexity AI

Incidents involved as both Developer and Deployer
  • Incident 1426 (1 Report): Perplexity AI Reportedly Misstated CLL Research, Allegedly Contributing to Delayed Treatment and Prolonged Suffering

Incidents implicated systems
  • Incident 1426 (1 Report): Perplexity AI Reportedly Misstated CLL Research, Allegedly Contributing to Delayed Treatment and Prolonged Suffering

Joseph Neal Riley

Incidents Harmed By
  • Incident 1426 (1 Report): Perplexity AI Reportedly Misstated CLL Research, Allegedly Contributing to Delayed Treatment and Prolonged Suffering

Benjamin Riley

Incidents Harmed By
  • Incident 1426 (1 Report): Perplexity AI Reportedly Misstated CLL Research, Allegedly Contributing to Delayed Treatment and Prolonged Suffering

2024 - AI Incident Database
