AI Incident Database

Report 4487

Associated Incidents

Incident 895 · 2 Reports
Alleged Deepfake of New Zealand Endocrinologist Reportedly Promotes Misleading Diabetes Claim

Doctor Targeted by Deepfake Ad Scam
medpagetoday.com · 2025

A well-known endocrinologist in New Zealand was recently the victim of a deepfake scam, according to reporting from the New Zealand Herald.

The likeness of Sir Jim Mann, DM, PhD, MA, of the University of Otago, was used in a deepfake news video that circulated on social media after being posted to a Facebook page for a company that sells hemp gummies. In the video, the fake Mann urged people with type 2 diabetes to stop taking the gold-standard medication metformin and instead use alternative natural products.

Though the post has reportedly been removed, Mann told the Herald that it has since occasionally resurfaced and that he only learned of its existence when someone texted him about it.

"I was just bombarded after that with texts, emails from two groups of people ... One [group was] saying, 'Congratulations on this wonderful new product ... that you've discovered and great that you've exposed all these medics and other people as being frauds ... where can I get the product?' And other people who said, 'For goodness sake, be aware, you've been scammed,'" he told the outlet.

"There were some really very reasonable, intelligent people who had completely taken it, including people that I know well ... Goodness knows how many people have been taken in," he added. "AI is so clever that they could manipulate my mouth to be looking as if I was saying those words."

Mann also urged caution to those who come across videos from experts that appear too good to be true.

"If you hear somebody saying something that sounds too good to be true, it probably is too good to be true. ... They were pretty outrageous claims," he told the Herald.

Indeed, the deepfake scam using Mann's likeness is not an isolated event.

In November, Los Angeles-based podiatrist and social media personality Dana Brems, DPM, said in an Instagram post that a company used AI to make a fake recording of her voice.

The post showed Brems reacting, mouth covered in dismay, to what she said was an advertisement that "used an AI clone of my voice to pretend I recommended their product."

Social media posts from Brems about the ad -- which appeared to be for an ear-cleaning device -- racked up views, with many commenters pointing to the potential harms of fake health-related recommendations tied to medical professionals.

"Once people catch on that they can use AI to impersonate doctors [and] other authority figures, it's going to be a huge problem," Brems told MedPage Today at the time.

Mann agreed with this sentiment in statements made to the Herald.

"It makes me feel terrible because I'm patron of Diabetes New Zealand, so a lot of people are aware of my name, even if they don't know me," he said.

He advised people to source reliable information directly from recognized healthcare professionals.

Mann did not immediately respond to a request for further comment from MedPage Today.
