
Welcome to the
AI Incident Database

Optometrist Fights Back After Deepfake Scam

Incident 1455: Purported Deepfake Videos Allegedly Impersonated Optometrist Joseph Allen to Promote Myopia-Reversal Eyedrops on TikTok

“Optometrist Fights Back After Deepfake Scam”
Latest Incident Report
visionmonday.com, 2026-04-18

Joseph Allen, OD is well known on the conference lecture circuit, on social media, and for his YouTube channel, Dr. Eye Health, where he educates followers about a variety of eye health topics. His prominence had a downside, though, when he discovered videos across social media that looked and sounded like him but were spreading false information and selling bogus products. Dr. Allen was the victim of an AI deepfake scam.

AI-generated deepfake and slop videos are increasingly targeting real-world doctors, impersonating them to create low-effort, algorithm-driven content that can spread dangerous health misinformation. Deepfakes are AI-manipulated or fabricated content that impersonates a real person for the purposes of spreading malicious or false information.

AI slop videos are AI-produced video content at scale that exploits social media's engagement algorithms and floods platforms like YouTube, TikTok, or Instagram, edging out original content creators. Dr. Allen noted that, "In some of these, they're telling you not to see your eye care provider and that our entire profession is a scam."

With legal counsel, Dr. Allen challenged the deepfakes targeting his channel, but it took a month to reach TikTok, where he discovered the fake account, and another three weeks to prove he was the legitimate 'Dr. Eye Health.' The account was deplatformed, but within two or three hours the scammers made a new account and reuploaded all the previous content. In Dr. Allen's case, the product being sold in the scam was eyedrops claimed to reverse myopia. The drops were traced back to a Chinese company selling millions of dollars in product on Amazon. The contents of the drops remain in question.

Dr. Allen recommends looking for tell-tale signs of AI-generated videos, keeping in mind that as AI progresses, these signs may become more subtle or even disappear. Look closely at the video for odd editing: do the subject's lips match the vocals and audio, or do they blink too much or too little? If the video is selling a product that seems too good to be true, it probably is, so look and think carefully when evaluating such content.

Does it feel natural or not? Deepfakes still have difficulty portraying natural lighting and can make the subject feel unnatural due to a mismatch in lighting conditions or reflections. More signs are available from MIT researchers including clues from eyes and eyebrows, blinking, and glare on eyeglasses. As AI blurs the lines between reality and fiction, the responsibility falls on individual critical thinking about media content.

Read More
Misuse of the names of an Albanian cardiologist and journalist to sell hypertension drugs

Incident 1452: Purported AI-Generated Impersonations of Albanian Cardiologist Spiro Qirko and Journalist Ilir Topi Were Reportedly Used on Facebook to Promote Hypertension Product in Kosovo

“Misuse of the names of an Albanian cardiologist and journalist to sell hypertension drugs”
kallxo.com, 2026-04-18

A heavily manipulated video misuses the names of an Albanian cardiologist and journalist to sell hypertension drugs. The drug 'Hyper Caps' does not have a Marketing Authorization from the Kosovo Agency for Medicinal Products and Devices (AKPPM), which makes its marketing in Kosovo illegal.

Article

The Facebook page [Dr. Spiro Search](https://fit.my-health.blog/M24T3jcX?media_type=video&campaign_id=120245649803330357&adset_id=120245649803350357&ad_id=120245649803340357&placement=Facebook_Mobile_Feed&a=fy-BRI-bloomfield-B&x=961459690129113&cn=al2294truba&cel1=2294&fbclid=IwdGRzaAQrbrBleHRuA2FlbQEwAGFkaWQBqzLIRp2UVXNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHi27JJ8ZYZmVh3hPAv_ZnPN9a3GlXpwYultUkUdZFTpoQJ1vw-Vc2tjBGJ0m_aem_eBCvBlVEfeAdsh1Ekfq1SA&utm_medium=paid&utm_source=fb&utm_id=120245649803330357&utm_content=120245649803340357) has published a video in which he claims to have discovered a cure for hypertension.

In the video, among other things, it is claimed that this drug is being fought by pharmaceutical companies. 

"As a specialist, people often ask me: 'Is it possible to completely cure hypertension?' The answer -- YES. 🩺 I have created a method that within a few days eliminates pressure and load on the veins, without expensive procedures. The secret lies in the proper stimulation of the veins and the cardiovascular system. In the video you will see exactly how my formula works and you will forget about hypertension forever. You will feel the result immediately," the post reads.

A video has also been published in the form of an interview introducing this medicine.

The interview is claimed to be conducted by journalist Ilir Topi (the interviewer) with doctor Spiro Qirko, who explains how hypertension is cured.

'Hyper Caps' is the product claimed to have been discovered by a cardiologist and to cure hypertension.

Krypometer has also searched the list of medicines that have marketing authorization in Kosovo to see whether the product is allowed to be sold and traded there.

After searching the website of the Kosovo Agency for Medicinal Products and Devices (AKPPM), it turns out that the product 'Hyper Caps' is not on the list of medicines with Marketing Authorization.

Article 16 of the Law on Medicinal Products and Devices states that "the medicinal product may be placed in the Republic of Kosovo only after receiving Marketing Authorization from the AKPPM".

Krypometer has tested this [video](https://fit.my-health.blog/M24T3jcX?media_type=video&campaign_id=120245649803330357&adset_id=120245649803350357&ad_id=120245649803340357&placement=Facebook_Mobile_Feed&a=fy-BRI-bloomfield-B&x=961459690129113&cn=al2294truba&cel1=2294&fbclid=IwdGRzaAQrbrBleHRuA2FlbQEwAGFkaWQBqzLIRp2UVXNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHi27JJ8ZYZmVh3hPAv_ZnPN9a3GlXpwYultUkUdZFTpoQJ1vw-Vc2tjBGJ0m_aem_eBCvBlVEfeAdsh1Ekfq1SA&utm_medium=paid&utm_source=fb&utm_id=120245649803330357) on a platform that detects content created with Artificial Intelligence, and its results show that the video has a 96.4% probability of being generated with AI, as does the speech (archive).

Krypometer has contacted journalist Ilir Topi and Albanian cardiologist, Spiro Qirko, who have denied having anything to do with this profile or the content of this video.

"It is generated with AI, fake." - said Topi.

"I feel very insulted for this very ugly fraud that is speculating in the most dirty and unprecedented way with my name and personality. It is extremely criminal." - said Qirko, a cardiologist from Albania.

Qirko said that he has never recommended herbal medicines to treat hypertension.

Hypertension

Cardiologist Qirko said that, according to the guidelines of the European Society of Cardiology, blood pressure is nowadays considered normal up to 135/85 mmHg.

"Beyond these figures, it is considered high blood pressure and must be treated regularly. Treatment usually consists of using one or a combination of two medications of different classes according to the doctor's selection, following the progress of the patient's blood pressure." - said Qirko.

Read More
Is Nirmala Sitharaman Promoting a INR 22,000 Investment for INR 5.5 Lakh Returns? PIB Fact Check Reveals Truth Behind Viral Video

Incident 1454: Purported Deepfake Video Reportedly Portrayed Nirmala Sitharaman Endorsing Investment Scheme

“Is Nirmala Sitharaman Promoting a INR 22,000 Investment for INR 5.5 Lakh Returns? PIB Fact Check Reveals Truth Behind Viral Video”
latestly.com, 2026-04-18

New Delhi, April 16: The government's fact-checking arm PIB Fact Check on Thursday debunked a viral video that falsely shows Finance Minister Nirmala Sitharaman endorsing a high-return investment scheme, warning citizens against falling for such misinformation. In a post on social media platform X, PIB Fact Check said the video, which claims that an investment of Rs 22,000 could yield returns of Rs 5.5 lakh within a week, is fake and generated using artificial intelligence. "A viral video of Finance Minister Nirmala Sitharaman claims to offer Rs 5,50,000 in a week in lieu of an initial investment amount of Rs 22,000. The video is fake and AI generated," it said.

The agency clarified that neither the Government of India nor the Finance Minister is promoting or endorsing any such scheme, and urged people to remain cautious of misleading content circulating online. "Neither the Government nor the Finance Minister is endorsing any such investment scheme," it added. The fact-checking unit also advised citizens to verify such claims through official government sources before taking any financial decisions. It further appealed to users to report and flag suspicious content to help curb the spread of misinformation.

"Stay vigilant. Verify such claims through official government sources only," PIB Fact Check said. Meanwhile, last month, PIB flagged a fake message circulating on social media that claims State Bank of India's (SBI) YONO app accounts will be blocked if users do not update their Aadhaar details immediately. The purported message instructs recipients to download an APK file titled "SBI KYC AADHAR UPDATE" and install it to avoid disruption of services. "The warning, which appears to be a phishing attempt, is designed to mislead account holders into sharing sensitive banking and personal information," PIB said last month.


Read More
Florida Man Allegedly Used AI-Generated Deepfake Video to Falsely Report Break-In of Deputy’s Patrol Vehicle in Lake Mary

Incident 1450: Florida Man Allegedly Used Purported Deepfake Video to Report Break-In of Deputy's Patrol Vehicle in Lake Mary

“Florida Man Allegedly Used AI-Generated Deepfake Video to Falsely Report Break-In of Deputy’s Patrol Vehicle in Lake Mary”
clickorlando.com, 2026-04-11

SEMINOLE COUNTY, Fla. -- A South Florida man was arrested after showing an A.I.-generated video to a deputy at a store in Lake Mary last month, according to the Seminole County Sheriff's Office.

In a release, the SCSO said the deputy was inside Academy Sports along Lake Emma Road when the man, identified as Alexis Martínez-Arizala, 25, approached him.

"(Martínez-Arizala) claimed he had observed multiple people entering the deputy's marked patrol vehicle in the parking lot and presented a video on his cell phone as evidence," the release reads. "The video, approximately three seconds in length, appeared to show two individuals entering the patrol vehicle parked near the store."

But when the deputy checked his patrol car, he found that nothing had been disturbed or stolen, investigators noted.

Instead, store surveillance footage showed that no one had actually approached the patrol car during that timeframe, and deputies later concluded that the cell phone video had been fabricated.

A warrant was subsequently issued for Martínez-Arizala's arrest, and he was taken into custody on Wednesday after being located in San Juan, Puerto Rico.

"Investigators also learned Martínez-Arizala posted content related to the encounter on his social media accounts in an apparent attempt to gain attention and create viral content," the release continues.

Now, Martínez-Arizala faces charges of fabricating physical evidence, making a false report to law enforcement, unlawful use of a two-way communication device, and knowingly giving false information to a law enforcement officer concerning the alleged commission of a crime.

He is set to be extradited to Seminole County, where he'll be held on $7,000 bond.

"The misuse of artificial intelligence to create deepfake videos is a growing concern, particularly when it targets public safety professionals," Sheriff Dennis Lemma said. "These fabricated videos can damage reputations, create unnecessary tensions, and raise real safety concerns for the first responders who serve our communities. As this technology becomes more accessible, we take these types of crimes seriously and will take action to protect those who are targeted in our community, including both private citizens and the public safety professionals who work every day to keep our residents safe."

Read More
US appeals court fines lawyers $30,000 in latest AI-related sanction

Incident 1447: Sixth Circuit Sanctioned Lawyers in Whiting v. City of Athens over Alleged Fake Appellate Citations in Briefs Reportedly Bearing Hallmarks of Hallucinations

“US appeals court fines lawyers $30,000 in latest AI-related sanction”
reuters.com, 2026-04-11

March 16 (Reuters) - An appeal containing fake case citations that misrepresent the law can be dismissed as frivolous, a U.S. federal appeals court panel said in a decision sanctioning two attorneys who submitted filings that bore hallmarks of artificial intelligence "hallucinations."

The Cincinnati-based 6th U.S. Circuit Court of Appeals said in its order on Friday that attorneys Van Irion and Russ Egli "sullied the reputation of our bar, which now must litigate under the cloud of their conduct."

The court said it found more than two dozen fake citations and misrepresentations of fact in the appeal, which involved an incident at a fireworks show hosted by the city of Athens, Tennessee.

The appeals court in a prior order in the case asked the attorneys how they vetted their briefs for accuracy and whether they used generative AI to write the filings. The attorneys did not answer the court's questions about AI, and instead challenged the lawfulness of the order.

The two attorneys must reimburse Athens for its legal work on the appeal and must each pay $15,000 to the appeals court as a punitive sanction, according to the order.

Egli and Irion in a statement on Tuesday said they "categorically" deny the court's allegations of citing fake cases, and also contend they were denied a meaningful chance to respond to the panel's questions.

"We are pursuing all available legal remedies to challenge this procedurally deficient order and defend the integrity of the judicial process," the lawyers' statement said. Irion told Reuters that "the Circuit Court is ignoring its own rules, and clerks are signing substantive orders without authorization."

Athens Mayor Larry Eaton in a statement on Monday said the appeals court in a related order upheld the dismissal of several lawsuits against the city over the 2022 fireworks event. Eaton called the decision "reassuring."

The sanctions decision comes as more courts grapple with fake case citations and other errors attributable to generative artificial intelligence platforms, which sometimes fabricate information. Lawyers are not prohibited from using AI tools but are bound to safeguard the accuracy of their submissions, and dozens of attorneys have been sanctioned in recent years for submitting AI-generated material that they failed to vet.

Irion and Egli had contested the appeals court's demand for details about how they prepared their filings, partly on the grounds that doing so would violate protections for attorneys' work product and communications with clients.

The 6th Circuit panel, Circuit Judges John Bush, Jane Branstetter Stranch and Eric Murphy, said "whether and how the briefs were cite-checked does not implicate conversations regarding legal advice."

"Most litigants caught submitting fake cases have apologized and sought forgiveness, rightly recognizing the seriousness of their misconduct," Bush wrote for the panel.

The judges said by contrast that "Irion and Egli scolded this court and accused it of engaging in a vast conspiracy to harass them."

The case is Whiting v. City of Athens, 6th U.S. Circuit Court of Appeals, No. 25-5424.

Read More
Quick Add New Report URL

Submitted links are added to a review queue to be resolved to a new or existing incident record. Incidents submitted with full details are processed before URLs submitted without them.
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports; submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026

By Daniel Atherton

2026-02-02

Le Front de l'Yser (Flandre), Georges Lebacq, 1917. Trending in the AIID: Between the beginning of November 2025 and the end of January 2026...

Read More
The Database in Print

Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇

    Daniel Atherton

    768
  • 🥈

    Anonymous

    157
  • 🥉

    Khoa Lam

    93
Reports added to Existing Incidents
  • 🥇

    Daniel Atherton

    872
  • 🥈

    Anonymous

    243
  • 🥉

    Khoa Lam

    230
Total Report Contributions
  • 🥇

    Daniel Atherton

    2997
  • 🥈

    Anonymous

    934
  • 🥉

    Khoa Lam

    417
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.

Organization Founding Sponsor
Database Founding Sponsor
Sponsors and Grants
In-Kind Sponsors

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2026 - AI Incident Database

  • Terms of use
  • Privacy Policy