
Welcome to the AI Incident Database

Incident 1450: Florida Man Allegedly Used Purported Deepfake Video to Report Break-In of Deputy's Patrol Vehicle in Lake Mary

“Florida Man Allegedly Used AI-Generated Deepfake Video to Falsely Report Break-In of Deputy’s Patrol Vehicle in Lake Mary” (Latest Incident Report)
clickorlando.com, 2026-04-11

SEMINOLE COUNTY, Fla. -- A South Florida man was arrested after showing an A.I.-generated video to a deputy at a store in Lake Mary last month, according to the Seminole County Sheriff's Office.

In a release, the SCSO said the deputy was inside Academy Sports along Lake Emma Road when the man, identified as Alexis Martínez-Arizala, 25, approached him.

"(Martínez-Arizala) claimed he had observed multiple people entering the deputy's marked patrol vehicle in the parking lot and presented a video on his cell phone as evidence," the release reads. "The video, approximately three seconds in length, appeared to show two individuals entering the patrol vehicle parked near the store."

But when the deputy checked his patrol car, he found that nothing had been disturbed or stolen, investigators noted.

Instead, store surveillance footage showed that no one had actually approached the patrol car during that timeframe, and deputies later concluded that the cell phone video had been fabricated.

Thus, a warrant was issued for Martínez-Arizala's arrest, and he was taken into custody on Wednesday after being located in San Juan, Puerto Rico.

"Investigators also learned Martínez-Arizala posted content related to the encounter on his social media accounts in an apparent attempt to gain attention and create viral content," the release continues.

Now, Martínez-Arizala faces charges of fabricating physical evidence, making a false report to law enforcement, unlawful use of a two-way communication device, and knowingly giving false information to a law enforcement officer concerning the alleged commission of a crime.

He is set to be extradited to Seminole County, where he'll be held on $7,000 bond.

"The misuse of artificial intelligence to create deepfake videos is a growing concern, particularly when it targets public safety professionals," Sheriff Dennis Lemma said. "These fabricated videos can damage reputations, create unnecessary tensions, and raise real safety concerns for the first responders who serve our communities. As this technology becomes more accessible, we take these types of crimes seriously and will take action to protect those who are targeted in our community, including both private citizens and the public safety professionals who work every day to keep our residents safe."

Incident 1447: Sixth Circuit Sanctioned Lawyers in Whiting v. City of Athens over Alleged Fake Appellate Citations in Briefs Reportedly Bearing Hallmarks of Hallucinations

“US appeals court fines lawyers $30,000 in latest AI-related sanction”
reuters.com, 2026-04-11

March 16 (Reuters) - An appeal containing fake case citations that misrepresent the law can be dismissed as frivolous, a U.S. federal appeals court panel said in a decision sanctioning two attorneys who submitted filings that bore hallmarks of artificial intelligence "hallucinations."

The Cincinnati-based 6th U.S. Circuit Court of Appeals said in its order on Friday that attorneys Van Irion and Russ Egli "sullied the reputation of our bar, which now must litigate under the cloud of their conduct."

The court said it found more than two dozen fake citations and misrepresentations of fact in the appeal, which involved an incident at a fireworks show hosted by the city of Athens, Tennessee.

The appeals court in a prior order in the case asked the attorneys how they vetted their briefs for accuracy and whether they used generative AI to write the filings. The attorneys did not answer the court's questions about AI, and instead challenged the lawfulness of the order.

The two attorneys must reimburse Athens for its legal work on the appeal and must each pay $15,000 to the appeals court as a punitive sanction, according to the order.

Egli and Irion in a statement on Tuesday said they "categorically" deny the court's allegations of citing fake cases, and also contend they were denied a meaningful chance to respond to the panel's questions.

"We are pursuing all available legal remedies to challenge this procedurally deficient order and defend the integrity of the judicial process," the lawyers' statement said. Irion told Reuters that "the Circuit Court is ignoring its own rules, and clerks are signing substantive orders without authorization."

Athens Mayor Larry Eaton in a statement on Monday said the appeals court in a related order upheld the dismissal of several lawsuits against the city over the 2022 fireworks event. Eaton called the decision "reassuring."

The sanctions decision comes as more courts grapple with fake case citations and other errors attributable to generative artificial intelligence platforms, which sometimes fabricate information. Lawyers are not prohibited from using AI tools but are bound to safeguard the accuracy of their submissions, and dozens of attorneys have been sanctioned in recent years for submitting AI-generated material that they failed to vet.

Irion and Egli had contested the appeals court's demand for details about how they prepared their filings, partly on the grounds that doing so would violate protections for attorneys' work product and communications with clients.

The 6th Circuit panel, Circuit Judges John Bush, Jane Branstetter Stranch and Eric Murphy, said "whether and how the briefs were cite-checked does not implicate conversations regarding legal advice."

"Most litigants caught submitting fake cases have apologized and sought forgiveness, rightly recognizing the seriousness of their misconduct," Bush wrote for the panel.

The judges said by contrast "Irion and Egli scolded this court and accused it of engaging in a vast conspiracy to harass them."

The case is Whiting v. City of Athens, 6th U.S. Circuit Court of Appeals, No. 25-5424.

Incident 1448: Ohio Man Pleaded Guilty after Prosecutors Alleged He Used AI to Create and Distribute Nonconsensual Intimate-Image Forgeries Including CSAM in Harassment Campaign

“Ohio man becomes first to be convicted under new AI statute for sexually explicit images”
theguardian.com, 2026-04-11

An Ohio man pleaded guilty on Tuesday to cybercrimes involving real and AI-generated "sexually explicit images", becoming what the Department of Justice claims is the first person convicted under a new federal AI statute.

James Strahler II, 37, admitted to cyberstalking, producing obscene visual representations of child sexual abuse, and publication of digital forgeries. The last charge relates to the Take It Down Act, which "prohibits non-consensual online publication of intimate visual depictions and AI forgeries".

"We believe Strahler is the first person in the United States to be convicted under the Take It Down Act," Dominick Gerace II, US attorney for the southern district of Ohio, said. "We will not tolerate the abhorrent practice of posting and publicizing AI-generated intimate images of real individuals without consent."

"We are committed to using every tool at our disposal to hold accountable offenders like Strahler, who seek to intimidate and harass others by creating and circulating this disturbing content."

Donald Trump signed the Take It Down Act into law last May. Melania Trump, the first lady, lobbied lawmakers to pass the legislation and symbolically signed it.

The law prohibits anyone from "knowingly" publishing or threatening to publish intimate images, including AI-made "deepfake" images, without consent. Social media companies and websites must remove violating content within 48 hours following a victim's request.

Prosecutors said Strahler sent harassing messages to at least six adult females, including both real and AI-created nude images of them, from December 2025 to June 2025.

Strahler purportedly used AI to make pornographic videos showing at least one adult victim engaging in sexual activity with her father and "distributed those videos to the victim's co-workers".

Strahler, according to prosecutors, sent messages to the mothers of these women and demanded nude pictures of them, "threatening to circulate explicit or obscene images he created of their daughters if they did not comply".

"He often called the victims and left voicemails of him masturbating or threatening rape," prosecutors said.

Prosecutors also said Strahler published AI-generated obscene material involving children, "using the faces of minor boys from his community".

He would then purportedly put the minors' faces onto the bodies of adults or other children and make obscene videos with AI. In total, prosecutors said, Strahler created "more than 700 images of both real victims and animated persons and posted them to a website dedicated to child sexual abuse".

Strahler's lawyer did not immediately respond to a request for comment.

Incident 1449: Delaware Court Found Krafton Followed Most of ChatGPT's Recommendations in Campaign that Wrongfully Terminated Unknown Worlds Executives and Seized Operational Control

“US court rules against S Korean gaming company and its AI-hatched takeover plan”
reuters.com, 2026-04-11

WILMINGTON, Delaware, March 16 (Reuters) - A Delaware judge ordered on Monday that South Korean game developer Krafton Inc (259960.KS) reinstate the head of one of its video game studios, ruling he had been improperly removed as part of a takeover plan hatched by ChatGPT.

Krafton's CEO Changhan Kim had largely followed the advice of AI tool ChatGPT during a $250 million dispute with the leaders of the "Subnautica" game maker Unknown Worlds Entertainment, which Krafton had acquired, according to the ruling by Vice Chancellor Lori Will of the Court of Chancery in Delaware.

Businesses and governments are scrambling for new ways to use artificial intelligence, and the technology has been blamed for mass layoffs, fears of autonomous weapons and concerns about civil rights. Companies caught in takeover-related legal battles often spend millions of dollars on teams of attorneys and advisors from top-flight Wall Street firms.

Krafton said in a statement that it disagreed with the ruling, was evaluating its options, and remained focused on delivering the best possible game for fans. The company said it was working "tirelessly" to strengthen the "Subnautica" sequel and prepare it for early access release.

Attorneys for the studio leadership did not immediately respond to a request for comment.

The dispute stems from Krafton's acquisition of Unknown Worlds Entertainment for $500 million up front in 2021. Krafton agreed the studio would remain independent and that its leadership, co-founders Charlie Cleveland and Max McGuire and CEO Ted Gill, would retain operational control and could only be fired for cause, according to Will's ruling. If the company met certain targets, Krafton would pay what is known as an earnout worth up to $250 million.

As the studio last year was ramping up to release "Subnautica 2," internal projections showed it would trigger the earnout, according to the ruling. Krafton's CEO Kim feared he was caught in a "pushover" deal and in June turned to ChatGPT to get out of it.

"Over the next month, Krafton followed most of ChatGPT's recommendations," Will wrote in her opinion.

As the chatbot suggested, the company formed an internal task force to negotiate a new deal or execute a takeover of the studio. It also outlined specific actions, including a communications strategy focused on fan trust, securing publishing rights over "Subnautica 2" and preparing "systematic material of legal defense."

Unable to get the leadership to renegotiate the earnout, Krafton removed them, alleging they deceived the company about the diminishing amount of time they were spending at the studio, a claim that the judge rejected.

Will ordered that operational control be returned to Gill, the CEO of the studio. She also extended the period in which the earnout criteria could be met.

Incident 1445: Lower Saxony CDU Employee Allegedly Shared Sexualized Purported Deepfake of Colleague in Internal WhatsApp Group

“'We would have liked to take a clearer stance' – This is how the CDU internally explains the deepfake affair”
welt.de, 2026-04-06

The Lower Saxony CDU is reeling from a deepfake scandal. State chairman Sebastian Lechner has addressed members in a letter. He wants to "communicate proactively," but for crucial reasons, this is not possible.

In the affair surrounding the alleged creation of a sexualized AI video within the Lower Saxony CDU parliamentary group, state chairman Sebastian Lechner has addressed party members in a letter. In the internal letter, which WELT has obtained, Lechner describes the revelation of the allegations as "shocking news for all of us," leaving them "speechless."

Nevertheless, the party "proactively communicated" the matter and published a statement. "We would have liked to have taken a clearer stance on the matter and provided further background information," Lechner continued, "but this was not possible for legal reasons."

A media law firm has been engaged, and the decision has been made, "to protect the personal rights of all involved," to refrain from issuing any further statements for the time being. The reason: the parliamentary group, as an employer, has a duty of care towards its employees, "even if misconduct may have occurred."

Regarding further action, Lechner writes: "Everything must be investigated transparently and completely." They will cooperate fully with the justice system. "We will continue to draw all necessary conclusions. To be perfectly clear: There is zero tolerance for misogynistic thinking and behavior in the CDU," says Lechner. There are "obviously still shortcomings" on the path to developing the Lower Saxony CDU into a "modern and innovative party"—the current scandal has "made that more than clear."

Now he wants to "use the crisis as an opportunity," says Lechner, without elaborating. For this, "the expertise of the many women" in the party will be needed, in particular.

The employee in question was dismissed without notice.

The background to the affair: On the afternoon of January 17, 2026, a message was received on the mobile phones of several employees of the Lower Saxony CDU parliamentary group. In a shared private WhatsApp chat group, a senior employee of the parliamentary group posted an AI-generated short video depicting another senior female employee of the group in a clearly sexualized manner—a so-called deepfake.

Two months later, this message triggered a political scandal in the state parliament in Hanover. According to information obtained by WELT, parliamentary group leader Sebastian Lechner and parliamentary managing director Carina Hermann only recently learned of the video and cut short their Easter holidays. After several crisis meetings in Hanover, the employee who apparently created the video was suspended the same day and has since been dismissed without notice. He did not respond to multiple inquiries from WELT.

A spokesperson for the Hanover public prosecutor's office confirmed the incident and a review of the matter to the German Press Agency (dpa). However, no formal investigation has yet been launched.

The public prosecutor's office has reportedly reviewed the short video. "There's a woman dancing in a bikini. This woman looks like a staff member of the CDU parliamentary group," said a spokesperson for the authority. The video is "clearly an AI-generated montage." It is suspected that a real image of the woman was digitally inserted into the video. Software exists that makes this possible.

An investigation against the alleged creator of the video, an employee of the CDU parliamentary group, is not currently underway, the spokesperson added. There is no evidence of defamation in the video itself – however, there is suspicion of a violation of copyright law if the woman's photo was altered. This is a so-called private prosecution offense, the public prosecutor explained. The public prosecutor's office can only take action if a formal complaint is filed.

So far, no formal complaint has been filed, the spokesperson said. The three-month period for filing such a complaint has not yet expired – the period begins from the moment the affected party becomes aware of the content. Should a formal complaint be filed by then, the public prosecutor's office will review the case again, it was stated.

Quick Add New Report URL

Submitted links are added to a review queue and resolved to a new or existing incident record. Submissions that include full incident details are processed before bare URLs.
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings.

AI Incident Roundup – November and December 2025 and January 2026

By Daniel Atherton

2026-02-02

Le Front de l'Yser (Flandre), Georges Lebacq, 1917. Trending in the AIID: Between the beginning of November 2025 and the end of January 2026...

The Database in Print

Read about the database in Time Magazine, Vice News, VentureBeat, Wired, the Bulletin of the Atomic Scientists, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 763
  • 🥈 Anonymous: 157
  • 🥉 Khoa Lam: 93

Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 867
  • 🥈 Anonymous: 243
  • 🥉 Khoa Lam: 230

Total Report Contributions
  • 🥇 Daniel Atherton: 2991
  • 🥈 Anonymous: 934
  • 🥉 Khoa Lam: 417
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents

  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative is governed through participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.
