
Welcome to the AI Incident Database

US appeals court fines lawyers $30,000 in latest AI-related sanction

Incident 1447: Sixth Circuit Sanctioned Lawyers in Whiting v. City of Athens over Alleged Fake Appellate Citations in Briefs Reportedly Bearing Hallmarks of Hallucinations

Latest Incident Report
reuters.com, 2026-04-11

March 16 (Reuters) - An appeal containing fake case citations that misrepresent the law can be dismissed as frivolous, a U.S. federal appeals court panel said in a decision sanctioning two attorneys who submitted filings that bore hallmarks of artificial intelligence "hallucinations."

The Cincinnati-based 6th U.S. Circuit Court of Appeals said in its order on Friday that attorneys Van Irion and Russ Egli "sullied the reputation of our bar, which now must litigate under the cloud of their conduct."

The court said it found more than two dozen fake citations and misrepresentations of fact in the appeal, which involved an incident at a fireworks show hosted by the city of Athens, Tennessee.

The appeals court in a prior order in the case asked the attorneys how they vetted their briefs for accuracy and whether they used generative AI to write the filings. The attorneys did not answer the court's questions about AI, and instead challenged the lawfulness of the order.

The two attorneys must reimburse Athens for its legal work on the appeal, and also must individually pay $15,000 each to the appeals court as a punitive sanction, according to the order.

Egli and Irion in a statement on Tuesday said they "categorically" deny the court's allegations of citing fake cases, and also contend they were denied a meaningful chance to respond to the panel's questions.

"We are pursuing all available legal remedies to challenge this procedurally deficient order and defend the integrity of the judicial process," the lawyers' statement said. Irion told Reuters that "the Circuit Court is ignoring its own rules, and clerks are signing substantive orders without authorization."

Athens Mayor Larry Eaton in a statement on Monday said the appeals court in a related order upheld the dismissal of several lawsuits against the city over the 2022 fireworks event. Eaton called the decision "reassuring."

The sanctions decision comes as more courts grapple with fake case citations and other errors attributable to generative artificial intelligence platforms, which sometimes fabricate information. Lawyers are not prohibited from using AI tools but are bound to safeguard the accuracy of their submissions, and dozens of attorneys have been sanctioned in recent years for submitting AI-generated material that they failed to vet.

Irion and Egli had contested the appeals court's demand for details about how they prepared their filings partly on the grounds that doing so would violate protections for attorneys' work-product and communications with clients.

The 6th Circuit panel, Circuit Judges John Bush, Jane Branstetter Stranch and Eric Murphy, said "whether and how the briefs were cite-checked does not implicate conversations regarding legal advice."

"Most litigants caught submitting fake cases have apologized and sought forgiveness, rightly recognizing the seriousness of their misconduct," Bush wrote for the panel.

The judges said by contrast "Irion and Egli scolded this court and accused it of engaging in a vast conspiracy to harass them."

The case is Whiting v. City of Athens, 6th U.S. Circuit Court of Appeals, No. 25-5424.

Read More
Ohio man becomes first to be convicted under new AI statute for sexually explicit images

Incident 1448: Ohio Man Pleaded Guilty after Prosecutors Alleged He Used AI to Create and Distribute Nonconsensual Intimate-Image Forgeries Including CSAM in Harassment Campaign

theguardian.com, 2026-04-11

An Ohio man pleaded guilty on Tuesday to cybercrimes involving real and AI-generated "sexually explicit images", becoming what the Department of Justice claims is the first person convicted under a new federal AI statute.

James Strahler II, 37, admitted to cyberstalking, producing obscene visual representations of child sexual abuse, and publication of digital forgeries. The last charge relates to the Take It Down Act, which "prohibits non-consensual online publication of intimate visual depictions and AI forgeries".

"We believe Strahler is the first person in the United States to be convicted under the Take It Down Act," Dominick Gerace II, US attorney for the southern district of Ohio, said. "We will not tolerate the abhorrent practice of posting and publicizing AI-generated intimate images of real individuals without consent."

"We are committed to using every tool at our disposal to hold accountable offenders like Strahler, who seek to intimidate and harass others by creating and circulating this disturbing content."

Donald Trump signed the Take It Down Act into law last May. Melania Trump, the first lady, lobbied lawmakers to pass the legislation and symbolically signed it.

The law prohibits anyone from "knowingly" publishing or threatening to publish intimate images, including AI-made "deepfake" images, without consent. Social media companies and websites must remove violating content within 48 hours following a victim's request.

Prosecutors said Strahler sent harassing messages to at least six adult females, including both real and AI-created nude images of them, from December 2025 to June 2025.

Strahler purportedly used AI to make pornographic videos showing at least one adult victim engaging in sexual activity with her father and "distributed those videos to the victim's co-workers".

Strahler, according to prosecutors, sent messages to the mothers of these women and demanded nude pictures of them, "threatening to circulate explicit or obscene images he created of their daughters if they did not comply".

"He often called the victims and left voicemails of him masturbating or threatening rape," prosecutors said.

Prosecutors also said Strahler published AI-generated obscene material involving children, "using the faces of minor boys from his community".

He would then purportedly put the minors' faces on to the bodies of adults or other children and make obscene videos with AI. In total, prosecutors said, Strahler created "more than 700 images of both real victims and animated persons and posted them to a website dedicated to child sexual abuse".

Strahler's lawyer did not immediately respond to a request for comment.

Read More
"We would have liked to take a clearer stance" – This is how the CDU internally explains the deepfake affair.

Incident 1445: Lower Saxony CDU Employee Allegedly Shared Sexualized Purported Deepfake of Colleague in Internal WhatsApp Group

welt.de, 2026-04-06

The Lower Saxony CDU is reeling from a deepfake scandal. State chairman Sebastian Lechner has addressed members in a letter. He wants to "communicate proactively," but for legal reasons, this is not fully possible.

In the affair surrounding the alleged creation of a sexualized AI video within the Lower Saxony CDU parliamentary group, state chairman Sebastian Lechner has addressed party members in a letter. In the internal letter, which WELT has obtained, Lechner describes the revelation of the allegations as "shocking news for all of us," leaving them "speechless."

Nevertheless, the party "proactively communicated" the matter and published a statement. "We would have liked to have taken a clearer stance on the matter and provided further background information," Lechner continued, "but this was not possible for legal reasons."

A media law firm has been engaged, and the decision has been made, "to protect the personal rights of all involved," to refrain from issuing any further statements for the time being. The reason: the parliamentary group, as an employer, has a duty of care towards its employees, "even if misconduct may have occurred."

Regarding further action, Lechner writes: "Everything must be investigated transparently and completely." They will cooperate fully with the justice system. "We will continue to draw all necessary conclusions. To be perfectly clear: There is zero tolerance for misogynistic thinking and behavior in the CDU," says Lechner. There are "obviously still shortcomings" on the path to developing the Lower Saxony CDU into a "modern and innovative party"—the current scandal has "made that more than clear."

Now he wants to "use the crisis as an opportunity," says Lechner, without elaborating. For this, "the expertise of the many women" in the party will be needed, in particular.

The employee in question was dismissed without notice.

The background to the affair: On the afternoon of January 17, 2026, a message was received on the mobile phones of several employees of the Lower Saxony CDU parliamentary group. In a shared private WhatsApp chat group, a senior employee of the parliamentary group posted an AI-generated short video depicting another senior female employee of the group in a clearly sexualized manner—a so-called deepfake.

Two months later, this message triggered a political scandal in the state parliament in Hanover. According to information obtained by WELT, parliamentary group leader Sebastian Lechner and parliamentary managing director Carina Hermann only recently learned of the video and cut short their Easter holidays. After several crisis meetings in Hanover, the employee who apparently created the video was suspended the same day and has since been dismissed without notice. He did not respond to multiple inquiries from WELT.

A spokesperson for the Hanover public prosecutor's office confirmed the incident and a review of the matter to the German Press Agency (dpa). However, no formal investigation has yet been launched.

The public prosecutor's office has reportedly reviewed the short video. "There's a woman dancing in a bikini. This woman looks like a staff member of the CDU parliamentary group," said a spokesperson for the authority. The video is "clearly an AI-generated montage." It is suspected that a real image of the woman was digitally inserted into the video. Software exists that makes this possible.

An investigation against the alleged creator of the video, an employee of the CDU parliamentary group, is not currently underway, the spokesperson added. There is no evidence of defamation in the video itself – however, there is suspicion of a violation of copyright law if the woman's photo was altered. This is a so-called private prosecution offense, the public prosecutor explained. The public prosecutor's office can only take action if a formal complaint is filed.

So far, no formal complaint has been filed, the spokesperson said. The three-month period for filing such a complaint has not yet expired – the period begins from the moment the affected party becomes aware of the content. Should a formal complaint be filed by then, the public prosecutor's office will review the case again, it was stated.

Read More
“Roll over, you b*tch”… KBS’s live broadcast disaster featuring AI profanity subtitles

Incident 1446: KBS AI Translation Subtitles Reportedly Broadcast Profanity During Artemis II Launch Livestream

mk.co.kr, 2026-04-06

KBS has issued an official apology regarding the incident in which profanity was displayed in subtitles during the live broadcast of the Artemis 2 launch.

On the 2nd, KBS stated on its YouTube community page, "During the live broadcast of NASA, some words were incorrectly translated into profanity due to similar pronunciations during the real-time automatic translation process using AI." The company added, "We sincerely apologize to our viewers for the exposure of incorrect phrases containing profanity."

Previously, KBS used AI automatic translation while live-streaming the Artemis 2 launch on the 2nd. During the automatic translation process, the AI mistranslated aviation technical terms such as "roger" (acknowledging reception), "roll," and "pitch" into the similarly pronounced profanities "Roger, Gulleo, Ix-ah."

The footage of the incident quickly spread online. Netizens reacted with comments such as, “It is shocking that a broadcasting station funded by taxpayers’ money would use subtitles containing profanity,” “Even for a live broadcast, there is a line that must not be crossed,” and “Shouldn’t they have at least put in place some safeguards, even if it is AI translation?”

As the controversy escalated, KBS quickly posted an apology and promised to prevent a recurrence. KBS added, “We have completed measures to prevent the re-exposure of the translation error in question. We are closely consulting with relevant departments and partner companies to prevent similar incidents from happening again and are preparing improvement measures, such as strengthening AI profanity filtering.”

Meanwhile, Artemis 2 was successfully launched from the Kennedy Space Center in Florida, USA, at 6:35 p.m. (Eastern Time) on the 1st. It is scheduled to splash down in the Pacific Ocean near San Diego after flying for approximately 10 days.

Read More
Amazon blames human employees for an AI coding agent’s mistake

Incident 1442: Kiro AI Coding Tool Was Reportedly Implicated in 13-Hour AWS Cost Explorer Outage in Mainland China

theverge.com, 2026-04-05

Amazon Web Services suffered a 13-hour outage to one system in December as a result of its AI coding assistant Kiro's actions, according to the Financial Times. Numerous unnamed Amazon employees told the FT that the AI agent Kiro was responsible for the December incident affecting an AWS service in parts of mainland China. People familiar with the matter said the tool chose to "delete and recreate the environment" it was working on, which caused the outage.

While Kiro normally requires sign-off from two humans to push changes, the bot had the permissions of its operator, and a human error there allowed more access than expected.

Amazon described the December disruption as an "extremely limited event" that pales in comparison to a major outage in October, which took down online services like Alexa, Fortnite, ChatGPT, and Amazon itself for hours. An outage that didn't trap anyone in their smart bed is something of a lucky escape.

It is not the only time AI coding tools have caused problems for Amazon. A senior AWS employee said the December outage is the second production outage linked to an AI tool in the last few months, with another linked to Amazon's AI chatbot Q Developer. The employee described the outages as "small but entirely foreseeable." Amazon said the second incident did not impact a "customer facing AWS service."

Amazon blames human error for the problems, not the rogue bot, and said it has "implemented numerous safeguards" like staff training following the incident. The company said it's a "coincidence that AI tools were involved" and insists that "the same issue could occur with any developer tool or manual action." That's true, and though I'm not an engineer, I'd guess one wouldn't deliberately scrap and rebuild something to make a change in all but the most dire of circumstances.
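The failure mode described above (an agent inheriting its human operator's broad permissions and a destructive change going through without the usual two-person sign-off) can be sketched as a simple authorization guard. This is a hypothetical illustration, not Amazon's implementation; `authorize`, `DESTRUCTIVE`, and the action names are invented for the example.

```python
# Hypothetical sketch: deny an agent's action unless it is explicitly
# granted to the agent itself (not merely inherited from its operator),
# and require two distinct human sign-offs for destructive actions.

DESTRUCTIVE = {"delete_environment", "recreate_environment"}


class ApprovalError(Exception):
    """Raised when an agent action fails the authorization checks."""


def authorize(action: str, agent_permissions: set, approvals: set) -> bool:
    """Return True only if `action` is in the agent's own scoped
    permission set and, when destructive, carries >= 2 distinct
    human approvals."""
    if action not in agent_permissions:
        raise ApprovalError(f"agent lacks explicit permission for {action!r}")
    if action in DESTRUCTIVE and len(approvals) < 2:
        raise ApprovalError(
            f"{action!r} needs two human sign-offs, got {len(approvals)}"
        )
    return True
```

Under this sketch, the December incident corresponds to the second check being bypassed: the delete-and-recreate action would have been blocked with fewer than two named approvers, regardless of how broad the operator's own rights were.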

Read More
Quick Add New Report URL

Submitted links are added to a review queue, where they are resolved to a new or existing incident record. Submissions that include full incident details are processed before bare URLs lacking those details.
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026

By Daniel Atherton

2026-02-02

Le Front de l'Yser (Flandre), Georges Lebacq, 1917 🗄 Trending in the AIID Between the beginning of November 2025 and the end of January 2026...

Read More
The Database in Print

Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 761
  • 🥈 Anonymous: 157
  • 🥉 Khoa Lam: 93

Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 865
  • 🥈 Anonymous: 243
  • 🥉 Khoa Lam: 230

Total Report Contributions
  • 🥇 Daniel Atherton: 3207
  • 🥈 Anonymous: 994
  • 🥉 587
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.

Organization Founding Sponsor
Database Founding Sponsor
Sponsors and Grants
In-Kind Sponsors

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy