Welcome to the AI Incident Database


Incident 1207: Purported AI-Generated Deepfake of Irish Fine Gael Presidential Candidate Heather Humphreys Used in Fake Investment Videos on Meta Platforms

“Mayo scam alert: Fraudsters cloning presidential election candidate in fake investment videos” (Latest Incident Report)
con-telegraph.ie, 2025-09-21

Fraudsters are already using deepfake technology to clone the image and voice of presidential election candidate Heather Humphreys in fake investment videos circulating online.

Bank of Ireland is warning consumers to watch out for more of these adverts, mainly originating via Meta platforms, over the coming weeks.

The video falsely portrays Heather Humphreys endorsing a high-return investment scheme, with the content entirely fabricated using AI-driven voice and image cloning.

Nicola Sadlier, Head of Fraud at Bank of Ireland, said: “This is deeply concerning. We are witnessing an ongoing spate of scams that exploit public trust in well-known figures.

"These videos are highly convincing, designed to lure unsuspecting individuals into fraudulent schemes. I would urge the public to remain vigilant, there may be more of these in the coming weeks. If you see such content, do not engage.

“As the EU explores new initiatives to encourage consumers to invest, it must also confront the growing wave of online scams that threaten to undermine public confidence.

"One critical step is holding social media platforms accountable. Before financial services adverts go live, platforms should be required to verify that the advertiser is authorised by a recognised regulatory body.

"This simple check could block thousands of scam promotions from ever reaching the public.”

Consumers are advised to ignore unsolicited investment offers online and to always verify financial products through their bank or a trusted financial advisor. Make sure the investment firm is regulated by checking the Central Bank of Ireland register (ROI). If a firm is not listed, do not invest. Bank of Ireland offers 24/7 fraud telephone support for customers on 1800 946 764, available every day of the year.


Incident 1251: Purportedly AI-Generated Hunting Regulation Errors Reportedly Lead to Idaho Citation and Multi-State Warnings from Wildlife Agencies

“Ain’t nothing like the real thing: Officials warn against using AI for hunting regs”
spokesman.com, 2025-10-29

There is at least one thing artificial intelligence is not ready to replace.

The humble hunting regulation booklet.

Earlier this month, the Idaho Department of Fish and Game issued a warning to hunters that they should not rely on AI to interpret the state’s hunting and fishing regulations, and that game wardens had encountered people in the field who had been fed the wrong information by AI.

Roger Phillips, a spokesperson for Idaho Fish and Game, said Friday that the incident that pushed the agency to issue a warning came in southeast Idaho, where a waterfowl hunter was caught hunting a day early.

When confronted by wardens, the hunter said he had gotten the opening date from an online search.

Phillips said wardens found that the search gave him an AI answer that pulled information from a failed proposal, not the agency’s final decision or its regulations book.

That got Idaho Fish and Game staff interested in what else AI had wrong. They found a handful of examples where the robots fell short. Among them: A search for rules on a specific river in Idaho turned up results for a river with a similar name in Arkansas.

Therein lies the problem, Phillips said.

No matter the parameters of the search, you cannot always trust that AI will pick the right source from the infinite options available in cyberspace.

“It casts a really wide net,” he said.

Wildlife officials in other states have also noticed AI problems. Officials in places like Illinois and Maine have warned people against using AI to answer questions about hunting rules.

Just this week, the Wyoming Game and Fish Department said in a news release that the agency has been alerted to “a number of inaccurate AI-generated search results.”

Several of the errors stemmed from failed bills listed on the Wyoming State Legislature’s website, according to the release. Search results created by AI referenced some failed bills as if they had passed and become law.

Aaron Kerr, Wyoming Game and Fish’s law enforcement supervisor, said in the release that relying on bad information does not absolve anyone from the consequences of violating hunting or fishing regulations.

“Hunters and anglers are accountable for following published rules and regulations,” Kerr said. “Failure to follow these carries the potential for fines, jail time, or suspension of hunting and fishing privileges.”

Alan Myers, the Washington Department of Fish and Wildlife’s police captain for Eastern Washington, said he has not heard of any cases of violations involving AI interpretations of Washington’s hunting regulations.

The internet’s stockpile of misinformation predates the rise of AI, though. Myers said there have been cases in the past where a hunter or angler’s defense for a violation was a bad search result or something they’d read on social media.

Phillips blames all of this on the human urge to solve any problem with a Google search.

“When I want to know if a restaurant is about to close, I just Google it,” Phillips said. “We get in the habit of doing these things, and then when we go hunting and fishing we might carry those habits with us.”

Google introduced its AI tool in 2024 and began offering AI-generated overviews in response to many searches. The overviews offer quick answers, but not always the right ones, and not always with the sort of granular detail embedded in hunting and fishing rules.

“When you have a 125-page big game regs booklet that covers controlled hunts and general hunts and archery seasons and muzzleloader seasons and rifle seasons,” Phillips said, “you could see where that would overwhelm AI pretty quickly.”

There is at least one AI tool meant to help people navigate the complexities of hunting regulations.

Called Scout, the tool was created by the International Hunter Education Association. It launched last year, and it lets people ask a chatbot specific questions about hunting rules and regulations in a particular state.

Users pick a state from a dropdown menu and then ask their question. The bot combs rulebooks for a given state, and then gives an answer. The answer also cites the pages the information came from. The booklet itself appears in a PDF viewer below the answer, and the site also lets users download PDFs for any state.
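
As described, Scout follows a retrieval pattern: limit the search to one state's official rulebook, rank its pages against the question, and cite the page the answer came from. Here is a minimal sketch of that pattern; the data, scoring, and names are hypothetical stand-ins, not Scout's actual corpus or code.

```python
# Minimal sketch of a state-scoped rulebook Q&A lookup like the one
# described above: rank official regulation pages by keyword overlap
# with the question and always cite the source page. All data and
# names here are hypothetical, not Scout's real corpus or code.

RULEBOOKS = {
    "Idaho": {
        12: "Waterfowl season opens October 18 in Area 1 and October 25 in Area 2.",
        47: "Muzzleloader-only deer seasons require a muzzleloader permit validation.",
    },
    "Wyoming": {
        8: "Resident elk licenses are issued through the limited-quota draw.",
    },
}

def answer(state: str, question: str) -> str:
    pages = RULEBOOKS.get(state)
    if not pages:
        return f"No rulebook loaded for {state}."
    q_terms = set(question.lower().replace("?", "").split())
    # Score each page by how many question terms it contains.
    best_page, best_score = None, 0
    for page_no, text in pages.items():
        score = sum(term in text.lower() for term in q_terms)
        if score > best_score:
            best_page, best_score = page_no, score
    if best_page is None:
        return "No matching rule found; consult the official booklet."
    # Always cite the page so the user can verify against the PDF.
    return f"{pages[best_page]} (see {state} regulations, p. {best_page})"

print(answer("Idaho", "When does waterfowl season open?"))
```

The design point is the state dropdown: scoping retrieval to one official document sidesteps the failure Phillips describes, where an open web search pulls in a similarly named river in Arkansas or a failed proposal.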

Getting people to use the official source is the only solution. Even the sometimes-faulty AI search results usually include a disclaimer urging people to consult regulation booklets.

No one booklet covers everything a hunter or angler needs to know. Separate publications are created each year for fishing, big game, migratory birds, turkeys and more.

They are free, though, and easy to find online and in real life.

“Any place where licenses and tags are sold,” Phillips said.


Incident 1252: Judges in New Jersey and Mississippi Admit AI Tools Produced Erroneous Federal Court Filings

“Federal judges using AI filed court orders with false quotes, fake names”
washingtonpost.com, 2025-10-29

Two federal judges in New Jersey and Mississippi admitted this month that their offices used artificial intelligence to draft factually inaccurate court documents that included fake quotes and fictional litigants --- drawing a rebuke from the head of the Senate Judiciary Committee.

"I've never seen or heard of anything like this from any federal court," Sen. Chuck Grassley (R-Iowa) said in a Senate floor speech Monday.

The committee announced Thursday that the judges, Henry T. Wingate of the Southern District of Mississippi and Julien Xavier Neals of the District of New Jersey, admitted that their offices used AI in preparing the mistake-laden filings in the summer. They attributed the mistakes to a law clerk and a law school intern, respectively, according to letters the judges sent in response to a Senate inquiry.

Both faulty court documents were docketed and had to be hastily retracted after defendants alerted the judges to the errors. Neither judge explained the cause of the errors until the committee contacted them.

The use of generative artificial intelligence has become more common in the U.S. judicial system. Wingate and Neals join scores of lawyers and litigants who have been rebuked for using AI to produce legal filings strewn with errors.

Legal groups are still catching up. The Administrative Office of the U.S. Courts, which supports the federal court system, issued interim guidance in July that suggests users "consider whether the use of AI should be disclosed" in judicial functions. It has also established a task force to issue additional guidance on AI use in federal courts.

Grassley said Monday that federal courts need to establish rules on AI use in litigation.

"I call on every judge in America to take this issue seriously and formalize measures to prevent the misuse of artificial intelligence in their chambers," he said.

Wingate and Neals said in their letters that they took corrective measures after being alerted to the mistakes and will implement additional reviews of court filings before they are submitted. Neals said he established a written policy in his chambers prohibiting the use of generative AI in legal research or drafting court filings.

Wingate did not immediately respond to a request for comment. Neals's office declined to comment.

Wingate, whom President Ronald Reagan appointed to the court in 1985, was overseeing a case brought by the Jackson Federation of Teachers and other advocacy groups against the Mississippi State Board of Education and other state bodies. The suit challenged a state law banning public schools from teaching "transgender ideology" and "diversity training" on topics of race, gender and sexual orientation.

On July 20, Wingate granted a temporary restraining order that blocked the state from enforcing parts of the ban. Two days later, in a motion to clarify, Mississippi attorneys said Wingate's order was replete with errors.

The order named several plaintiffs and defendants, including a college sorority, a Mississippi parent, students and government officials, who were not parties to the case, according to the Mississippi attorneys' response. The order described allegations that did not appear in the plaintiffs' complaint and attributed false quotes to the bill being blocked, the attorneys noted. It also cited declarations, purportedly submitted in support of the restraining order, that did not exist.

Wingate's office issued a corrected restraining order that evening and told the parties to disregard the previous one. The case is ongoing; Wingate granted a preliminary injunction against the bill in August that Mississippi attorneys appealed.

Neals, who was appointed by President Joe Biden in 2021, issued an opinion with errors in a federal securities class-action lawsuit against CorMedix, a pharmaceutical company, over allegations that it misled investors about a medical product.

On June 30, Neals denied a CorMedix motion to dismiss the lawsuit. About a month later, attorneys for CorMedix wrote that Neals's opinion contained fabricated cases and nonexistent quotes from real cases it cited in support of his ruling. It misstated the outcomes of cases and whether appeals of motions to dismiss were granted. It also attributed false quotes to CorMedix, according to the letter.

Neals's opinion was also submitted as "supplemental authority" in support of another class-action lawsuit, whose defendants also raised the issues with his filing, the letter said.

Neals said the opinion was entered in error and removed it from the court docket. The case is ongoing.

The mistakes in both judges' orders were similar to those caused by AI hallucinations --- where generative AI, which produces text by predicting what words follow each other from an analysis of written content, confidently invents facts and false citations --- and observers quickly speculated that the errors had come from AI use.
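
The mechanism is straightforward to illustrate. Below is a toy sketch (hypothetical training strings and a deliberately simple bigram model, nothing like a production LLM) showing how pure next-word prediction can stitch fragments of real citations onto a case name it has never seen, yielding a fluent citation for a case that does not exist.

```python
# Toy illustration of the failure mode described above: a model that only
# predicts which word likely follows which can attach fragments of real
# citations to a case it has never seen, producing a fluent but
# nonexistent citation. (Hypothetical data; a real LLM is far more
# complex, but the mechanism is analogous.)
from collections import Counter, defaultdict

training_citations = [
    "Smith v. Jones , 310 F.2d 100 ( 1962 )",
    "Smith v. Allwright , 321 U.S. 649 ( 1944 )",
]

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for cite in training_citations:
    words = cite.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

# Greedily continue a prompt the model has no data for: "Doe v." never
# appears in training, yet generation proceeds confidently anyway.
generated = ["Doe", "v."]
for _ in range(10):
    candidates = follows[generated[-1]].most_common(1)
    if not candidates:
        break
    generated.append(candidates[0][0])

print(" ".join(generated))
# -> "Doe v. Jones , 310 F.2d 100 ( 1962 )": fluently formatted,
#    official-looking, and a citation to a case that does not exist.
```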

At first, facing questions from lawyers and litigants, neither judge admitted that the errors were AI-related. Grassley, in his Monday speech, called their "lack of transparency ... breathtaking."

The Senate Judiciary Committee wrote to Neals and Wingate in early October inquiring about the mistakes, it said. Both judges said in their responses that the errors were attributable to AI but that the filings were drafts that were mistakenly published before review.

A law clerk in Wingate's office used the Perplexity AI tool as a "foundational drafting assistant" to synthesize publicly available information on the court docket, Wingate wrote. A law school intern for Neals used ChatGPT to perform legal research, Neals wrote.

(The Washington Post has partnerships with Perplexity and ChatGPT's creator, OpenAI.)

"I manage a very busy docket and strive to maintain the public's trust by administering justice in a fair and transparent manner," Wingate wrote. "Given that I hold myself and my staff to the highest standards of conduct, I do not expect that a mistake like this one will occur in the future."

"While my experience in the CorMedix case was most unfortunate and unforeseeable, I hope that, at the very least, it will inform the [Administrative Office of the Courts] Task Force's continuing work and ultimately lead to new meaningful policies for all federal courts," Neals wrote.


Incident 1250: Alleged False Positive by Omnilert AI Gun Detection System Prompts Police Search at Baltimore County High School

“US student handcuffed after AI system apparently mistook bag of chips for gun”
theguardian.com, 2025-10-24

An artificial intelligence (AI) system apparently mistook a high school student's bag of Doritos for a firearm and called local police to tell them the pupil was armed.

Taki Allen was sitting with friends on Monday night outside Kenwood high school in Baltimore and eating a snack when police officers with guns approached him.

"At first, I didn't know where they were going until they started walking toward me with guns, talking about, 'Get on the ground,' and I was like, 'What?'" Allen told the WBAL-TV 11 News television station.

Allen said the officers made him get on his knees, then handcuffed and searched him, finding nothing. They then showed him a copy of the picture that had triggered the alert.

"I was just holding a Doritos bag -- it was two hands and one finger out, and they said it looked like a gun," Allen said.

Baltimore county high schools last year began using a gun detection system that pairs school cameras with AI to spot potential weapons. If it flags something it believes to be suspicious, it sends an alert to the school and law enforcement officials.
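
Pipelines of this kind generally reduce to a thresholded classifier: a detector scores each camera frame, and any score above a cutoff alerts both the school and law enforcement, so a single high-confidence misclassification escalates immediately. A minimal hypothetical sketch of that flow follows; the threshold, names, and hard-coded detection are illustrative assumptions, not Omnilert's actual system.

```python
# Hypothetical sketch of a threshold-based weapon-alert pipeline like the
# one described above (NOT Omnilert's actual system). The point it
# illustrates: one frame scored above the cutoff is enough to notify both
# the school and police, so a single false positive escalates immediately.
from dataclasses import dataclass

ALERT_THRESHOLD = 0.80  # assumed cutoff; real values are vendor-tuned

@dataclass
class Detection:
    camera_id: str
    label: str        # what the model thinks it saw
    confidence: float

def detector(frame) -> Detection:
    # Stand-in for the real model: here we hard-code the reported failure,
    # a crumpled chip bag held in two hands scored as a firearm.
    return Detection(camera_id="courtyard-3", label="handgun", confidence=0.91)

def notify(recipient: str, det: Detection) -> None:
    print(f"ALERT to {recipient}: possible {det.label} "
          f"on {det.camera_id} (confidence {det.confidence:.2f})")

def process_frame(frame) -> None:
    det = detector(frame)
    if det.confidence >= ALERT_THRESHOLD:
        # No human review gate in this sketch: the alert goes out immediately.
        notify("school safety office", det)
        notify("local police dispatch", det)

process_frame(frame=None)  # placeholder; a real system reads camera feeds
```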

In a letter to school families obtained by WBAL TV 11 News, the school wrote: "We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident. Our counselors will provide direct support to the students who were involved in this incident and are also available to speak with any student who may need support."

Baltimore county police told the outlet: "Officers assigned to Precinct 11-Essex responded to Kenwood High School following a report of a suspicious person with a weapon. Once on scene, the person was searched and it was determined the subject was not in possession of any weapons."

Lamont Davis, Allen's grandfather, told the television station: "Nobody wants this to happen to their child. No one wants this to happen."

 This article was amended on 24 October 2025. An earlier version displayed a picture of a Baltimore city police vehicle, although Baltimore county does not include the city of Baltimore.


Incident 1247: Meta AI Reportedly Generated Purportedly False Claims Linking Activist Robby Starbuck to January 6th Riot, Prompting Defamation Lawsuit

“Starbuck v. Meta”
dhillonlaw.com, 2025-10-22

DOVER, DELAWARE — April 28, 2025 — Today, Dhillon Law Group, on behalf of Robby Starbuck, an American filmmaker, journalist, and activist, filed a defamation lawsuit against Meta Platforms, Inc. (“Meta”) in Delaware Superior Court. The lawsuit alleges that Meta’s artificial intelligence tool, Meta AI, repeatedly published—and continues to publish—provably false and defamatory statements falsely accusing Starbuck of participating in the January 6th Capitol riot and having been arrested for a misdemeanor.

Starbuck, who was at home in Tennessee on January 6, 2021, had never been accused of any crime—let alone the criminal activity Meta AI falsely attributed to him. Despite Starbuck’s direct pleas for Meta to retract and correct the falsehoods, Meta refused to take responsibility. Even after acknowledging receipt of Starbuck’s formal legal demand in August 2024, Meta allowed its AI to continue defaming him for months, eventually expanding the false narrative to include fabricated allegations such as Holocaust denialism and claims that he was unfit to parent his own children.

“This case underscores the profound impact AI wields over Americans’ lives and the urgent need to hold AI companies accountable for recklessly spreading falsehoods,” Krista Baughman, partner at Dhillon Law Group, said. “Through this lawsuit, we aim to establish a precedent for responsible AI development and oversight, safeguarding Mr. Starbuck and all consumers from the devastating consequences of unchecked AI misinformation.”

“Meta’s AI maliciously defamed me, falsely framing me as a criminal and Holocaust denier and even after being notified, Meta continued its character assassination,” Robby Starbuck said. “This nightmare must end. If Meta wants to settle, they must commit to long-term changes to protect everyone from AI-driven defamation. This case draws a clear line: AI must have guardrails. I urge Mark Zuckerberg and Meta’s leadership to search their souls and fix this for the sake of the next generation. They deserve better.”

Meta’s continued publication of these lies has triggered a series of death threats against Starbuck and his family, serious reputational and career harm, and the loss of critical business opportunities. The lawsuit seeks compensatory and punitive damages, as well as injunctive relief to force Meta to remove false information from its AI systems and prevent further harm.

The threat of AI misinformation continues to mount as Meta prepares to expand its AI product offerings to tens of millions of users across the United States.

Quick Add New Report URL

Submitted links are added to a review queue to be resolved to a new or existing incident record. Submissions that include full incident details are processed before bare URLs.
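
That triage rule amounts to a simple priority queue: fully detailed submissions are dequeued for review before bare URLs. A minimal hypothetical sketch of the rule (not the AIID's actual backend):

```python
# Hypothetical sketch of the triage rule described above (not the AIID's
# actual backend): fully detailed submissions are reviewed before bare URLs.
import heapq
import itertools

queue = []
counter = itertools.count()  # tie-breaker preserves submission order

def submit(url: str, has_full_details: bool) -> None:
    priority = 0 if has_full_details else 1  # lower value = reviewed first
    heapq.heappush(queue, (priority, next(counter), url))

submit("https://example.com/bare-link", has_full_details=False)
submit("https://example.com/full-report", has_full_details=True)

while queue:
    _, _, url = heapq.heappop(queue)
    print("review:", url)
# The fully detailed report prints first even though it arrived second.
```
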
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports; submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – June and July 2025

By Daniel Atherton

2025-08-02

Garden at Giverny, J.L. Breck, 1887. 🗄 Trending in the AIID: Across June and July 2025, the AI Incident Database added over sixty new incident...

The Database in Print

Read about the database on the PAI Blog, Vice News, Venture Beat, Wired, arXiv, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 613
  • 🥈 Anonymous: 152
  • 🥉 Khoa Lam: 93
Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 702
  • 🥈 Khoa Lam: 230
  • 🥉 Anonymous: 225
Total Report Contributions
  • 🥇 Daniel Atherton: 2812
  • 🥈 Anonymous: 958
  • 🥉 Khoa Lam: 456
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is organized around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.

Organization Founding Sponsor
Database Founding Sponsor
Sponsors and Grants
In-Kind Sponsors

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy