
Welcome to the AI Incident Database

Latest Incident Report

Incident 1297: Blogger Milagro Gramz Allegedly Promoted AI-Generated Pornographic Deepfake Targeting Megan Thee Stallion

“Megan Thee Stallion wins defamation suit against blogger” (nbcnews.com, 2025-12-12)

MIAMI --- A federal jury sided with Megan Thee Stallion on Monday and agreed that a blogger defamed and harassed the rapper --- but limited damages to less than six figures.

The federal panel of five men and four women came down in favor of Megan, the artist born Megan Pete, in her lawsuit against blogger Milagro Gramz, whose real name is Milagro Cooper.

The suit was over material Cooper posted after R&B musician Tory Lanez shot Megan five years ago.

Jurors initially awarded Megan $75,000, which U.S. District Judge Cecilia M. Altonaga reduced to $59,000 in a final ruling filed on Tuesday.

"We're thankful for the jury's commitment to reinforcing the importance of truth, accountability and responsible commentary on social media," the plaintiff's attorney Mari Henderson said in a statement.

"Not only is Milagro being held accountable for paying Megan compensatory and punitive damages, but Florida's fee-shifting legal provision will require her to cover costs of Megan's legal bills on the deepfake claim. This verdict sends a clear message that spreading dangerous misinformation carries significant consequences."

The artist didn't answer reporters' questions as she left court on Monday.

Cooper didn't appear to be particularly pleased or distressed by the panel's findings.

"I'm not ecstatic," Cooper said. "Of course you want things to go your way, but like I said, I respect the jury and what they decided."

Her attorney, Jeremy McLymont, said he's grateful that jurors didn't award seven figures in damages.

"We respect the jury's verdict. You always have to, right? Whether we agree with it or not, we respect it," he said.

"At the end of the day it was not a complete win for any side."

Megan accused Cooper of being a "mouthpiece," a "puppet" and a "paid surrogate" for Lanez, who shot her in Los Angeles on July 15, 2020.

Jurors found that Cooper willfully encouraged her thousands of followers on X and Instagram to view a sexually explicit deepfake video of Megan that had been circulating on social media.

"In sum, the jury's Verdict supports entry of judgment for Plaintiff on her claims of intentional infliction of emotional distress and promotion of an altered sexual depiction," Judge Altonaga wrote.

"Plaintiff is entitled to $59,000 in damages --- the amount of compensatory and punitive damages by the jury on those two claims."

Lanez shot Megan in the foot after the pair left a party at Kylie Jenner's home in 2020.

A Los Angeles jury found Lanez, whose real name is Daystar Peterson, guilty of three felony charges on Dec. 23, 2022: assault with a semiautomatic firearm, carrying a loaded, unregistered firearm in a vehicle and discharging a firearm with gross negligence.

He was sentenced to up to 10 years behind bars.

Megan said she has been struggling with mental health issues since the shooting and the harassment from the blogger.

Incident 1301: Purported AI-Generated Sexual Deepfakes Allegedly Deployed in Transnational Harassment Campaign Targeting Hong Kong Exiles

“Hong Kong activist ‘terrified’ as sexually explicit deepfake images sent to UK neighbours” (hongkongfp.com, 2025-12-12)

A Hong Kong activist living in exile in the UK said Thursday she was "shocked and terrified" after discovering her neighbours received letters containing AI-generated sexually explicit images of her.

Carmen Lau, 30, a pro-democracy activist and former councillor in Hong Kong, said she believes the letters, which are being investigated by the police, were the latest attempt by China to intimidate her.

Lau, like several other high-profile Hong Kong refugees in the UK, first found out earlier this year that her neighbours had been sent letters encouraging them to turn her in to the Chinese embassy, offering rewards of tens of thousands of pounds.

On November 11 Joshua Reynolds, her local MP for the town of Maidenhead, west of London, told her he had been informed by residents that "new letters had been placed on their doorsteps," Lau told AFP.

The envelopes bore a stamp from the Chinese region of Macau, she said.

AFP has not seen a copy of the letters.

"There are five images having my face on it and either they're naked or just wearing underwear and like, they are acting or signalling that I am a sex worker providing services," Lau said.

The letters also included personal information such as Lau's name, height, weight and a note that read like an advertisement for a sex worker, "kind of like welcoming people to come to my flat", she said.

"I was shocked and terrified, but at the same time, I was quite angry about it," she said in the telephone interview.

"Even till now I'm quite hesitant to head to the town because, you know, Maidenhead is such a small town, and I have no idea how widely the images were circulated, and how people would think."

She has since moved out of the property, as she did after the first set of letters, but she said the "psychological burden" was the worst thing.

She said it was "psychological warfare to stop us from what we've been doing".

In December 2024 Hong Kong police announced bounties of HK$1 million (around US$130,000) for information leading to the arrest of Lau and five other activists based overseas and accused of national security crimes.

'Malicious communications'

Political dissent in Hong Kong has been quashed since Beijing imposed a sweeping national security law in 2020.

Reynolds, who confirmed Lau's account, said "Beijing's efforts" to intimidate and threaten Hong Kongers like her are "utterly grotesque".

"The government cannot stick its head in the sand when it comes to the safety of Hong Kongers in our country," he told AFP.

"It must take decisive action to show we won't tolerate Beijing's campaign of international repression."

A spokesperson for the Chinese embassy in the UK dismissed the allegations.

"We have read the report," the spokesperson said. "The story sounds weird," he said, adding that it was only Lau's account.

A Thames Valley Police spokesperson said officers were investigating reports of a "malicious communications offence" involving "digitally altered images".

A UK government spokesperson said: "The safety and security of Hong Kongers in the United Kingdom is of the utmost importance."

"We encourage anyone to report concerns to the police."

Incident 1302: Reported Viral AI-Generated Photo Purportedly Shows Donald Trump Using a Walker

“Don't fall for photo showing Trump using walker” (snopes.com, 2025-12-12)

Claim: An image posted online in mid-December 2025 authentically showed U.S. President Donald Trump using a walker.

Rating: Fake

In mid-December 2025, an image (archived) circulated online purportedly showing U.S. President Donald Trump using a walker as a mobility aid.

Keith Edwards, a Democratic political strategist, posted the picture on X, writing: "BREAKING: an image has leaked showing Trump using a walker moments after he signed an executive order banning states from regulating AI"

Edwards previously led communications for The Lincoln Project, a group of former Republicans who oppose the president.

The image circulated mostly on X (archived), but also appeared on Facebook (archived), Threads (archived) and Bluesky (archived). Likewise, numerous Snopes readers contacted us to ask whether it was genuine.

In short, Snopes found no evidence that the image of Trump using a walker as a mobility aid was authentic. It was created using artificial intelligence software; therefore, we have rated it as fake.

Social media users posted exchanges (archived, archived) with Google's AI assistant Gemini that appeared to state that the picture contained SynthID, a digital watermark Google developed that embeds directly into the pixels of an image, making it imperceptible to the human eye but detectable by Google's tools.

When Snopes repeated this process, Gemini confirmed that it detected SynthID in the image of Trump with the walker. Its message read: "Based on the detection, it appears that most or all of this image was edited or generated with Google AI."

(Google Gemini)
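SynthID itself is proprietary, so the following is only a minimal sketch of the general idea behind pixel-level watermarking: a keyed pseudorandom pattern, too faint for the eye, is added to the pixel values and later recovered by correlating against the same pattern. The function names, parameters, and toy image below are all illustrative assumptions, not Google's method:

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 4.0) -> np.ndarray:
    """Add a faint keyed +/-1 pattern to the pixel values (illustrative only)."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect_watermark(image: np.ndarray, key: int, threshold: float = 2.0) -> bool:
    """Correlate pixels against the keyed pattern; a high score means marked."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    score = float(np.mean((image - image.mean()) * pattern))
    return score > threshold

img = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
marked = embed_watermark(img, key=42)
print(detect_watermark(marked, key=42))  # True: the pattern correlates strongly
print(detect_watermark(img, key=42))     # False: correlation is near zero
```

Without the key, the added pattern is statistically indistinguishable from sensor noise, which is why such marks are invisible to viewers but reliably detectable by the issuing tool.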

Another popular fake image (archived) circulating online, which showed Trump using a gold walker, included Gemini's watermark in the bottom-right corner.

Snopes reached out to Edwards, who appeared to first share the image, and the White House for comment. We await replies to our queries. 

Sightengine and Hive Moderation, two other online AI detectors, both found the picture was highly likely to be AI-generated.

(Sightengine/Hive Moderation/Snopes Illustration)

Searching for "walker OR mobility OR aid, 'Donald Trump,'" uncovered no examples (archived) of reputable news media outlets reporting on the image as authentic in the past week (Edwards used the word "breaking," suggesting the fake photo was new). In fact, numerous news media outlets also debunked the picture.

The claim circulated as Trump signed an executive order to create a "minimally burdensome national standard" to regulate AI companies in the U.S. The executive order said letting states individually legislate on AI, as was previously the case, created "a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups."

Some social media users appeared to interpret Edwards' post as a joke (archived, archived); however, at the time of this writing, he had not replied to a question from Snopes about whether he intended the post as satire.

For further reading, Snopes previously investigated a picture purportedly showing the president sitting next to his daughter Ivanka Trump with his tongue out and a video of him falling asleep during a White House Cabinet meeting.

Incident 1303: USGS ShakeAlert System Reportedly Generated False Earthquake Alert Affecting Nevada and California

“USGS says Nevada quake report was false, blaming automatic system's error” (reuters.com, 2025-12-12)

The United States Geological Survey said on Thursday that its automatic earthquake detection system had erroneously reported earlier in the day that a 5.9 magnitude temblor had struck in Nevada, adding that it was likely the first time the USGS had issued a completely false quake report.

Yaareb Altaweel, a geophysicist with the survey's National Earthquake Information Center in Golden, Colorado, said the USGS was still trying to understand why its automatic ShakeAlert early warning system sent out the erroneous alert of a quake centered near Dayton, Nevada.

The report prompted cell phones in the San Francisco Bay area, some 180 miles away, to buzz with a warning to residents to "DROP! COVER! HOLD ON!," according to a Reuters witness.

"To my knowledge we've never had a false alert like this," Altaweel said. "The system has put out incorrect information on exaggerated magnitude or depth, which was corrected immediately by the on-duty seismologist. I have seen that happen before."

Angie Lux, a project scientist for earthquake early warning at the Berkeley Seismological Laboratory, also said that, to her knowledge, a false alert of a major quake had never occurred before.

Lux did not know what may have triggered the false alert and said it would take some time to understand what happened.

Lux said the cause may have been "noisy triggers," referring to localized noise from natural or human-made sources, such as explosions, construction work, industrial activities, or even large vehicles such as a train or truck. Natural phenomena for such triggers include wind and ocean waves.
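ShakeAlert's detection pipeline is more sophisticated than this, but a classic trigger design in seismology, the STA/LTA ratio, shows in miniature how a brief local noise burst can masquerade as a quake onset. The sketch below is a generic illustration under assumed parameters, not the USGS code:

```python
import numpy as np

def sta_lta_trigger(signal: np.ndarray, sta_len: int, lta_len: int,
                    threshold: float) -> bool:
    """Classic short-term-average / long-term-average (STA/LTA) trigger.

    Fires when recent signal energy (STA) jumps well above the background
    level (LTA). A real quake onset does this, but so can a nearby
    explosion, construction work, or a passing train.
    """
    energy = signal ** 2
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="valid")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="valid")
    ratio = sta[-len(lta):] / (lta + 1e-12)  # align the two windows approximately
    return bool(np.max(ratio) > threshold)

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, 6000)        # ordinary ground noise
noisy = background.copy()
noisy[3000:3050] += rng.normal(0.0, 12.0, 50)  # brief local noise burst
print(sta_lta_trigger(background, 50, 1000, 8.0))  # False: no trigger
print(sta_lta_trigger(noisy, 50, 1000, 8.0))       # True: the burst trips the detector
```

Real networks suppress such false triggers by requiring coincident detections across multiple stations, which is part of why a network-wide false alert of this kind is so unusual.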

Multiple law enforcement agencies in cities and counties near the reported epicenter confirmed they had felt no shaking or any other signs of an earthquake.

A magnitude 5.9 quake typically is strong enough to cause slight property damage and pronounced shaking, according to the Michigan Tech Earthquake Magnitude Scale.

Incident 1305: UK Facial Recognition System Reportedly Exhibits Higher False Positive Rates for Black and Asian Subjects

“Home Office admits facial recognition tech issue with black and Asian subjects” (theguardian.com, 2025-12-08)

Ministers are facing calls for stronger safeguards on the use of facial recognition technology after the Home Office admitted it is more likely to incorrectly identify black and Asian people than their white counterparts on some settings.

Following the latest testing conducted by the National Physical Laboratory (NPL) of the technology’s application within the police national database, the Home Office said it was “more likely to incorrectly include some demographic groups in its search results”.

Police and crime commissioners said publication of the NPL’s finding “sheds light on a concerning inbuilt bias” and urged caution over plans for a national expansion.

The findings were released on Thursday, hours after Sarah Jones, the policing minister, had described the technology as the “biggest breakthrough since DNA matching”.

Facial recognition technology scans people's faces and cross-references the images against watchlists of known or wanted criminals. It can be used on live footage, comparing the faces of people passing cameras with those on wanted lists, or by officers to target specific individuals as they walk past mounted cameras.

Images of suspects can also be run retrospectively through police, passport or immigration databases to identify them and check their backgrounds.
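The internals of the police national database tool are not public, but retrospective search tools of this kind generally score a probe image against each watchlist entry and return everything above a similarity threshold; the "lower setting" referenced in the testing below presumably refers to a looser threshold, which returns more candidates at the cost of more false positives. The sketch below is generic, and all names and vectors are hypothetical:

```python
import numpy as np

def search_watchlist(probe: np.ndarray, watchlist: dict[str, np.ndarray],
                     threshold: float = 0.6) -> list[str]:
    """Return watchlist identities whose face embedding is at least
    `threshold`-similar to the probe (cosine similarity). Real systems
    derive embeddings from a deep network; these are toy vectors."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return [name for name, emb in watchlist.items() if cosine(probe, emb) >= threshold]

rng = np.random.default_rng(7)
watchlist = {f"subject_{i}": rng.normal(size=128) for i in range(5)}
probe = watchlist["subject_3"] + rng.normal(scale=0.4, size=128)  # noisy re-capture
print(search_watchlist(probe, watchlist, threshold=0.6))   # strict: the true identity
print(search_watchlist(probe, watchlist, threshold=0.05))  # loose: chance matches can creep in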

Analysts who examined the police national database’s retrospective facial recognition technology tool at a lower setting found that “the false positive identification rate (FPIR) for white subjects (0.04%) is lower than that for Asian subjects (4.0%) and black subjects (5.5%)”.

The testing went on to find that the number of false positives for black women was particularly high. “The FPIR for black male subjects (0.4%) is lower than that for black female subjects (9.9%),” the report said.
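For scale, here is a back-of-the-envelope calculation using the quoted FPIR figures. The 10,000-search volume is an assumption for illustration (the article later notes "thousands of searches a month"), not a number from the report:

```python
# Illustrative arithmetic only. The FPIR percentages are those quoted from
# the NPL report above; the 10,000-search volume is an assumed figure.
fpir_percent = {
    "white subjects": 0.04,
    "Asian subjects": 4.0,
    "black subjects": 5.5,
    "black female subjects": 9.9,
}
searches = 10_000
for group, rate in fpir_percent.items():
    expected = searches * rate / 100
    print(f"{group}: ~{expected:g} expected false positives per {searches:,} searches")
# white subjects: ~4, versus black female subjects: ~990, roughly a 250-fold gap
```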

The Association of Police and Crime Commissioners said in a statement that the findings showed an inbuilt bias. It said: “This has meant that in some circumstances it is more likely to incorrectly match black and Asian people than their white counterparts. The language is technical but behind the detail it seems clear that technology has been deployed into operational policing without adequate safeguards in place.”

The statement, signed off by the APCC leads Darryl Preston, Alison Lowe, John Tizard and Chris Nelson, questioned why the findings had not been released at an earlier opportunity or shared with black and Asian communities.

It said: “Although there is no evidence of adverse impact in any individual case, that is more by luck than design. System failures have been known for some time, yet these were not shared with those communities affected, nor with leading sector stakeholders.”

The government announced a 10-week public consultation that it hopes will pave the way for the technology to be used more often. The public will be asked whether police should be able to go beyond their records to access other databases, including passport and driving licence images, to track down criminals.

Civil servants are working with police to establish a new national facial recognition system that will hold millions of images.

Charlie Whelton, a policy and campaigns officer for the campaign group Liberty, said: “The racial bias in these stats shows the damaging real-life impacts of letting police use facial recognition without proper safeguards in place. With thousands of searches a month using this discriminatory algorithm, there are now serious questions to be answered over just how many people of colour were falsely identified, and what consequences this had.

“This report is yet more evidence that this powerful and opaque technology cannot be used without robust safeguards in place to protect us all, including real transparency and meaningful oversight. The government must halt the rapid rollout of facial recognition technology until these are in place to protect each of us and prioritise our rights – something we know the public wants.”

The former cabinet minister David Davis raised concerns after police leaders said the cameras could be placed at shopping centres, stadiums and transport hubs to hunt for wanted criminals. He told the Daily Mail: “Welcome to big brother Britain. It is clear the government intends to roll out this dystopian technology across the country. Something of this magnitude should not happen without full and detailed debate in the House of Commons.”

Officials say the technology is needed to help catch serious offenders. They say there are manual safeguards, written into police training, operational practice and guidance, that require all potential matches returned from the police national database to be visually assessed by a trained user and investigating officer.

A Home Office spokesperson said: “The Home Office takes the findings of the report seriously and we have already taken action. A new algorithm has been independently tested and procured, which has no statistically significant bias. It will be tested early next year and will be subject to evaluation.

“Given the importance of this issue, we have also asked the police inspectorate, alongside the forensic science regulator, to review law enforcement’s use of facial recognition. They will assess the effectiveness of the mitigations, which the National Police Chiefs’ Council supports.”

Quick Add New Report URL

Submitted links are added to a review queue to be resolved to a new or existing incident record. Submissions that include full incident details are processed before bare URLs.
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – August, September, and October 2025

By Daniel Atherton

2025-11-08

At Templestowe, Arthur Streeton, 1889. Trending in the AIID: Across August, September, and October 2025, the AI Incident Database logged one...

The Database in Print

Read about the database on the PAI Blog, Vice News, Venture Beat, Wired, arXiv, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 661
  • 🥈 Anonymous: 154
  • 🥉 Khoa Lam: 93

Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 752
  • 🥈 Anonymous: 231
  • 🥉 Khoa Lam: 230

Total Report Contributions
  • 🥇 Daniel Atherton: 2935
  • 🥈 Anonymous: 968
  • 🥉 1: 587
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.
