AI Incident Database

Welcome to the
AI Incident Database


Incident 1284: Secret Desires AI Platform Reportedly Exposed Nearly Two Million Sensitive Images in Cloud Storage Leak

“'Spicy' AI chatbot and image generator left millions of photos exposed” (Latest Incident Report)
mashable.com, 2025-11-25

A platform that promises "spicy AI chatting" left nearly two million images and videos, many of them showing private citizens, exposed to the public, 404 Media reported.

Secret Desires, an erotic chatbot and AI image generator, left cloud storage containers of photos, women's names, and other personal information like workplaces and universities, vulnerable, according to 404 Media.

This "massive leak" is the latest case of people using generative AI tools to turn innocent photos into nonconsensual explicit deepfakes.

Some of the photos and videos were taken from real influencers, public figures, and non-famous women. The latter category includes Snapchat screenshots and at least one yearbook photo. Some of the exposed media included user-generated AI images, such as those created with a now-defunct "faceswap" feature, which Secret Desires removed earlier this year.

Like Character.AI or Replika, Secret Desires allows users to create AI personas and chat with them. While pornographic content isn't allowed on Character.AI (and is only allowed for certain Replika users), Secret Desires says it "provides limitless intimacy and connection" on its Quick Start Guide.

As 404 Media found, the AI-generated media found in the vulnerable storage containers were mostly explicit. Some of the file names included terms like "17-year-old."

The company didn't respond to 404 Media's request for comment, but the files became inaccessible around an hour after the publication reached out.

For years, women and girls have been victims of explicit deepfakes, AI-generated content that often "faceswaps" women's likenesses onto pornographic videos. This applies to celebrities like Taylor Swift as well as women who are not famous. It also happens to girls, creating online child sexual abuse material.

This year, Congress passed the Take It Down Act to combat deepfake images. The law proved controversial, as several free speech and advocacy groups claim that it can be weaponized against consensual explicit material or political speech.

Read More

Incident 1280: Reported Use of AI Voice and Identity Manipulation in the Ongoing 'Phantom Hacker' Fraud Scheme

“'Phantom Hacker' Scam Steals ₹8,350 Crore Using AI Deepfakes”
varindia.com, 2025-11-23

A dangerous new cybercrime wave known as the "Phantom Hacker" scam has stolen over ₹8,350 crore (US$1 billion) from elderly victims worldwide, mostly in the United States, within just one year. Criminals impersonate tech support agents, bank officials, and government investigators to convince victims that their bank accounts or devices have been hacked.

The scam unfolds in three manipulative phases. First, fraudsters trigger fear with fake security alerts or pop-ups claiming malware infections or suspicious activity. Next, they pressure victims to install remote-access software, pretending it is for protection but actually gaining full control of the victim's device. Finally, scammers pose as government "rescuers," warning that the bank itself is compromised and instructing victims to transfer money to a so-called safe account, which is entirely controlled by the fraudsters.

Most victims are seniors above 60, often with limited digital literacy, making them vulnerable to intimidation and rapid decision-making traps. AI has drastically accelerated the scam's scale: voice cloning, caller ID spoofing, and realistic deepfake documents make the fraud nearly indistinguishable from legitimate communication. Reports indicate a 600% rise in AI-enabled fraud over the past year.

Cybercrime expert Prof. Triveni Singh warns that modern criminals target "human trust and emotion" rather than systems. He emphasizes that no legitimate institution ever asks for remote access or money transfers over calls or emails, urging people to verify suspicious communications directly through official channels.

The Phantom Hacker scam highlights how technological progress is being weaponized to deceive the most vulnerable. Awareness, digital literacy, and immediate reporting remain the strongest defenses against these evolving threats.

Read More

Incident 1276: Ottawa Couple Reportedly Loses CA$177,023 After Purported Deepfake Elon Musk Investment Scam

“Ottawa couple loses $177,000 in deepfake investment scam”
youtu.be, 2025-11-23

AIID editor's note: View the original source for the video report.

Doug and Victoria Lloyd lost 177,000 dollars in an investment scam involving a deepfake video of Elon Musk. The Eastern Ontario couple share their story as part of our Spot the Scam series while the OPP offers tips on passwords and facial recognition to prevent fraud. CTV's Kimberley Johnson reports.

Read More

Incident 1282: Reported Disqualification of Two Books from the Ockham New Zealand Book Awards Due to Alleged AI-Generated Cover Art

“These Books Were Judged by Their A.I. Covers, and Disqualified”
nytimes.com, 2025-11-23

One book cover shows a cat with green eyes baring humanlike teeth. Another has an angel soaring above a yellow freight train.

The authors of the books, which were submitted to one of New Zealand's largest literary competitions, didn't know that the artwork was created using A.I. They found out last week, however, when they were disqualified because the covers had violated the contest's new rule about A.I.-generated material.

The studio that designed the covers defended them, saying that A.I. is part of their creative process. And the independent publisher of the works of fiction said that the contest, the 2026 Ockham New Zealand Book Award, had not given publishers enough time to comply with its new A.I. rules.

The publisher, Quentin Wilson, said in an email on Tuesday that the episode was "heartbreaking" for the two authors, who do not use A.I. in their writing, and upsetting for the production and design teams that worked hard on the books. He added that the rapid rise of A.I. has put the publishing industry in "uncharted waters."

"As an industry, we must work together to ensure that this situation does not happen again," Mr. Wilson said.

The episode is one of many "fronts of chaos" as creative industries try to establish fair and sensible rules for A.I.-generated content, said Oliver Bown, a professor at the University of New South Wales in Australia whose research looks at intersections between art, technology and creativity.

The problem, he said, is that changes come so fast that new regulations are inevitably developed and communicated in a rush.

The disqualification means that the two books by Mr. Wilson's authors ("Obligate Carnivore," a story collection by Stephanie Johnson, and "Angel Train," four linked novellas by Elizabeth Smither) can't compete for the contest's fiction prize, worth about $36,000.

The award is considered one of New Zealand's most prestigious literary honors. The New Zealand Book Awards Trust, the organizer, said that it introduced the A.I. regulations to match other international book awards with similar rules.

The covers of "Obligate Carnivore" and "Angel Train" were flagged by a bookseller, the trust said in a statement. After consulting with the publisher and the designers, the trust confirmed that the illustrations had been created with A.I.

But Mr. Wilson said that the contest's A.I. regulations were released in August, too late for publishers to reflect any changes before the October deadline. Both books were published in November.

The designers at Sugarcube Studios, the New Zealand company that made the covers, said in a statement that they saw A.I. "not as a replacement for creativity but as a natural extension of our craft, a new way to explore ideas and enhance our process."

Ms. Johnson, who received New Zealand's 2022 Prime Minister's Award for Literature, said in an email that she had never used A.I. in her writing and never would. She said she was worried about its impacts on the book industry and that she thought there was a desperate need for A.I. regulation.

"All of us around the world who work in the creative sphere are concerned about the rise of AI, so I absolutely understand that the Ockham New Zealand Book Awards people were compelled to draw a line in the sand," she added.

Ms. Smither, who won the 2018 Ockham New Zealand Book Award in the poetry category, said she had not realized the cover of "Angel Train" had been generated with A.I. She said she didn't doubt the integrity of her publisher or the designer.

"The book is not the same as the cover," she said. "Without the cover it would still be a book, and the book comes first."

Read More

Incident 1278: ChatGPT Reportedly Found to Reproduce Protected German Lyrics in Copyright Case

“ChatGPT violated copyright law by ‘learning’ from song lyrics, German court rules”
theguardian.com, 2025-11-18

A court in Munich has ruled that OpenAI's chatbot ChatGPT violated German copyright laws by using hits from top-selling musicians to train its language models in what creative industry advocates described as a landmark European ruling.

The Munich regional court sided in favour of Germany's music rights society GEMA, which said ChatGPT had harvested protected lyrics by popular artists to "learn" from them.

The collecting society GEMA, which manages the rights of composers, lyricists and music publishers and has approximately 100,000 members, filed the case against OpenAI in November 2024.

The lawsuit was seen as a key European test case in a campaign to stop AI scraping of creative output. OpenAI can appeal against the decision.

ChatGPT allows users to ask questions and type commands into a chatbot, which responds with text that resembles human language patterns. The model underlying ChatGPT is trained on widely available data.

The case revolved around nine of the most recognisable German hits of recent decades, which were used by ChatGPT to hone its language capabilities.

They included Herbert Grönemeyer's 1984 synth-pop sendup of masculinity, Männer (Men), and Helene Fischer's Atemlos durch die Nacht (Breathless Through the Night), which was the unofficial anthem of the German side during the 2014 football World Cup.

The presiding judge ordered OpenAI to pay undisclosed damages for using copyrighted material without permission.

The GEMA legal adviser Kai Welp said the organisation now hoped to negotiate with OpenAI on how rights holders could be compensated.

The San Francisco-based company, whose founders include Sam Altman and Elon Musk, had said its language learning models absorbed entire training sets of data rather than storing or copying specific songs, the Munich court said.

Because its output is generated by users of the chatbot via their prompts, OpenAI said, they were the ones who should be held legally liable for it, an argument rejected by the court.

GEMA welcomed the decision as "the first landmark AI ruling in Europe", saying it could have implications for other types of creative output.

Its chief executive, Tobias Holzmüller, said the decision proved "the internet is not a self-service store and human creative achievements are not free templates".

"Today, we have set a precedent that protects and clarifies the rights of authors: even operators of AI tools such as ChatGPT must comply with copyright law. Today, we have successfully defended the livelihoods of music creators."

The Berlin law firm Raue, which represented GEMA, said in a statement that the court's decision "sets an important precedent for the protection of creative works and sends a clear signal to the global tech industry" while creating "legal certainty for creators, music publishers and platforms across Europe".

The ruling "is likely to have an impact far beyond Germany as a precedent", it said.

The German Journalists' Association also hailed the ruling as "a milestone victory for copyright law".

OpenAI said in a statement it would weigh an appeal. "We disagree with the ruling and are considering next steps," it said. "The decision is for a limited set of lyrics and does not impact the millions of people, businesses and developers in Germany that use our technology every day."

It added: "We respect the rights of creators and content owners and are having productive conversations with many organisations around the world, so that they can also benefit from the opportunities of this technology."

OpenAI has faced litigation in the US from authors and media groups claiming ChatGPT has been trained on their work without permission.

Read More
Quick Add New Report URL

Submitted links are added to a review queue and resolved to a new or existing incident record. Submissions that include full incident details are processed before bare URLs.
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – August, September, and October 2025

By Daniel Atherton

2025-11-08

At Templestowe, Arthur Streeton, 1889. Trending in the AIID: Across August, September, and October 2025, the AI Incident Database logged one...

Read More
The Database in Print

Read about the database on the PAI Blog, Vice News, Venture Beat, Wired, arXiv, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the people and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 640
  • 🥈 Anonymous: 154
  • 🥉 Khoa Lam: 93
Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 731
  • 🥈 Anonymous: 230
  • 🥉 Khoa Lam: 230
Total Report Contributions
  • 🥇 Daniel Atherton: 2895
  • 🥈 Anonymous: 967
  • 🥉 1: 590
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.

Organization Founding Sponsor
Database Founding Sponsor
Sponsors and Grants
In-Kind Sponsors

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy