
Welcome to the AI Incident Database


Incident 1295: Japanese Teen Allegedly Uses AI-Generated Program to Breach Kaikatsu Frontier and Leak Data of 7.3 Million Customers

“17-yr-old suspected of carrying out cyberattack with AI help in Japan” (Latest Incident Report)
mainichi.jp, 2025-12-06

TOKYO (Kyodo) -- A 17-year-old boy was served an arrest warrant on Thursday on suspicion of breaching the server of a major internet cafe operator in Japan using a program generated by a conversational artificial intelligence, an investigative source said.

The move comes after Kaikatsu Frontier Inc., the operator of Kaikatsu Club internet cafe and FiT24 fitness gym chains, suffered a cyberattack in January, with its parent company announcing the attack may have resulted in the leakage of personal information of 7.3 million customers.

The high school student in Osaka is suspected of sending unauthorized commands to Kaikatsu Frontier's server some 7.24 million times to export personal data, thereby obstructing its business operations, the source said.

Although AI services generally do not generate content that could be used for crimes, the student is believed to have concealed his true intention when prompting the AI.

He had been arrested by Tokyo police in November for allegedly ordering a Pokemon card online using another person's credit card information.


Incident 1291: South Korean Fraud Ring Allegedly Used Deepfake Identities to Traffic Victims into Cambodia Scam Operations

“Việt Nam arrests three Republic of Korea men linked to deepfake romance-scam ring”
vietnamnews.vn, 2025-12-06

ĐÀ NẴNG --- Police in central Việt Nam have arrested three nationals from the Republic of Korea who are accused of running a cross-border romance-scam network that used deepfake technology and allegedly trafficked victims into forced online fraud schemes.

According to a statement from the police on Friday, the three men -- Choi Minsu, 26, Choi Jinwoo, 25, and Seo Jaeseok, 28 -- were detained at a hotel in Đà Nẵng City in late November after fleeing Cambodia, where they are suspected of operating a scam hub in the border town of Bavet.

Interpol had issued Red Notices for all three at the RoK's request on charges of fraud, organised crime and human trafficking.

Vietnamese police said the group used deepfake technology to pose as women and establish romantic contact with male victims online.

Once trust was built, the victims were encouraged to sign up for paid travel missions with promises of free flights, accommodation and reimbursement after completion.

The men then coaxed victims to travel to Việt Nam or Thailand before persuading them to cross into Cambodia.

Upon arrival, the victims' passports were confiscated and they were detained and forced to memorise scripted scam pitches used in online fraud operations.

Authorities estimate the network scammed roughly US$1 million between August and December 2024.

The three suspects crossed illegally into Việt Nam in mid-October 2025 and went into hiding.

Đà Nẵng's police, working with the Ministry of Public Security, tracked them to a hotel in the city's An Hải Ward and made the arrests on October 28.

The men have since been handed over to RoK authorities for extradition, police said. --- VNS


Incident 1292: Glasgow Man Allegedly Used AI Tool to Create and Share Non-Consensual Deepfake Nude Images of Former Classmate

“Glasgow man admits creating deepfake nudes of former school friend”
heraldscotland.com, 2025-12-06

A man from Glasgow has become one of the first people from Scotland to be convicted of distributing pornography created using 'deepfake' AI technology. 

Callum Brooks, 25, altered two social media pictures posted by a woman he knew from his school days, and sent them on to friends. 

He used AI software to create naked images of the woman from fully-clothed pictures. One of the photos showed the woman from the waist up, while the other was full length.

Brooks used the software to make the woman appear in a "state of undress" without her knowledge or consent.

Brooks pleaded guilty at Glasgow Sheriff Court to disclosing a photograph of the woman in an intimate situation without her consent.

The 25-year-old, of Glasgow's Easterhouse, claimed that there was "no sexual element" to his actions, a position the Crown accepted.

The first offender stated that he carried out the crime to show what the software did.

The case is one of the first of its kind to appear in a Scottish court.

The court heard that Brooks and the woman had been friends at high school and remained in contact on social media.

Prior to the crime, Brooks obtained a software application which altered images using artificial intelligence (AI).

Prosecutor Katie Malcolm said: "Brooks took two images from the woman's Instagram account which she uploaded online in which she was fully clothed.

"Brooks then altered the images so that they showed the woman to be in a state of undress.

"One was a full length image and the other was from the waist up. Brooks then sent the images to two friends."

The hearing was told that the woman "knew nothing" about the images. In February 2024, one of Brooks' friends asked whether she knew of the images and showed them to her.

Miss Malcolm added: "She was immediately embarrassed and humiliated. She confirmed that the images were lifelike and the matter was reported to the police."

Brooks was arrested and told officers: "It was really just pettiness on the part of people - I know I shouldn't have done that, I shouldn't have done Photoshop."

He stated that one of his friends requested the images while the other did not.

Gerry Devaney, defending, told the court that Brooks had fallen out with one of his friends after the images had been sent.

The lawyer added: "The friend drew the attention of the woman who was unaware of the situation and the friend showed it to her.

"The intention was to show what the software did - it was fairly novel at the time and he took advantage which he much regrets.

"There was no significant sexual element - this was the behaviour on the part of a boy towards people he thought were his friends and he has come to regret it."

Sheriff Anna Reid fined Brooks £335.


Incident 1293: Purported Deepfake Impersonating Cyprus President Nikos Christodoulides Reportedly Defrauded Citizens of Thousands of Euros

“AI-generated scam videos using Cyprus president and celebrities dupe victims out of thousands”
in-cyprus.philenews.com, 2025-12-06

AI-generated videos impersonating President Nicos Christodoulides, business tycoons and celebrities have scammed Cypriots out of thousands of euros, with authorities warning citizens have no protection against the growing threat.

Around 15 people in Cyprus lost between €10,000 and €15,000 after falling for a deepfake video that showed Christodoulides alongside politicians Averof Neophytou and Haris Georgiades promoting a fake investment platform promising prosperity for all.

Giorgos Michailidis, Commissioner of the Digital Security Authority, said no tools or mechanisms exist to protect people from malicious AI.

"It's very difficult for someone to protect themselves. They need to be very observant. Look at the hands, the fingers, for example. It's already very difficult, and I believe in the coming years it will become even more difficult as artificial intelligence develops and gets better and better," Michailidis said.

Over several months, scammers have used the likenesses of high-profile figures to lend credibility to their schemes.

In March, Sir Stelios Haji-Ioannou, founder of easyGroup, warned the public about a cryptocurrency scam using his image and company name.

The misleading online advert urged people to register with a fake company called "easyProfit", offering returns of up to 96 per cent from cryptocurrencies and foreign exchange trading.

Singer Anna Vissi and European Central Bank President Christine Lagarde also appeared in similar investment scheme videos.

In July, businessman John Christodoulou flagged a scam video on his personal account that showed him urging people to participate in fraudulent schemes. "It's not me," he clarified.

In August, a fake Facebook page impersonating Phileleftheros used Greek presenter Sakis Tanimanidis to urge citizens to download an app promising large financial returns.

At Easter, a video circulated showing Donald Trump wishing Cypriots "good souvla" and promising to drag the Turks into the sea, all spoken in English. MP Christos Christofidis said countless people sent him the video, enthusiastic that Trump would save Cyprus.

"You can't imagine how many people sent it to me excited that Trump would come save us," Christofidis said.

AI scams can take many forms, Michailidis said. They can be voices in phone calls, videos, photos, written text in chats or other messages. Someone communicating on Instagram or Facebook might be talking to AI rather than a real person.

Even experts lack tools or mechanisms to detect them, he said.

During recent major earthquakes in Turkey, a photo circulated of a Greek rescuer holding a child he had supposedly just saved. But anyone looking at the rescuer's hands would notice he had six fingers, Michailidis said.

No manual exists for what exactly people should look for, he said. They need to watch for anything that doesn't look right.

The European Union has opened discussions on mandatory labelling for AI-generated content, Michailidis said. But this only applies to legitimate AI tools used in good faith. Anyone using it maliciously to deceive people won't comply with labelling requirements.

Citizens must verify information, he said. "It would be good for people to call back to make sure."

AI technologies are advanced, Michailidis said. Someone can pay a subscription to acquire malware and viruses to attack a person or company. "It's unbelievable how much criminality in the AI sector has advanced," he said.

Social engineering through AI is happening more frequently, Michailidis said. These are actions through AI aimed at manipulating people, influencing them psychologically and directing them towards specific actions and behaviours. Something like this appears to have happened in the US presidential election that elected Donald Trump, as well as in the UK's Brexit referendum, he said. A state could be behind such actions, not just fraudsters.

The most important action now is informing citizens, Michailidis said. "People must not believe what they see, even if it looks real. They need to check it," he said.

"If we're not careful, we're at the mercy of artificial intelligence. We need to check and search. It depends on us," he said.

Christofidis recently submitted a law proposal criminalising the use of a person's likeness in AI-generated material without that person's consent.

"Recently I saw a video with a well-known presenter talking to people about a weight loss product. I watched it three times -- it was so believable -- to conclude it was AI-edited. If today we can tell, in one or two years when it's perfected, it will be impossible to realise," Christofidis said.

A gap exists in legislation to protect citizens, he said.

The European Union must require social media platforms to label AI material, Christofidis said. He expressed concern that propaganda videos and revenge videos will follow.


Incident 1294: The New York Times Sued Perplexity for Allegedly Using Copyrighted Content and Generating False Attributions

“New York Times Sues A.I. Start-Up Perplexity Over Use of Copyrighted Work”
nytimes.com, 2025-12-06

The New York Times claimed in a lawsuit on Friday that its copyrights were repeatedly violated by Perplexity, an artificial intelligence start-up that has built a cutting-edge internet search engine.

The Times said in its lawsuit that it had contacted Perplexity several times over the past 18 months, demanding that the start-up stop using the publication's content until the two companies negotiated an agreement. But Perplexity continued to use The Times's material.

The suit, filed in federal court in New York, is the latest in a growing legal battle between copyright holders and A.I. companies that includes more than 40 cases around the country. On Thursday, The Chicago Tribune filed a suit against Perplexity, accusing it of copyright infringement. And last year, Dow Jones, owner of The Wall Street Journal, The New York Post and other publications, made similar claims in a lawsuit against the start-up.

The Times's suit is the second it has filed against A.I. companies. In 2023, The Times sued OpenAI and its partner Microsoft, arguing that the companies trained their A.I. systems using millions of Times articles without offering compensation. Microsoft and OpenAI, the maker of the chatbot ChatGPT, have disputed the claims.

Perplexity, a San Francisco company founded in 2022 by a former OpenAI engineer and other entrepreneurs, operates a search engine powered by the same type of A.I. technology that underpins ChatGPT.

The suit accuses Perplexity of violating The Times's copyrights in several ways, most notably when the start-up's search engine retrieves information from a website or database and uses that information to generate a piece of text and to respond to queries from internet users. That would not be a fair use of that material, the suit claimed, because Perplexity grabbed large chunks of the publication's content --- in some cases, entire articles --- and provided information that directly competed with what The Times offered its readers.
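As a rough illustration of the retrieve-then-generate pattern the suit describes (and emphatically not Perplexity's actual implementation), a minimal Python sketch might rank stored passages against a query and answer by quoting the best match with attribution; the corpus, URLs, and string-similarity ranking here are all hypothetical stand-ins:

    # Toy retrieve-then-generate loop. Hypothetical corpus and URLs; real
    # systems use crawled web indexes and an LLM, not string similarity.
    from difflib import SequenceMatcher

    CORPUS = {
        "https://example.com/a": "Publishers have filed dozens of copyright "
                                 "suits against AI companies since 2023.",
        "https://example.com/b": "Retrieval-based search engines quote source "
                                 "passages when answering user queries.",
    }

    def answer(query: str) -> str:
        # Rank every stored passage by rough string similarity to the query.
        ranked = sorted(
            CORPUS.items(),
            key=lambda kv: SequenceMatcher(None, query.lower(), kv[1].lower()).ratio(),
            reverse=True,
        )
        url, passage = ranked[0]
        # Reusing the retrieved text in the response, rather than merely
        # linking to it, is the step the suit says competes with the publisher.
        return f"{passage} (source: {url})"

    print(answer("copyright suits against AI companies"))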

"Perplexity provides commercial products to its own users that substitute for The Times, without permission or remuneration," the suit said.

The Times also accused Perplexity of damaging its brand. In some cases, the suit said, Perplexity's search engine made up information --- what A.I. researchers call "hallucination" --- and falsely attributed that information to The Times.

"Publishers have been suing new tech companies for a hundred years, starting with radio, TV, the internet, social media and now A.I.," Jesse Dwyer, Perplexity's head of communication, said in a statement to The Times. "Fortunately, it's never worked, or we'd all be talking about this by telegraph."

Of the more than 40 suits that have been filed by copyright holders against A.I. companies over the past four years, most are still winding their way through the courts. In September, Anthropic, an OpenAI rival, agreed to pay book authors and publishers $1.5 billion after a judge ruled that the company, which is based in San Francisco, had illegally downloaded and stored millions of copyrighted books as it built its A.I. systems.

In May, The Times struck a multiyear deal with Amazon to license its editorial content for use in the tech giant's artificial intelligence platforms. It was The Times's first licensing arrangement involving generative A.I. No financial terms were disclosed.

Amazon will use material from The Times's food and recipe site as well as content from the publication's sports site, The Athletic, and Times material will be used to train Amazon's A.I. models.

Many other news organizations have signed similar deals with A.I. companies, including OpenAI and Microsoft.

Quick Add New Report URL

Submitted links are added to a review queue to be resolved to a new or existing incident record. Submissions that include full details are processed before bare URLs.
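As a loose illustration of that queue-then-resolve flow (hypothetical names and schema throughout; this is not the AIID's actual code), a minimal Python sketch might match a submitted link against existing incident titles and open a new record only when nothing matches:

    from dataclasses import dataclass, field
    from difflib import SequenceMatcher

    @dataclass
    class Incident:
        incident_id: int
        title: str
        report_urls: list[str] = field(default_factory=list)

    def resolve_submission(url: str, title: str, incidents: list[Incident],
                           threshold: float = 0.6) -> Incident:
        """Attach a submitted report to the best-matching incident, else open a new one."""
        best, best_score = None, 0.0
        for incident in incidents:
            score = SequenceMatcher(None, title.lower(), incident.title.lower()).ratio()
            if score > best_score:
                best, best_score = incident, score
        if best is not None and best_score >= threshold:
            best.report_urls.append(url)  # resolved to an existing incident
            return best
        new = Incident(len(incidents) + 1, title, [url])  # otherwise a new record
        incidents.append(new)
        return new

In practice a human editor confirms each match before publication; the sketch only shows why fully detailed submissions are cheaper to resolve than bare URLs.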
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – August, September, and October 2025

By Daniel Atherton

2025-11-08

At Templestowe, Arthur Streeton, 1889. Trending in the AIID: Across August, September, and October 2025, the AI Incident Database logged one...

The Database in Print

Read about the database on the PAI Blog, Vice News, Venture Beat, Wired, arXiv, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 652
  • 🥈 Anonymous: 154
  • 🥉 Khoa Lam: 93

Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 743
  • 🥈 Anonymous: 230
  • 🥉 Khoa Lam: 230

Total Report Contributions
  • 🥇 Daniel Atherton: 2915
  • 🥈 Anonymous: 967
  • 🥉 1: 587
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.

Organization Founding Sponsor
Database Founding Sponsor
Sponsors and Grants
In-Kind Sponsors

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy