Welcome to the AI Incident Database
Incident 1241: Purported AI-Generated Video Reportedly Used in RM5,800 (~$1,400) Sextortion Attempt Targeting Malaysian Minor via Telegram
“AI Deepfake Extortion Targets Children: A Mother’s Warning After Her Son Was Threatened for RM5,800”
Parents, please beware and look after your children! A terrifying new scam is spreading across Facebook, and this time the victims aren't adults but children. Scammers are using victims' own photos and AI-generated videos to demand money. A local family learned a frightening lesson late last night when their teenage son received a threatening message containing his photo and an AI-generated explicit video. The attackers demanded RM5,800 and pressured the boy to pay. This is a new and disturbing twist in online extortion that parents and young people must urgently guard against.
The incident began around 11:40 p.m. when the boy received a message on the Telegram app. The sender showed no phone number but immediately sent a personal photo of him together with a doctored video. Frightened but level-headed, the boy did not panic; instead, he reached out to a trusted school contact who had experience helping another student targeted in a similar attack.
"He stayed calm and asked for advice," the boy's mother said. She only learned about the incident the next morning. She rushed to the police station to file a report as soon as she could.
Police officers told the family that this is becoming an increasingly common scheme. Scammers are using AI deepfake tools to create fake "hamsap" (explicit) clips that look real and then threaten victims, including minors, with exposure unless they pay. Officers warned that people of any age can be targeted and urged families to report incidents immediately. According to the mother, the extortionist repeatedly tried to scare her son. The attackers demanded RM5,800 and threatened to release the fabricated footage if the boy didn't comply. "They tried to frighten him, but we went straight to the police," she said. She added a plea to other parents: don't let fear force you into paying.
A warning and a call to action
This family's experience shows how quickly technology that should benefit us can be weaponized. Scammers are becoming bolder and more sophisticated, and their targets now include children. Parents and guardians should have open conversations with their children about online safety, especially about what to do if they receive threatening messages or strange files.
If you or someone you know experiences similar harassment: document everything, do not pay, and report it to the police and the platform. The most important step is not to face it alone. Take care of your family and share this article!
Incident 1232: Reportedly Fatal Xiaomi SU7 Ultra Crash in Chengdu Purportedly Involves Automated Driving Failure and Door Lock Malfunction
“Xiaomi shares see biggest drop since April after fatal EV crash sparks safety concerns”
Chinese tech giant Xiaomi saw its shares fall over 5% on Monday, following reports that the doors of one of its electric vehicles failed to open after a fiery crash in China that left one person dead.
The stock slid as much as 8.7% in Hong Kong, marking its steepest drop since April, before paring losses after images and video of a burning Xiaomi SU7 sedan in Chengdu circulated on Chinese social media.
Video and eyewitness accounts showed bystanders trying but failing to open the doors of the burning car to rescue an occupant. Personnel at the scene eventually used a fire extinguisher to put out the blaze, local reports said.
Chengdu police said the crash occurred after the SU7 collided with another sedan, killing a 31-year-old male driver who was suspected of driving under the influence of alcohol.
Xiaomi, which manufactures consumer electronics, software and electric vehicles, did not immediately respond to CNBC's request for comment.
The latest incident follows a fatal SU7 crash earlier this year that raised questions about the vehicle's smart driving features and sent Xiaomi's shares tumbling.
The crash could also intensify scrutiny on electronic door handles, a design popularized by Tesla and now common in modern EVs.
Unlike mechanical models, electronic door handles rely on sensors and electricity and may fail during a fire or power outage.
China is considering a ban on such electronic door handles to address safety risks linked to the feature, state-backed media reported in late September.
Meanwhile, the U.S. National Highway Traffic Safety Administration has launched an investigation into about 174,000 Tesla Model Y vehicles after reports of door handle failures.
Incident 1231: Purported AI-Generated Deepfake Video Reportedly Depicts Senator Chuck Schumer Endorsing Government Shutdown in NRSC Campaign Ad
“A GOP attack ad deepfakes Chuck Schumer with AI”
A new attack ad from Senate Republicans uses Sen. Chuck Schumer's real words about the government shutdown --- but in an AI deepfake of the Democratic Senate Minority Leader.
The 30-second video posted on X and YouTube by the National Republican Senatorial Committee on Friday raised alarms among many observers who warned it crossed a new boundary in politics and could unleash a flood of AI-generated deepfake attack ads.
Captioned on X "Week 3 of the Schumer Shutdown: 'Every day gets better for us'," the video shows an AI-generated version of Schumer repeatedly saying those words and grinning. The video concludes with Schumer smiling as a narrator says, "The Schumer shutdown is making things worse across America and Democrats love it."
"Every day gets better for us" was part of a quote Schumer gave to the politics site Punchbowl News in an Oct. 9 article about the shutdown. He continued: "It's because we've thought about this long in advance and we knew that health care would be the focal point on Sept. 30 and we prepared for it ... Their whole theory was --- threaten us, bamboozle us, and we would submit in a day or two."
However, the original story only included the text of Schumer's statement, not video or audio.
An NRSC press release announcing the ad describes it as "visualizing" Schumer's comment and says it was "created in-part using artificial intelligence software and includes an AI disclaimer."
The fake footage of Schumer includes a small transparent watermark of the NRSC logo and the words "AI GENERATED" in the bottom right-hand corner. On YouTube, the video carries an additional "Altered or synthetic content" label saying "Sound or visuals were significantly edited or digitally generated."
"These are Chuck Schumer's own words," NRSC Communications Director Joanna Rodriguez said in an email to NPR. "We know Schumer wishes people didn't know he said this, but he did, and this video is a way for voters to see and hear how proud he is of his shutdown."
Schumer's office did not respond to a request for comment.
Even though the quote was something Schumer did in fact say, the use of AI to invent fake video footage crosses a line, said Hany Farid, a professor at the University of California at Berkeley who studies manipulated media.
"If they wanted to use the quote then they could have shown an image of Schumer and overlaid the quote, as is often done. It cannot reasonably be argued that creating a deepfake of Senator Schumer is the same thing," he said.
In addition, Farid said that the disclaimer on the lower right corner of the video "is not a particularly clear or obvious disclosure, particularly when most users are doom scrolling through their social media posts."
The NRSC video was also criticized by journalists and other commenters on social media, including reporters at right-leaning publications.
"The Schumer bit is AI generated video --- which is disclosed at bottom right corner --- but we are at the doorstep of a terrifying new world," Jon Levine, a political reporter at the Washington Free Beacon, wrote on X.
"NRSC has increasingly leaned into AI for its ads, but this is the first time I've seen them go so far as to use it for a fake video of an opponent. This is a slippery slope, even w/the tiny AI disclosure in the corner," Ramsey Touchberry, a congressional reporter at the Washington Examiner, posted on X.
Amid the criticism, the NRSC's Rodriguez wrote on her own X account: "AI is here and not going anywhere. Adapt & win or pearl clutch & lose."
It's not the first time Republicans have used AI to impersonate Schumer and other Democrats. On the eve of the shutdown, President Trump posted on Truth Social an apparently AI-generated, profanity-laced video fabricating Schumer saying "nobody likes Democrats anymore." House Minority Leader Hakeem Jeffries is also shown in the video wearing a cartoonish sombrero and mustache.
But unlike the NRSC's video of Schumer, earlier examples were easy to spot as inauthentic.
Farid warned that the growing prevalence of AI fakes in politics could have a corrosive effect.
"While I don't think our leaders should be posting deceptive deepfakes, they also run the risk that when they post real content, there will be good reason for the public to question its authenticity," he said.
Incident 1234: Purported AI-Generated Explicit Deepfakes of Sydney High School Students Reportedly Circulated Online
“Police investigating reports of explicit deepfake images of girls from Sydney school”
Police are investigating reports that digitally altered explicit images using the faces of female students from a Sydney high school have been circulated online.
The ABC understands a number of families attended Eastwood Police Station on Wednesday evening.
Police confirmed in a brief statement that an investigation was underway.
"Officers attached to Ryde Police Area Command have commenced an investigation," a spokesperson said.
"Inquiries are ongoing and there is no further information available at this time."
The ABC has been told parents went to police after a male student who was sent the images reported it to his school.
A Department of Education spokesperson said the school was working closely with police on this matter.
"Deepfakes present significant new risks to the wellbeing and privacy of students," they said.
"If any student is found to have engaged in this behaviour, the school will be taking strong disciplinary action."
The department requested the ABC not identify the school.
'Deeply concerning,' minister says
Acting Education Minister Courtney Houssos said the incident would be discussed with the Department of Education secretary on Thursday, and at the national education ministers meeting on Friday.
"These are deeply concerning reports that I have been briefed on," she said.
"This isn't a problem that we're just facing in New South Wales, this is nationwide but it's appropriate for police to be investigating this matter."
Ms Houssos said schools were "microcosms of what we're seeing in the broader community".
"This is a challenge that we're facing across the nation, I think it's important that we consider that specifically in our schools," she said.
Ms Houssos said supports would be put in place at the school in question.
"These are really concerning allegations and we'll make sure we get to the bottom of it."
NSW deepfake laws strengthened
Legislation passed NSW parliament last month outlawing the use of artificial intelligence (AI) to create intimate images of a real, identifiable person without their consent.
The change to the Crimes Act 1900 made the production of sexually explicit deepfakes of a person punishable by up to three years' jail.
At the time, Attorney-General Michael Daley told the ABC the laws were designed to keep up with emerging technology and stamp out the tools of sexual exploitation.
He also highlighted concern about its use in school communities.
"These aren't just creating cartoons. They are really harmful ways of acting that often happen to young people at schools," he said in September.
"We've had young people, women who've committed suicide because of this. So the government takes this very, very seriously."
Federal Communications Minister Anika Wells signalled last month the government would launch a crackdown on AI technology such as "nudify" apps.
Incident 1236: Quantum AI Scam Reportedly Used AI-Generated Celebrity Endorsements and Spoofed Media Sites to Solicit Investments
“Which? finds deepfake investment scams on YouTube”
Which? has repeatedly warned about the global deepfake investment scams being run by an outfit called 'Quantum AI'. Using deepfake videos of celebrity figures, Quantum AI peddles enticing investment offers on social media, this time on YouTube.
Tempted by the endorsement of a respected celebrity, you might be lured into visiting one of Quantum AI's websites. When you get there you're asked to invest, with the promise of large returns for little to no effort.
You may see profits initially, but then quickly find you'll never be able to withdraw any of your earnings or get your investments back. It's a dangerous scam that can see people lose large sums of money.
In a recent survey, the open banking payments provider Yaspa reported that one in 10 Brits lost money to investment scams. They found that four in 10 never recovered the money they lost after falling victim to a scam, and the average amount lost to scams was £765.
Read on to find out what we found on YouTube and how to stay safe from scams.
Investment scam ads
Which? found eight dodgy videos on YouTube that promoted the scam Quantum AI investment platform, which claims to enable you to invest in the stock market using artificial intelligence (AI). These videos had amassed over 17,000 views at the time of writing.
All of the videos we came across led to websites asking for your first name, last name, phone number and email address.
This is the same information scammers tried to glean in our original investigation into Quantum AI, carried out last year for Which? Tech magazine. We found that once we gave contact information to the scammers, they bombarded us with several daily calls from spoofed numbers coming from all over the world, and each time asked us to make investments of £200 to £250.
We traced one of the YouTube videos back to a generic video of a newsreader presenting breaking news.
Which? contacted YouTube about these videos, and it said: 'Phishing is prohibited on YouTube and when we find content that violates our policies, we take action.' It removed all eight videos that we shared with it.
A thread on Reddit has also shared warnings of these scams. One Reddit user warned of a Quantum AI scam video advert found on YouTube, which impersonated the Prime Minister, Sir Keir Starmer, and Sky News. The video appeared to include a QR code, which reportedly led to a website impersonating the BBC, featuring a fake article with the Prime Minister promoting the benefits of investing with Quantum AI.
We believe the scammers had taken a genuine interview and manipulated it to make it look and sound as if the Prime Minister was promoting Quantum AI's platform.
Countless adverts, emails and calls
Relentlessly targeting people with ads, emails and calls appears to be the tactic of Quantum AI's scam.
One person who'd shared evidence with us for our investigation last year told us how they've continued to be targeted by the fraudsters behind the investment scam. He told us that an advert popped up on YouTube while he was browsing on his phone; the ad impersonated Martin Lewis, Steph McGovern and the BBC. He said it led to a fake BBC copycat website asking for a name, email address and phone number to sign up.
He told us he continues to be contacted by Quantum AI scammers via email to this day.
After you've been scammed, the fraudsters may contact you again, suggesting that they can help recover your lost funds. This is known as a recovery scam, where scammers pose as being able to get your money back, only to scam you again.
Avoiding investment scams
Earlier this year, the Financial Conduct Authority (FCA) put out a warning about Quantum AI, stating that it may be providing or promoting financial services or products without the FCA's permission.
Check that a firm is regulated by the FCA by searching the firm's name on the FCA register. All scam platforms will be unregulated, but some may copy the details of a genuine firm, so always research the company before parting with any sensitive information or money.
It's also best to avoid unrealistic offers presented in social media ads or posts from people promoting investment opportunities, especially if it comes from an unofficial account. Read our Which? guides on investments and investing.
Always be suspicious of entering your personal information into websites you've never visited before, and use a tool like TinEye or Google's reverse image search (by selecting the image icon on the right of the search bar on Google) to check if an image has been used elsewhere.
You can also check when a website was created by typing its address into the Who.is website to view its registration details -- a very recently registered site is a common sign of a scam.
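The registration-age check above can be automated once you have a site's WHOIS "Creation Date". A minimal sketch follows; the function name and the 180-day threshold are our own illustrative assumptions, not figures from Which?:

```python
from datetime import datetime

# Assumption: a domain registered within the last ~6 months deserves
# extra scrutiny. The threshold is illustrative, not an official figure.
RECENT_THRESHOLD_DAYS = 180

def is_recently_registered(creation_date: str, today: str) -> bool:
    """Return True if the domain's WHOIS creation date is recent.

    Both arguments are ISO-format dates (YYYY-MM-DD), as typically
    shown in a WHOIS lookup's 'Creation Date' field.
    """
    created = datetime.fromisoformat(creation_date)
    now = datetime.fromisoformat(today)
    return (now - created).days < RECENT_THRESHOLD_DAYS

# A site registered weeks before you saw its ad is a red flag;
# a long-established domain is less suspicious (though not proof of safety).
print(is_recently_registered("2025-09-01", "2025-10-20"))  # True (49 days old)
print(is_recently_registered("2010-05-14", "2025-10-20"))  # False
```

Note that scammers can also hijack or buy aged domains, so treat registration age as one signal among several rather than a definitive test.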
If you lose any money to a scam, call your bank immediately using the number on the back of your bank card and report it to Action Fraud, or call the police on 101 if you're in Scotland.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – June and July 2025
By Daniel Atherton
2025-08-02
Across June and July 2025, the AI Incident Database added over sixty new incident...
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.