
Welcome to the AI Incident Database


Incident 1275: Purportedly AI-Enhanced Phishing Campaign Allegedly Impersonates Australian Government Services in Large-Scale Welfare Scam

Latest Incident Report
“Cybercriminals unleash fake Centrelink scam on vulnerable Australians”
smh.com.au, 2025-11-19

More than 270,000 malicious emails impersonating Services Australia and Centrelink have flooded Australian inboxes in one of the nation's largest phishing campaigns in years, with the sophisticated attacks specifically targeting the country's most vulnerable citizens.

The massive campaign, uncovered by human risk management platform Mimecast, has been bombarding Australians with fake government emails averaging 70,000 messages per month over the past four months, with attackers using artificial intelligence to create near-perfect clones of legitimate government communications.

The emails mimic government correspondence about Medicare, JobSeeker payments, Superannuation, and Family Tax Benefits with alarming accuracy.

"This particular attack is a significant cause for concern," Mimecast senior director Garrett O'Hara told this masthead. "The targeting of the scam is broad and non-specific, so it's impacting everyday Aussies trying to access essential government services, as well as targeting a wide range of organisations including schools, hospitals, law firms, corporations, and even government agencies themselves."

The criminal operation, tracked by Mimecast as MCTO3001, is exploiting trusted email platforms including SendGrid, Mailgun, and Microsoft Office 365 to disguise its origins and evade spam filters.

"These aren't the clumsy scams of years past," O'Hara said. "Attackers are using legitimate systems and leveraging detailed knowledge of Australian benefit systems including superannuation, Medicare, JobSeeker payments, and Family Tax Benefits, to make their emails look authentic. They're exploiting the trust that Australian citizens have in the federal government to deliver their attacks."

The scammers are using advanced evasion techniques including "reverse tunnelling" -- hiding their infrastructure behind legitimate services to make blocking attempts extremely difficult. Some attackers have even compromised real email accounts or hosted fake government login pages on legitimate web services.

"Once a victim clicks a link and enters their details, attackers can gain access to personal or business accounts, leading to data theft, malware installation, or even full-blown ransomware infections," O'Hara said.

The scale and sophistication of the attacks have raised alarm about AI's role in cybercrime, amid growing evidence that state-backed hackers are exploiting AI systems for cyberattacks. Anthropic, the company behind the Claude AI model, announced on Thursday that Chinese state-backed hackers used Claude to automate roughly 30 attacks on corporations and governments during a September campaign.

'We're seeing criminals blend old-school social engineering with cutting-edge evasion techniques.'

Garrett O'Hara, Mimecast senior director

In those attacks, 80 to 90 per cent of the operation was automated with AI, a higher level of automation than in previous hacks. The attacks occurred "literally with the click of a button, and then with minimal human interaction," Anthropic's head of threat intelligence Jacob Klein said.

While attribution of the Australian attacks remains difficult, Mimecast's threat intelligence team believes the criminals are using AI to generate convincing phishing emails and potentially create post-delivery malware.

"It's impossible to say for sure the degree to which AI is currently being used by criminals, but the level of sophistication for attacks is increasing very quickly," O'Hara said. "We're seeing criminals blend old-school social engineering with cutting-edge evasion techniques. They're not slowing down, they're evolving quickly, so AI is almost certainly playing a part."

The targeting of welfare services means vulnerable Australians are disproportionately at risk. "The most vulnerable people in society" are being targeted, O'Hara said. "It's particularly sickening."

Mimecast advises Australians who receive suspicious government emails to avoid clicking links or downloading attachments. Instead, they should navigate directly to myGov through their browser to verify any communications.

The Department of Home Affairs was contacted for comment. Anyone who receives suspicious emails should report them to ScamWatch.gov.au or call the national cybersecurity hotline at 1300 CYBER1 (1300 292 371). Reports can also be made at cyber.gov.au.


Incident 1273: Purportedly AI-Generated Fake Videos of Louvre Heist Reportedly Circulated Widely Online

“AI-generated scenes ‘from France’s 2025 Louvre heist’ circulate in Hong Kong”
scmp.com, 2025-11-17

A daring daytime heist of jewels from the Louvre museum in Paris on October 19 made international headlines.

During the heist, the thieves parked a truck with an extendable ladder under the museum's Apollo Gallery, and used cutting equipment to get through a window and display cases to steal the jewellery.

A Facebook reel featuring footage purportedly of the theft was posted online by a Hong Kong-based user with more than 240,000 followers.

"At around 9.30am, four masked men ... broke into the building from the balcony and took away the jewellery on display," a voice said in Mandarin at the 41-second mark of the video, as two separate clips of masked people breaking glass cases are shown.

A screenshot of different points in the Facebook reel showing the two fabricated videos of the heist. Photo: Facebook/SCMP Composite

However, a digital forensics expert and the Post's own checks show the clips are not genuine and were likely generated by AI.

The two supposed videos of the heist also circulated widely online in the aftermath of the robbery, including on Douyin and RedNote.

Investigation

The Post reached out to AI Forensics, a European non-profit organisation that investigates digital manipulation. It said that the videos contained a "strong indication" of being AI-generated.

"In both cases objects disappear and appear in the hands of the AI-generated thieves, while their hands morph and fingers disappear," researcher Natalia Stanusch told the Post.

Instances of morphed hands (left) and a disappearing crowbar (right) highlighted in the two fabricated videos. Photo: Facebook/Composite via AI Forensics

Stanusch also noted both videos contained partially obscured watermarks from OpenAI's Sora, an AI video-generation tool.

Instances of the Sora watermarks in both videos. Photo: Facebook/Composite via AI Forensics

In the Post's own checks, the scenes were compared to existing visuals of the Apollo Gallery on the Louvre's website.

In the first clip, the appearance of the gallery entrance is notably different from the genuine article.

A comparison of the entrance of the gallery in the fabricated video (left) with an image of the genuine Apollo Gallery (right). Photo: Facebook/Louvre/SCMP Composite

In the second clip, the motifs lining the actual gallery's roof and far wall do not match.

A comparison of the roof structure and artwork from the fabricated video (left) and the genuine Louvre gallery (right). Photo: Facebook/Louvre/SCMP Composite

SCMP Fact-Check has reached out to the Facebook account owner for comment.

Verdict: False


Incident 1268: Meta's Automated Ad and Targeting Systems Reportedly Enabled Large-Scale Fraud Revenue

“Facebook’s fraud files”
doctorow.medium.com, 2025-11-15

A blockbuster Reuters report by Jeff Horwitz analyzes leaked internal documents that reveal that 10% of Meta's gross revenue comes from ads for fraudulent goods and scams; that the company knows it; and that it decided not to do anything about it, because the fines for facilitating this life-destroying fraud are far less than the expected revenue from helping to destroy its users' lives:

https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/

The crux of the enshittification hypothesis is that companies deliberately degrade their products and services to benefit themselves at your expense because they can. An enshittogenic policy environment that rewards cheating, spying and monopolization will inevitably give rise to cheating, spying monopolists:

https://pluralistic.net/2025/09/10/say-their-names/#object-permanence

You couldn't ask for a better example than Reuters' Facebook Fraud Files. The topline description hardly does this scandal justice. Meta's depravity and greed in the face of truly horrifying fraud and scams on its platform is breathtaking.

Here are some details: first, the company's own figures estimate that it is delivering 15 billion scam ads every single day, which generate $7 billion in revenue every year. Despite its own automated systems flagging the advertisers behind these scams, Meta does not terminate their accounts --- rather, it charges them more money as a "disincentive." In other words, fraudulent ads are more profitable for Meta than non-scam ads.

Meta's own internal memos also acknowledge that they help scammers automatically target their most vulnerable users: if a user clicks on a scam, the automated ad-targeting system floods that user's feed with more scams. The company knows that the global fraud economy is totally dependent on Meta, with one third of all US scams going through Facebook (in the UK, the figure is 54% of all "payment-related scam losses"). Meta also concludes that it is uniquely hospitable to scammers, with one internal 2025 memo revealing the company's conclusion that "It is easier to advertise scams on Meta platforms than Google."
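
To make that feedback loop concrete, here is a minimal, hypothetical sketch of an engagement-optimized ad ranker (not Meta's actual code or ranking system, just the dynamic the memos describe): a single click on a scam ad raises the user's affinity score for that ad category, so similar scams rise to the top of the feed.

```python
# Hypothetical sketch of an engagement-driven ad ranker. This is NOT
# Meta's system; it only illustrates the feedback loop described above:
# clicking a scam ad boosts the victim's affinity for that category,
# so the ranker serves more of the same.
from collections import defaultdict

# Per-user affinity scores, keyed by ad category.
affinity = defaultdict(lambda: defaultdict(float))

def record_click(user: str, category: str) -> None:
    """Treat every click as a positive engagement signal, scam or not."""
    affinity[user][category] += 1.0

def rank_ads(user: str, ads: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Order candidate (ad_id, category) pairs by the user's affinity."""
    return sorted(ads, key=lambda ad: affinity[user][ad[1]], reverse=True)

ads = [("ad1", "retail"), ("ad2", "crypto_scam"), ("ad3", "news")]
record_click("victim", "crypto_scam")  # one bad click...
print(rank_ads("victim", ads))         # ...and scam ads now rank first
```

Nothing in that loop distinguishes a scam click from a legitimate one, which is the whole problem.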

Internally, Meta has made plans to reduce the fraud on the platform, but the effort is being slow-walked because the company estimates that the most it will ultimately pay in fines worldwide adds up to $1 billion, while it currently books $7 billion/year in revenue from fraud. The memo announcing the anti-fraud effort concludes that scam revenue dwarfs "the cost of any regulatory settlement involving scam ads." Another memo concludes that the company will not take any pro-active measures to fight fraud, and will only fight fraud in response to regulatory action.

Meta's anti-fraud team operates under an internal quota system that limits how many scam ads they are allowed to fight. A Feb 2025 memo states that the anti-fraud team is only allowed to take measures that will reduce ad revenue by 0.15% ($135m) --- even though Meta's own estimate is that scam ads generate $7 billion per year for the company. The manager in charge of the program warns their underlings that "We have specific revenue guardrails."
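
The incentive arithmetic is easy to check from the figures Reuters cites (the calculation below is mine, not from the memos):

```python
# Back-of-the-envelope check of the leaked figures reported by Reuters.
# All inputs come from the article above; the arithmetic is my own.
scam_revenue = 7e9       # est. yearly revenue from scam ads
max_fines = 1e9          # Meta's est. worst-case worldwide regulatory fines
guardrail_frac = 0.0015  # anti-fraud measures may cost at most 0.15% of ad revenue
guardrail_usd = 135e6    # ...which the memo states equals $135m

# Even paying the maximum fine, fraud nets ~$6bn/year:
print(f"fraud profit after max fines: ${scam_revenue - max_fines:,.0f}")

# The 0.15% = $135m guardrail implies an ad-revenue base of ~$90bn:
print(f"implied ad-revenue base: ${guardrail_usd / guardrail_frac:,.0f}")
```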

What does Meta fraud look like? One example cited by Reuters is the company's discovery of a "six-figure network of accounts" that impersonated US military personnel and attempted to trick other Meta users into sending them money. Reuters also describes "a torrent of fake accounts pretending to be celebrities or represent major consumer brands" in order to steal Meta users' money.

Another common form of fraud is "sextortion" scams. That's when someone acquires your nude images and threatens to publish them unless you pay them money and/or perform more sexual acts on camera for them. These scams disproportionately target teenagers and have led to children committing suicide:

https://www.usatoday.com/story/life/health-wellness/2025/02/25/teenage-boys-mental-health-suicide-sextortion-scams/78258882007/

In 2022, a Meta manager sent a memo complaining about a "lack of investment" in fraud-fighting systems. The company had classed this kind of fraud as a "low severity" problem and was deliberately starving enforcement efforts of resources.

This only got worse in the years that followed, when Meta engaged in mass layoffs from the anti-fraud side of the business in order to free up capital to work on perpetrating a different kind of scam --- the mass investor frauds of metaverse and AI:

https://pluralistic.net/2025/05/07/rah-rah-rasputin/#credulous-dolts

These layoffs sometimes led to whole departments being shuttered. For example, in 2023, the entire team that handled "advertiser concerns about brand-rights issues" was fired. Meanwhile, Meta's metaverse and AI divisions were given priority over the company's resources, to the extent that safety teams were ordered to stop making any demanding use of company infrastructure, ordered instead to operate so minimally that they were merely "keeping the lights on."

Those safety teams, meanwhile, were receiving about 10,000 valid fraud reports from users every week, but were --- by their own reckoning --- ignoring or incorrectly rejecting 96% of them. The company responded to this revelation by vowing to reduce the share of valid fraud reports that it ignored to a mere 75% by 2023.

When Meta roundfiles and wontfixes valid fraud reports, Meta users lose everything. Reuters reports out the case of a Canadian air force recruiter whose account was taken over by fraudsters. Despite the victim repeatedly reporting the account takeover to Meta, the company didn't act on any of these reports. The scammers who controlled the account started to impersonate the victim to her trusted contacts, shilling crypto scams, claiming that she had bought land for a dream home with her crypto gains.

While Meta did nothing, the victim's friends lost everything. One colleague, Mike Lavery, was taken for CAD40,000 by the scammers. He told Reuters, "I thought I was talking to a trusted friend who has a really good reputation. Because of that, my guard was down." Four other colleagues were also scammed.

The person whose account had been stolen begged her friends to report the fraud to Meta. They sent hundreds of reports to the company, which ignored them all --- even the ones she got the Royal Canadian Mounted Police to deliver to Meta's Canadian anti-fraud contact.

Meta calls this kind of scam, where scammers impersonate users, "organic," differentiating it from scam ads, where scammers pay to reach potential victims. Meta estimates that it hosts 22 billion "organic" scam pitches per day. These organic scams are actually often permitted by Meta's terms of service: when Singapore police complained to Meta about 146 scam posts, the company concluded that only 23% of these scams violated their Terms of Service. The others were all allowed.

These permissible frauds included "too good to be true" come-ons for 80% discounts on leading fashion brands, offers for fake concert tickets, and fake job listings --- all permitted under Meta's own policies. The internal memos seen by Reuters show Meta's anti-fraud staffers growing quite upset to realize that these scams were not banned on the platform, with one Meta employee writing, "Current policies would not flag this account!"

But even if a fraudster does violate Meta's terms of service, the company will not act. Per Meta's own policies, a "High Value Account" (one that spends a lot on fraudulent ads) has to accrue more than 500 "strikes" (adjudicated violations of Meta policies) before the company will take down the account.

Meta's safety staff grew so frustrated by the company's de facto partnership with the fraudsters that preyed on its users that they created a weekly "Scammiest Scammer" award, given to the advertiser that generated the most complaints that week. But this didn't actually spark action --- Reuters found that 40% of Scammiest Scammers were still operating on the platform six months after being flagged as the company's most prolific fraudster.

This callous disregard for Meta's users isn't the result of a new, sadistic streak in the company's top management. As the whistleblower Sarah Wynn-Williams' memoir Careless People comprehensively demonstrates, the company has always been helmed by awful people who would happily subject you to grotesque torments to make a buck:

https://pluralistic.net/2025/04/23/zuckerstreisand/#zdgaf

The thing that's changed over time is whether they can make a buck by screwing you over. The company's own internal calculus reveals how this works: they make more money from fraud --- $7 billion/year --- than they will ever have to pay in fines for exposing you to fraud. A fine is a price, and the price is right (for fraud).

The company could reduce fraud, but it's expensive. To lower the amount of fraud, they must spend money on fraud-fighting employees who review automated and user-generated fraud flags, and accept losses from "false positives" --- overblocking ads that look fraudulent, but aren't. Note that these two outcomes are inversely correlated: the more the company spends on human review, the fewer dolphins they'll catch in their tuna nets.
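
To see why review budget and overblocking trade off, here's a toy simulation (entirely hypothetical numbers, nothing to do with Meta's real classifiers): an automated flagger blocks every ad whose fraud score crosses a threshold, and lowering the threshold catches more fraud at the cost of blocking more legitimate ads.

```python
# Toy illustration of the review/overblocking tradeoff described above.
# Hypothetical numbers only; this is not Meta's classifier.
import random
random.seed(0)

# Simulated ad population: (fraud_score, is_actually_fraud).
# Fraudulent ads tend to score higher, but the distributions overlap.
ads = ([(random.gauss(0.7, 0.15), True) for _ in range(1_000)]
       + [(random.gauss(0.3, 0.15), False) for _ in range(9_000)])

for threshold in (0.8, 0.6, 0.4):
    blocked = [ad for ad in ads if ad[0] > threshold]
    caught = sum(1 for ad in blocked if ad[1])  # real fraud stopped
    dolphins = len(blocked) - caught            # legit ads overblocked
    print(f"threshold={threshold}: caught {caught}/1000 fraud ads, "
          f"overblocked {dolphins} legitimate ads")
```

Human review is what lets a platform run the lower threshold without eating the false positives; cutting that headcount is a choice to leave the threshold high.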

Committing more resources to fraud fighting isn't the same thing as vowing to remove all fraud from the platform. That's likely impossible, and trying to do so would involve invasively intervening in users' personal interactions. But it's not necessary for Meta to sit inside every conversation among friends, trying to decide whether one of them is scamming the others, for the company to investigate and act on user complaints. It's not necessary for Meta to invade your conversations for it to remove prolific and profitable fraudsters without waiting for them to rack up 500 policy violations.

And of course, there is one way that Meta could dramatically reduce fraud: eliminate its privacy-invasive ad-targeting system. The top of the Meta ad-funnel starts with the nonconsensual dossiers Meta has assembled on more than 4 billion people around the world. Scammers pay to access these dossiers, targeting their pitches to users who are most vulnerable.

This is an absolutely foreseeable outcome of deeply, repeatedly violating billions of peoples' human rights by spying on them. Gathering and selling access to all this surveillance data is like amassing a mountain of oily rags so large that you can make billions by processing them into low-grade fuel. This is only profitable if you can get someone else to pay for the inevitable fires:

https://locusmag.com/feature/cory-doctorow-zucks-empire-of-oily-rags/

That's what Meta is doing here: privatizing the gains to be had from spying on us, and socializing the losses we all experience from the inevitable fallout. They are only able to do this, though, because of supine regulators. Here in the USA, Congress hasn't delivered a new consumer privacy law since 1988, when they made it a crime for video-store clerks to disclose your VHS rentals:

https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy

Meta spies on us and then allows predators to use that surveillance to destroy our lives for the same reason that your dog licks its balls: because they can. They are engaged in conduct that is virtually guaranteed by the enshittogenic policy environment, which allows Meta to spy on us without limit and which fines them $1b for making $7b on our misery.

Mark Zuckerberg has always been an awful person, but --- as Sarah Wynn-Williams demonstrates in her book --- he was once careful, worried about the harms he would suffer if he harmed us. Once we took those consequences away, Zuck did exactly what his nature dictated he must: destroyed our lives to increase his own fortune.


Incident 1267: Omnilert AI Reportedly Triggered False Gun Alert at Parkville High, Prompting Student Relocation

“Students Relocated After AI Reports Possible Gun At Parkville High”
patch.com, 2025-11-15

PARKVILLE, MD --- Police said they found no threat after an artificial intelligence security system reported a possible gun Friday at a Baltimore County school.

Officers said they responded to Parkville High School, located in the 2600 block of Putty Hill Avenue, around 5 p.m.

Authorities said students were relocated as police searched the school.

The Baltimore County Police Department said normal school activities resumed after the all-clear.

"Out of an abundance of caution, a Baltimore County police supervisor requested a police search of the property. Students were immediately relocated to a safe area where they were supervised," Parkville Principal Maureen Astarita wrote in a message to families Friday night, according to The Baltimore Sun.

The Parkville gun search followed an October Omnilert false alarm that caused a stir at Kenwood High School in Essex, where the system flagged a football player's bag of chips as a potential gun, prompting officers to handcuff teenage students until they realized there was no threat.

The encounter led officials to call for more human oversight before police respond to Omnilert reports. Baltimore County Public Schools promised annual training on protocols for Omnilert, which only flags potential threats and sends them to humans to verify.


Incident 1269: Waymo Autonomous Vehicle Reportedly Ran Over and Killed a Cat in San Francisco

“Waymo Was on a Roll in San Francisco. Then One of Its Driverless Cars Killed a Cat.”
nytimes.com, 2025-11-15

At Delirium, a dive bar in San Francisco's Mission District, the décor is dark, the drinks are strong, and the emotions are raw. The punk rockers and old-school city natives here look tough, but they are in mourning.

Kit Kat used to bar-hop along the block, slinking into Delirium for company and chin rubs. Everybody knew the bodega cat, affectionately calling him the Mayor of 16th Street. Kit Kat was their "dawg," the guys hanging out on the corner said.

But shortly before midnight on Oct. 27, the tabby was run over just outside the bar and left for dead. The culprit?

A robot taxi.

Hundreds of animals are killed by human drivers in San Francisco each year. But the death of a single cat, crushed by the back tire of a Waymo self-driving taxi, has infuriated some residents in the Mission who loved Kit Kat --- and led to consternation among those who resent how automation has encroached on so many parts of society.

"Waymo? Hell, no. I'm terrified of those things," said Margarita Lara, a bartender who loved Kit Kat. "There's so many of them now. They just released them out into our city, and it's unnecessary."

Kit Kat's death has sparked outrage and debate for the past three weeks in San Francisco. A feline shrine quickly emerged. Tempers flared on social media, with some bemoaning the way robot taxis had taken over the city and others wondering why there hadn't been the same level of concern over the San Francisco pedestrians and pets killed by human drivers over the years.

A city supervisor called for state leaders to give residents local control over self-driving taxis. And, this being San Francisco, there are now rival Kit Kat meme coins inspired by the cat's demise.

But all of that is noise at Delirium. Kit Kat was loved there. And now he is gone.

"Kit Kat had star quality," said Lee Ellsworth, wearing a San Francisco 49ers hat and drinking a can of Pabst Blue Ribbon beer.

Before Kit Kat's death made headlines, Waymo was on a roll. The driverless car company owned by Alphabet, the parent company of Google, fully rolled out its San Francisco taxi service in 2024 and now has a fleet of 1,000 vehicles in the Bay Area. It announced an expansion this month with freeway service down the Peninsula and pickups at the airport in San Jose. Waymo expects to serve San Francisco International Airport soon, too.

Just a couple of years ago, the white Jaguars with whirring cameras on top were considered oddities. Passers-by would do double takes when they saw the steering wheel turning with nobody in the driver's seat.

Waymos are now a top tourist attraction, however. Many women find them a safer choice than relying on an Uber or Lyft driven by a man. So many parents have ordered them for their children that some schools can look like Waymo parking lots.

And Grow SF, a moderate political group with ties to the tech industry, found that San Francisco voter support of Waymo had jumped from 44 percent in September 2023 to 67 percent this July.

Still, Kit Kat's death has given new fuel to detractors. They argue that robot taxis steal riders from public transit, eliminate jobs for people, enrich Silicon Valley executives --- and are just plain creepy.

Jackie Fielder, a progressive San Francisco supervisor who represents the Mission District, has been among the most vocal critics. She introduced a city resolution after Kit Kat's death that calls for the state Legislature to let voters decide if driverless cars can operate where they live. (Currently, the state regulates autonomous vehicles in California.)

"A human driver can be held accountable, can hop out, say sorry, can be tracked down by police if it's a hit-and-run," Ms. Fielder said in an interview. "Here, there is no one to hold accountable."

Ms. Fielder has strong ties to labor unions, including the Teamsters, which has fought for more regulation of autonomous vehicles, largely out of concern for members who could eventually lose their own driving jobs in other sectors.

Ms. Fielder has posted videos to social media, showing her walking the streets of the Mission as she discusses Kit Kat.

"We will never forget our sweet Kit Kat," she says in one of them. "The poor thing ... suffered a horrible, horrible, long unaliving."

(The word "unaliving" is used by some social media users to avoid algorithms that suppress videos using words such as "death.")

Memorials have sprung up at Randa's Market, where the owner, Mike Zeidan, took in Kit Kat six years ago to catch mice. The cat hung out on the shop's counter when he wasn't roaming 16th Street. One neighbor used to bring him slices of salmon every day; another texted a photo of Kit Kat to his mother each morning.

On a tree outside hang photos of the cat and a sketch of him with a halo above his head.

"Save a cat," the drawing reads. "Don't ride Waymo!"

Floral bouquets, a stuffed mouse and a Kit Kat candy wrapper round out the memorial.

One tree over is a display of a different sort.

"Waymo killed my toddler's favorite cat," a sign reads. "Human drivers killed 42 people last year." (Actually, according to city data, human drivers killed 43 people in San Francisco last year, including 24 pedestrians, 16 people in cars and three bicyclists. None were killed by Waymos.)

The sign was an attempt to put the cat's death in context, in a walkable city where pedestrians still face peril. In 2014, the city pledged to end traffic fatalities within 10 years, but last year's total was one of the highest on record.

The city does not track how many animals are killed by cars each year, but the number is in the hundreds, according to Deb Campbell, a spokeswoman for Animal Care and Control in San Francisco.

She said the agency's cooler last week contained the bodies of 12 cats thought to have been hit by cars in recent weeks. None of them seemed to have prompted media coverage, shrines or meme coins.

Waymo does not dispute that one of its cars killed Kit Kat. The company released a statement saying that when one of its vehicles was picking up passengers, a cat "darted under our vehicle as it was pulling away."

"We send our deepest sympathies to the cat's owner and the community who knew and loved him," Waymo said in a statement.

Waymo is adamant that its cars are much safer than those driven by humans, reporting 91 percent fewer serious crashes compared to human drivers covering the same number of miles in the same cities. The data was in a company research paper that was peer-reviewed and published in a journal. Waymo also operates full taxi services in Los Angeles and Phoenix and provides rides through a partnership with Uber in Atlanta and Austin, Texas.

Mayor Daniel Lurie of San Francisco has been a big fan. He said earlier this year he would allow Waymo to use Market Street, the city's central thoroughfare, which for five years had been accessible mainly to pedestrians and public transit vehicles. He also defended the autonomous taxis in an interview on Thursday with the tech journalist Kara Swisher after she brought up Kit Kat.

"Waymo is incredibly safe," he said. "It's safer than you or I getting behind a wheel."

Rick Norris, who works at the Roxie Theater in the Mission, said that he liked Waymos and had noticed that they were navigating the city's tricky streets better and better. But he was concerned after he spoke with several people who had witnessed Kit Kat's last moments and recounted how they had tried to stop the Waymo when they saw the cat beneath it.

The car just drove away.

It was at that moment that Sheau-Wha Mou, a Delirium bartender and karaoke host, took her cigarette break. She saw people panicking as they stood on the sidewalk. She rushed over and found Kit Kat suffering, with blood streaming from his mouth.

"I knelt down, and I was talking to him," she recalled. "'What happened? Are you OK?'"

She said she had used the bar's sandwich board sign as a stretcher. A stranger then drove her and Kit Kat to a nearby emergency animal clinic. Mr. Zeidan, the bodega owner, arrived soon after.

An hour later, the veterinarian told Mr. Zeidan that Kit Kat was dead.

Photos of Kit Kat still sit next to the cash register at Randa's Market, alongside the dice and cigarette lighters for sale. Mr. Zeidan said he still misses the mouser that became the block's mayor.

Darrell Smith stopped by the market on Monday, part of a weekly ritual that also involves ordering a mixed plate at the nearby Hawaiian barbecue spot. He missed Kit Kat, he said, but felt that dwelling on the robot car seemed like a waste of time.

"I'm skeptical about those Waymo cars myself," he said. "But A.I. is the future. We can't stop it whether we like it or not."

Quick Add New Report URL

Submitted links are added to a review queue to be resolved to a new or existing incident record. Submissions that include full incident details are processed ahead of bare URLs.
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – August, September, and October 2025

By Daniel Atherton

2025-11-08

At Templestowe, Arthur Streeton, 1889

🗄 Trending in the AIID: Across August, September, and October 2025, the AI Incident Database logged one...

Read More
The Database in Print

Read about the database on the PAI Blog, Vice News, Venture Beat, Wired, arXiv, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 631
  • 🥈 Anonymous: 154
  • 🥉 Khoa Lam: 93

Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 722
  • 🥈 Khoa Lam: 230
  • 🥉 Anonymous: 228

Total Report Contributions
  • 🥇 Daniel Atherton: 2876
  • 🥈 Anonymous: 964
  • 🥉 1: 590
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.

Organization Founding Sponsor
Database Founding Sponsor
Sponsors and Grants
In-Kind Sponsors
