Welcome to the AI Incident Database
Incident 1482: Purported AI-Generated Voice Reportedly Impersonated Washington Man's Daughter in $13,000 Extortion Scam

“His daughter called him crying, and then another voice got on the phone. Only one of them was real”
spokesman.com, 2026-05-10

Mark A. Young's daughter called him at his home on a Monday. The young woman's panicked voice projected fear and pain. She needed her father.

"She was crying and upset," Young said. "She told me, 'Dad, I got in an accident and I'm in trouble. I need help.'" Knowing she had traveled from Spokane to Seattle for a concert, the tone of the familiar voice sparked an immediate defensive reaction from Young, who spent most of his career responding to danger.

A male voice then got on the phone, said he was a medic and told Young, a retired police officer, that his 24-year-old daughter had been involved in a collision. And then a second male voice took over the conversation. That voice continues to haunt Young. That man, who had a Scottish accent, asked Young questions to confirm he was the driver's father.

"Then his tone changed," Young said. "He first identified himself as a drug dealer." The man, who never gave his name, told Young that his daughter had been involved in a collision that had interrupted a drug transaction. She had seen "something she wasn't supposed to see." "He told me because of that, he had taken my daughter as a hostage and he was deciding what to do with her," Young said. "He said he could kill her, make her a prostitute or sell her overseas to sex traders." During the conversation, the voice several times allowed Young to speak with his daughter, whom, at the family's request, The Spokesman-Review agreed not to identify. "There is nothing like a loved one suffering over the phone and there is nothing you can do," Young said. "I thought I was a pretty tough guy. But that almost broke me."

But the voice wasn't that of his daughter. It was a fake, an AI-generated copy.

The perpetrators, investigators later surmised, used a recording of the daughter's voice and a program that so accurately mimicked her speech pattern that it convinced Young to leave his home in the small Whitman County town of Garfield, visit several banks, and drive all over the Palouse and into Idaho to get money to save her life.

The Federal Trade Commission began warning consumers as early as 2023 about the new type of scam that uses AI to copy the voices of family members. "Scammers ask you to pay or send money in ways that make it hard to get your money back. If the caller says to wire money, send cryptocurrency, or buy gift cards and give them the card numbers and PINs, those could be signs of a scam," according to the FTC news release.

In Young's case, he wired $13,000 and was in the process of getting thousands more when the 30-hour ordeal finally ended inside a Pullman bank branch. The calm demeanor of a longtime banker, and a quick response by Pullman Police, finally led to Young's daughter being reached to confirm she was safe and at home in Spokane.

"I almost collapsed," said Young, who cried at learning the truth. "I was just so relieved. It took me days to literally start feeling normal again."

The intensity of the ordeal, he said, cannot be overstated, even for a man who worked 26 years as a police officer in Santa Rosa, California, and volunteered to serve as a Marine in Vietnam, where he was shot in the arm. "You are living every minute like it's your last," Young said of the AI extortion. "You are trying to figure out what you can do. That's a hard feeling for me to grasp, because I've never felt helpless like that."

Chasing the fear

The call came Monday, March 23. After the initial conversations convinced Young that his daughter was in danger, he began complying with the demands of the man with the Scottish accent. The man told Young that he first needed to transfer money to cover the thousands lost in the drug transaction his daughter had interrupted.

Young, who worked six years as a journalist before becoming a police officer and has also authored six books, said he focused his entire energy on helping his daughter. "I had no doubts that I had spoken to my daughter, and I assumed that she was in his custody," he said. "I did whatever I could to cooperate for the next 30 hours."

The voice ordered Young to stay on the call, which prevented him from contacting his wife or calling for help. "He said, 'If I lose you, I'm going to take it out on your daughter.' My training as a cop for 26 years, and being in the Marines, I knew how to keep my emotions under control and handle myself as a dutiful victim," Young said. "But it wasn't easy.

"I just played along with him, and he directed me to go to one of two banks that we have."

Young first drove to a branch in Garfield and withdrew $5,000. The caller directed him to drive to a Walmart in Pullman, where he had Young transfer half the money to someone in Mexico. The caller directed Young to do the same thing at a Walmart in Moscow. He then traveled to the WaFd (formerly Washington Federal) Bank branch in Pullman, where he withdrew another $5,000, and drove to Lewiston to transfer the same $2,500 amounts from two different stores there.

"I'm on the phone with him the whole time. When I'm traveling, I have to list off all the mile markers on the road so he knows where I'm at," said Young, who could tell the man on the other end of the line was real, unlike the voice mimicking his daughter. "He let me talk to my daughter a couple more times. In the meantime ... he's trying to get more information from me."

The caller asked whether Young, 75, had retired and what he did for his career. Fearing that his daughter had already told the abductor about his past, Young said he told the truth to protect her.

The caller then directed Young to travel to Boise, because he said he feared that local law enforcement might be monitoring the situation. Young complied, drove hours south and got a hotel room in Boise. Once there, he snuck out of his room, raced down to the hotel lobby and used the hotel phone to call his wife. Katie Young was in Chicago on business. When Mark Young called, her phone was off and the call went to voicemail. But his wife's voicemail was full, so he was not able to leave a message. "That was a bummer," Young said. He got back to his room just as the voice called his cellphone to check on him.

The next day, Young transferred another $2,500 from a store in Boise. But when he tried it again, he learned that his ATM card had been blocked. He told the voice that he could solve the ATM problem by driving back to his bank branch in Pullman. On the drive back north, Young passed several places where he lost cellphone coverage in the Idaho mountains. During one of those stretches, he pulled over and wrote his wife's phone number and an explanation of his situation on a piece of paper, so that the FBI could later tell his wife what was happening in case the situation worsened.

He finally made it back to Pullman the afternoon of Tuesday, March 24. "He led me to believe that he was doing surveillance on me while I went to the bank," Young said. Young walked into the branch, located at 405 E. Main St. in Pullman, and saw someone familiar: Alex Navarro, the assistant branch manager. Young handed Navarro a note, pointed at it and walked into the bank's bathroom.

Adventures in banking

Navarro said he took the note and immediately thought something else was happening. Young "walked in with a purpose. He had this note. I knew something was wrong when he set it down and pointed at it," Navarro said. "The first thing that popped in my mind was, 'This guy is robbing me.' Then he went to the bathroom. I thought, that's weird.

"Then I read the note. I was not expecting that."

Young said he fully intended to withdraw $17,000, which is what the voice said it would take to buy his daughter's freedom. Sensing the danger, and realizing through Young's gestures that the perpetrator was listening in by telephone, Navarro began calmly explaining the process to get the money.

Navarro then told Young that he had to make a phone call to the ATM company to release his card. "I was just buying time," Navarro said. "I made a phone call to our card department and was talking on the phone. But actually, I was texting the teammates that I have a very serious situation going on and I need someone in the back room."

Navarro came up with another excuse, saying that he had to go in the back to retrieve cash. He used the ruse to meet with a longtime teller. "I told her, 'We have a possible extortion kidnapping.'" She went to another part of the building and called Pullman Police, located a block away.

In the meantime, Young passed another note asking Navarro whether he was going to get his money. Navarro told him no and indicated that police had been alerted. "He was truly panicked," Navarro said. "He was grabbing his forehead. His physical distress was clear. He wrote back in another note, 'They are watching us. They said they are outside the building watching us.'"

But Navarro needed to stall for time to allow police to arrive. He solved it with the slow count. "I told him, 'I have to count back the $17,000 to you.' So, we went through a fake transaction," he said. With the voice listening, Navarro made a big show of counting out the money in $100 increments. Because the money didn't exist, Navarro pantomimed placing papers down on the desk to sound like he was stacking cash.

Halfway through the counting delay tactic, Officer Shane Emerson entered the branch. Navarro and Young took pains to let the officer know that the perpetrator was listening, and they also passed Emerson a note warning that others could be watching the bank. Emerson, according to his police report, called for backup officers to search the branch's parking areas and other locations to see if they could locate conspirators. He then took Young's note, the one with his wife's phone number written on it, and left the bank before calling Katie Young.

"I advised her what was going on, and she said she was on the phone with (her daughter). She merged the calls," Emerson wrote. The daughter "said she was at her residence in Spokane and was absolutely safe." Emerson returned to the branch to tell Mark Young that his family was safe. "Young was shocked and started to cry," Emerson wrote.

"I didn't realize how much I was holding in until I learned my daughter was fine," Young said. He said he later sent both Emerson and Navarro notes thanking them for how they handled and defused what he was convinced was a life-or-death situation.

Navarro said he's seen a lot during 35 years of banking, but nothing compared to that afternoon. "This is the first time I have ever heard somebody being involved with something like this," Navarro said, "where an AI-generated voice has mimicked a person's voice and they weren't able to tell the difference."

The aftermath

Young followed up with Pullman police and later traveled to Spokane to speak with FBI investigators, but so far has not been able to recover any of the funds he wired from the multiple stores.

Katie Young said it's taken her husband a long time to recover from the ordeal. She said she was in the lobby of the hotel in Chicago when Officer Emerson called her. "You never really want to get a call, when you are out of town, from the police department. It's never really good news," she said. "I was instantly at attention." After merging the calls to let Emerson know that her daughter was safe, Katie Young flew to Spokane and drove to Garfield. "I was home the next day. My daughter drove over," she said. "We were all together and happy."

She stressed that she doesn't believe her family was the victim of a mere scam. "This is really extortion. It's a very different thing," she said. "And it's unconscionable. I can't imagine how anyone could possibly do that."

They still don't know how the extortionists obtained their daughter's voice. Both Katie and Mark Young implored parents to work with family members to come up with a safe word only they would know, to use in emergencies like this. It's about the only defense they could imagine in a similar scenario.

"They can fabricate it so well now," Mark Young said of his daughter's voice. "I could not tell it wasn't her. To me, it was her voice. I was trying to calm her down while trying to calm this guy down."

He noted that throughout the 30-hour ordeal, the man with the Scottish accent called back dozens of times. Many of those calls came from different phone numbers with area codes from Oregon, Washington and Idaho.

After the rush of emotion on learning his daughter was safe, Mark Young left the Pullman branch. When he got to his car, the voice called back one last time. "The bad guy comes on the phone and said, 'You got my money?' I told him, 'No. I know my daughter is fine,'" Young said. "I told him several choice words that I can't repeat." The voice claimed that he had guns and could kill everyone Young loves. "I told him, 'I pray you come looking for me,' and I told him what I would do," Young said.

Incident 1481: Purported AI-Generated YouTube Network Reportedly Promoted Alberta Secession and U.S. Annexation Narratives

“Foreign actors exploiting Alberta separatist debate to stoke discord, researchers say”
theglobeandmail.com, 2026-05-09
Early this year, Canadian researchers who track online influence campaigns and foreign propaganda happened upon something peculiar. Using artificial intelligence to catalogue the activity of Russian websites and social-media accounts known to be spreading disinformation about the invasion of Ukraine, the researchers noticed a new topic suddenly garnering more attention than it ever had from those groups: Alberta.

Between late December and late April, references to Alberta separatism and various related themes, including talk of the province becoming a U.S. state and Canada failing as a country, rose sharply from known Russian content farms. During those four months, Alberta was the focus of 67 items produced and distributed by Pravda Network, nearly five times more than other Canada-related topics.

The researchers say such content, created to inflame the debate in Alberta and undermine national interests, is designed to be pushed online and find footing with like-minded Canadians, then mix into the local conversation through sharing and reposting, "creating a laundering effect in which local grievances are blended with foreign strategic narratives."

In a study being released this week by the Global Centre for Democratic Resilience, Brian McQuinn, co-director of the Centre for Artificial Intelligence, Data, and Conflict at the University of Regina, and Marcus Kolga, director of DisinfoWatch, an organization that studies online influence campaigns, say Alberta's proposed referendum on independence is being targeted by foreign actors seeking to sow discord and undermine Canadian interests.

"We're kind of sleepwalking into this referendum and we are already being targeted a lot more than people realize," Dr. McQuinn said in an interview.

The activities in question, according to the report, range from covert influence campaigns run by countries such as Russia, China and others to foment discord inside the province to public remarks and actions by the Trump administration to encourage Alberta separatists, including meeting with their representatives in Washington. A third category involves online content mills producing AI-generated YouTube videos for profit, stoking and clouding the debate with falsehoods and narratives designed to deepen divisions.

"Foreign adversaries are exploiting the Alberta separatist debate to erode social cohesion, deepen domestic divisions, undermine trust in democratic institutions, and amplify perceptions of political instability that damage investor confidence in Canada," the authors say in the report, which will be presented at a conference in Toronto Wednesday and was provided to The Globe and Mail in advance.

"Canada's cognitive sovereignty -- the ability of Canadians to make political decisions freely, without foreign coercion or manipulation -- is not simply under threat; it is being actively contested by foreign actors seeking to shape Canada's democratic future."

The proposed referendum in Alberta, slated for Oct. 19 if the vote moves forward, has worried researchers in this field, since it is considered fertile ground for malign forces to attempt to influence the outcome or undermine Canadian unity. Dr. McQuinn and Mr. Kolga say that is already happening. The unanswered question so far is what effect these efforts are having, or could have in the months ahead, and at what scale they are being conducted, given that so much of it is hidden, they say.

The examples are not limited to this week's report.

Last fall, researchers working for Insikt Group, the research arm of Massachusetts-based cybersecurity firm Recorded Future, also turned up something peculiar. While investigating a covert Russian network called CopyCop, also known as Storm-1516, which has been accused of spreading online disinformation designed to foment divisions in the West, analysts at Insikt came across an unusually specific website about Alberta.

On the surface, albertaseparatist.com and its associated Instagram and YouTube accounts looked like a grassroots campaign by aggrieved Albertans seeking to mobilize support for the proposed referendum on independence.

However, Insikt alleges the site is one of hundreds operated by Storm-1516, an offshoot of Russia's Internet Research Agency, a St. Petersburg cyberstrategy unit that U.S. authorities identified as interfering in the 2016 presidential election.

According to publicly available records, the website is registered under the name James Williams of Delta, B.C. However, The Globe found the address listed doesn't exist in Delta, and the corresponding phone number is incorrect.

In a threat analysis the firm issued on Russia in September, Insikt says its analysis of the tactics, techniques and procedures used by the Russian unit indicates the website is linked to CopyCop.

The operation "is almost certainly attempting to capitalize on growing pro-independence sentiment in the Canadian province of Alberta and exacerbate domestic polarization in Canadian politics amid calls for an independence referendum," the threat analysis said. Insikt did not respond to requests for comment.

Cipher and AI

The suspected influence campaign tracked this year by Canadian researchers was uncovered using Cipher, an artificial-intelligence system Dr. McQuinn developed with University of Alberta computer scientist Matthew Taylor.

Cipher automates the gathering and cataloguing of online content and identifies developing narratives, allowing faster identification of potential disinformation campaigns.

In 2023, the Canadian researchers used Cipher to study Russia's attempts to undermine support for Ukraine in Canada. What they found was that even before the Russian military invasion was launched, talk of corruption in Ukraine, discord in the North Atlantic Treaty Organization, and questions about Ottawa's support for the country began to proliferate online, and were targeted at Canada.

The 2023 study determined that a collection of about 200,000 Russian-linked accounts on Twitter, now known as X, including a core group of about 90 pro-Kremlin accounts, was tailoring narratives specifically for Canadian audiences and had developed an "outsized influence" among Canadian social-media users. Once the information made it into the Canadian online discussion, it often lost its provenance. Dr. McQuinn said as much as 83 per cent of the "ecosystem" spreading what the researchers determined was foreign propaganda, through sharing and reposting, was made up of Canadians who couldn't tell the difference or weren't concerned where the narratives came from.

"They were shaping the environment in a really significant way," Dr. McQuinn said.

Drawing on two years' worth of data, the report said the influence campaigns were also politically indiscriminate, targeting either end of the political spectrum in Canada, seeing both as potentially useful.

The work continued into this year, when the uptick in Alberta content was detected. The researchers said the 67 items related to Alberta separation in a four-month span stood in contrast to 14 mentions of Ontario, of which eight sought to highlight U.S. President Donald Trump's threats to close down a bridge linking the province to Michigan.

The activity spiked in January, around the time that Alberta separatist leaders spoke publicly about meeting with Trump administration officials in Washington, which the authors also see as a form of external influence.

"In the Pravda Network, it just kept popping up," Mr. Kolga said. "Clearly they were monitoring the information space, and then they started pouring fuel onto that."

It wasn't the first time this had been spotted.

"As early as 2019, the Russian state media platform Sputnik took an interest in the fringe Western separatist movement known as Wexit," the report says. "This was at a time when it had little meaningful public support, likely inflating its perceived legitimacy, emboldening its organizers, and signalling that Moscow was paying attention."

The campaigns targeting Alberta contain four primary themes, according to the report. First, they push the idea that separatist sentiment in the province is growing; second, they amplify and sometimes distort long-held grievances between Alberta and Ottawa, arguing that the local population is being exploited and the path to prosperity involves breaking away to the United States; third, they put forward the notion that Alberta has strong international support for separation; and fourth, they mix falsehoods and inaccuracies with true news items to lend legitimacy to propaganda-based content.

The researchers have since trained Cipher to zero in specifically on suspected Alberta influence campaigns from Russia, the U.S. and other countries, and have begun tracking disinformation targeted at the referendum. The first data set from that work is expected in a month or two.

There is a risk of affecting the polls, the report says.

Support inside Alberta for independence has topped out at slightly less than 30 per cent in polling in the past year, though those numbers drop when people are asked whether they would still support separation if the costs were significant. The higher number "sits within the range where historical precedent shows dramatic shifts are possible in a short period of time," the report says.

British support for Brexit was between 40 per cent and 47 per cent six months before the 2016 vote, the authors say, while those backing separation in Quebec in 1995 jumped to 50 per cent from 39 per cent as the vote approached. Support for Scottish independence in 2014 similarly rose to 45 per cent from about 30 per cent in the final months of the campaign, the report says.

Foreign-influence campaigns are harder to pick out than people think, Dr. McQuinn said.

"Really good disinformation starts with something you nod your head to," he said. "That's how you get people passing something on without even looking at what's actually in it."

In the case of the suspected disinformation detected in January, Mr. Kolga said one of the concerns is that the content is being used to train AI systems known as large language models, so as to inject misinformation into AI searches about Alberta separation and other topics.

"Just by flooding it with information that supports their positions, the hope is to manipulate the responses that these platforms are giving," Mr. Kolga said.

While the kind of foreign-influence campaigns the Cipher software is designed to detect are often murky and covert, others exist out in the open. Mr. Kolga and Dr. McQuinn consider statements and actions by the Trump administration in recent months, along with the stoking of Alberta separatism by U.S. MAGA influencers on podcasts and other platforms, to be similar in nature.

"US involvement in Alberta separatism is not covert -- it is overt," the report says.

"By overt, we are referring to official engagement, where senior US government figures have met directly with Alberta separatist leaders and made public statements validating their cause, while the US is led by a president who has repeatedly expressed interest in annexing Canada."

For countries wanting to destabilize Canadian interests, particularly at a time when Canada is negotiating trade agreements, the separatist debate provides fertile ground, the researchers say.

If the referendum on separation goes forward, the report highlights three risks of disinformation that can be expected to emerge prior to the vote, and in the aftermath.

The proposed referendum is facing a First Nations legal challenge arguing that any effort to separate violates treaty agreements. As the courts decide whether the referendum can proceed, and as Elections Alberta validates signatures gathered on a petition to hold the vote, the report says influence campaigns will likely target the legitimacy of the referendum, stoking discord on both sides.

"Narratives may claim that valid signatures were secretly rejected, that authorities are concealing public support, or that courts have 'cancelled' a referendum. Fabricated screenshots, documents, or statements may be used to inflame distrust," the report warns.

The writers expect the lead-up to the vote would also be targeted with disinformation.

"Narratives may focus on voter eligibility, ballot counting, non-citizen voting, foreign funding, misleading interpretations of referendum rules, and false claims that a referendum would automatically produce independence," the authors write. "Other narratives may portray separation supporters as persecuted or targeted by state authorities, creating a potential pretext for foreign actors to justify intervention."

After the vote, the authors expect to see campaigns attempting to delegitimize the outcome "through claims of fraud, hacking, forged documents, foreign recognition, or federal obstruction."

'Urgent threat'

In 2024, the U.S. Department of Justice announced the seizure of 32 internet domains it alleged were Kremlin-directed sites built to covertly spread Russian propaganda and influence U.S. politics, including that year's presidential election.

"Companies operating at the direction of the Russian government created websites to trick Americans into unwittingly consuming Russian propaganda," the DOJ said in announcing the seizures.

As part of the investigation, the FBI obtained three internal Russian documents that detailed the playbook for running influence campaigns, which were filed as exhibits in the case.

One alleged Russian planning document, titled The Good Old USA Project, describes how online communities are built in advance, kept in a "sleeping state" as they organically grow their audiences in a target community, and are later mobilized.

"At the right moment, upon gaining momentum, these communities become an important instrument of influencing the public opinion in critically important states," the document says, distributing "bogus stories disguised as newsworthy events."

Another document filed as evidence in the case, titled US Social Media Influencers Network, outlines methods for creating online influence.

"Active accounts in each state will be maintained on behalf of a fictitious individual, who actively supports the U.S. Political Party A and represents 'a community of local activists.'

"In order to eliminate the possibility of detection of the 'Russian footprint' in the proposed project, a multi-level protection of the infrastructure will be built. It will contain VPN services, physical servers located in the United States, etc."

The potential sophistication of such foreign campaigns has the Canadian government concerned.

Last week, the Senate standing committee on national security, defence and veterans affairs produced a report examining the issue, titled Russia's Disinformation: Understanding the Challenge, Strengthening Canada's Response.

"The committee is convinced that Russia's disinformation poses an urgent threat to Canada's national security, democratic institutions and social cohesion," the Senate committee report says.

"The Government of Canada has been making efforts to address disinformation. However, the extent of Russia's disinformation exceeds Canada's current capacity to address it effectively."

Though the Senate committee report focuses on Russia, the threat is from multiple state and independent actors, Mr. Kolga and Dr. McQuinn say.

"Foreign adversaries systematically exploit these vulnerabilities, moving at algorithmic speed, while Canada's institutional response remains slower, fragmented, and often reactive. Monitoring alone is therefore insufficient. Canada needs better sequencing between early detection, risk assessment, public communication, and institutional response," the authors say in their report.

Dr. McQuinn said he is particularly concerned about Elections Alberta in the face of the referendum.

"They have a tough task," Dr. McQuinn said. "Their organizational capacity is relatively limited."

Michelle Gurney, a spokesperson for Elections Alberta, said the agency is concerned, and has made changes in an attempt to address the threat of online disinformation.

The agency has created an Information Integrity Unit, "specifically focused on all forms of deepfakes, misinformation, disinformation, and other nefarious online activities, both foreign and domestic," Ms. Gurney said.

"We are in the process of standing this team up and procuring an expansion on our use of industry leading digital media analysis, monitoring, and listening software and systems."

The changes include legislation, recently passed, that gives the agency some power to address suspected disinformation campaigns or deepfakes, which are phony videos that appear real.

Prior to the new legislation, "Elections Alberta did not have any legislative authority to change or ask for the removal of posts of this nature," Ms. Gurney said in an e-mail.

Public Safety Canada spokesperson Margo Boyle said foreign-interference threats are a priority for the federal government, but did not say specifically what work is being done in advance of the proposed Alberta referendum.

"When credible information suggests that a foreign state or foreign linked entity may be attempting to interfere in political processes in Canada, federal agencies do have the authority to assess, investigate, and act within their respective mandates," Ms. Boyle said in an e-mail.

The authors of the report say Canadians have a right to debate regional grievances and federalism, which are part of a democratic process.

"The danger is not the existence of that debate. The danger is that foreign governments, state-aligned media, ideological networks, and profit-driven manipulation systems are seeking to distort it," the report concludes.

"When external actors amplify separatist narratives, normalize annexation, encourage national rupture, or undermine confidence in democratic processes, the issue is no longer only a matter of provincial politics. It becomes a direct threat to Canada's democratic integrity, national security, and cognitive sovereignty."


Incident 1479: ChatGPT-Generated Image of Nonexistent 'Homeless Man' Was Used in False St. Petersburg, Florida Burglary and Sexual Battery Reports

“Woman Guilty Of ChatGPT Rape Hoax”
thesmokinggun.com2026-05-06

DECEMBER 2--The Florida woman who gave police an AI-generated photo of a homeless man who purportedly barged into her residence and raped her has been convicted of falsely reporting a crime, a sham prompted by a "TikTok challenge" and her desire for attention, records show.

Brooke Schinault, 32, yesterday pleaded no contest to the hoax and was adjudged guilty of the misdemeanor. She was fined and placed on probation.

In October, Schinault called 911 to report that a male intruder forced his way into her St. Petersburg apartment and committed the sexual assault while her infant slept in another room.

Schinault claimed to have taken a picture of the man while he sat on her sofa. Police provided TSG with the photo in response to a public records request.

The reported burglary/battery prompted a police response that included seven officers, a detective, six rescue personnel, and a forensic technician who gathered DNA samples.

Schinault's story fell apart when Det. Dagni Closser examined the photo of the purported suspect, a hoodie-wearing white male in his 30s. "As I am familiar with many social media sites and trends, I recognized the male as a TikTok challenge that is viral," Closser reported. The "AI homeless man challenge," Closser noted, involved the uploading of a residential photo to an AI application, which then inserts a photo of a homeless man into the image.

"Most place the subject on their couch and then screenshot the result and send to loved ones in a prank that places concern that an unknown subject is in their home," said Closser, who reported finding "several TikTok reels of this challenge" that used the same photo of the homeless man.

Confronted by police, Schinault . . . initially claimed she only used AI to sharpen a blurry photo of her attacker. She eventually confessed, saying that she was dealing with depression, was not sleeping, and "wanted attention."

The hoax image, Schinault said, was made by using "Google and ChatGPT." To create the photo, cops reported, "Brooke typed 'a homeless man.'"


Incident 1478: Scammers Reportedly Used AI-Generated Images of Missing Dog Archer to Solicit Fraudulent Vet Payment from Deltona, Florida Family

“Florida family warns of AI scam targeting owners of missing pets”
fox13news.com2026-05-06

DELTONA, Fla. - A Florida family is sharing their story to warn other pet owners after scammers used artificial intelligence to orchestrate a scam involving their missing dog.

Bill Cosens said the ordeal began Sunday morning shortly after his beagle mix, Archer, escaped from his backyard. After posting about his missing dog on social media, Cosens received a call from a woman claiming she had found the dog.

The caller told Cosens that Archer had been hit by a vehicle and required $2,800 for immediate emergency surgery. The scammer sent photos that appeared to show Archer on an operating table; the images were fake, generated with AI from pictures Cosens had posted to social media.

What they're saying:

 "When we start thinking logically and stepping back from our emotions, then it starts becoming pretty obvious that we're being scammed," Cosens said. "And now we go from being upset and distraught to being pretty ticked off."

Cosens grew suspicious when he researched the address the scammer provided for the veterinary clinic and discovered it was the address of the nearby city hall. Luckily, he didn't send any money.

Cosens said a neighbor witnessed a woman putting Archer into a vehicle and driving away. The neighbor was eventually able to track the woman down and return the dog safely to Cosens.

A Growing Trend:

The Volusia County Sheriff's Office is currently investigating the incident. This case mirrors a similar report from St. Petersburg earlier this year, where another pet owner was targeted by a nearly identical scheme. In that instance, the victim's bank flagged the suspicious transaction before the funds were transferred.

Experts say these scammers often "scrape" photos from social media lost-and-found posts and use AI tools to add fake injuries or medical equipment to the images.

Cosens advises anyone in a similar situation to remain vigilant despite the emotional toll of a lost pet.

 "The first biggest thing to avoid being scammed, I would say, is if they're not willing to meet you in person and get payment in person, they're scamming you," Cosens said.

The Source: This article was written using information from two different pet owners who were scammed this year, and reporting from FOX 13's Kailey Tracy.


Incident 1480: Purportedly AI-Recreated Clips from Beastie Boys' 'Sabotage' Video Reportedly Appeared in FBI Promotional Video Posted by Kash Patel

“Did FBI Director Kash Patel use AI to rip off the Beastie Boys?”
npr.org2026-05-06

A promotional video for the FBI posted by director Kash Patel appears to have used AI to generate short clips nearly identical to those in the Beastie Boys' iconic music video for their 1994 classic song "Sabotage."

Patel released the video on Monday on X, in a post about the FBI's effort to combat "massive fraud." The roughly two-minute video used the instrumental version of "Sabotage" and footage nearly identical to the original music video, interspersed with what appeared to be authentic footage of FBI agents conducting their work.

By Tuesday it had received roughly half a million views.

An analysis by NPR shows at least six clips in the FBI video were frame-by-frame recreations of shots in the iconic "Sabotage" music video, which was directed by Spike Jonze. The clips featured vehicles, people and buildings that were incredibly similar to the original video, but with small differences that would likely be generated by AI.

For example, in one shot where a car is spinning out, grilles are clearly visible in some of the windows in the original footage, but they are missing in the FBI version of the clip. Another shot shows an individual with a megaphone jumping from roof to roof with telephone lines in the background. The lines and the dirt on the building align identically with the 1994 video, which was filmed more than 30 years ago. In one frame, a telephone line appears to pass through the character's head: the sort of flaw that is common in AI video generation.

Representatives for Spike Jonze and the Beastie Boys did not respond to NPR's request for comment. The FBI also did not respond to NPR's request for more information about the video and how it was made.

Independent experts who reviewed the video for NPR agreed that the clips were likely generated by AI.

"It does seem like it would be highly likely to be AI," Kolina Koltai, a researcher with the online investigations group Bellingcat told NPR in an email. "You can even see some of the AI errors."

For example, Koltai says there's a moment of characteristic AI-generated artifacts that appear in the "No Fraud" license plate on the FBI car in the opening shot.

The clips were likely created by taking screenshots or short clips from the original "Sabotage" music video and feeding them into an image-to-video model, Hany Farid, a professor at the University of California, Berkeley who specializes in the analysis of digital images, wrote to NPR in an email. It's also possible that the AI model generated the video clips itself because the original music video was in its training data -- though Farid believes that's less likely.

In any event, Farid believes AI was involved: "The similarities are hard to explain otherwise," he wrote.

In President Trump's second term, members of his administration have enthusiastically co-opted popular music, movies and memes as a way of spreading their message, even when artists have protested.

Using AI has also been a common tactic. Notably, last October President Trump himself posted an AI-generated video of himself dumping brown fluid on protestors at a "No Kings" rally set to the song "Danger Zone" by Kenny Loggins. Loggins demanded the video be taken down. It remains up on Trump's Truth Social account.

In January, the White House posted an AI-doctored image of a protestor in Minneapolis who had been arrested by federal authorities without labeling that the image had been manipulated.

Patel, who was born on Long Island in 1980, would have been in junior high or high school when the Beastie Boys released "Sabotage."

Quick Add New Report URL

Submitted links are added to a review queue to be resolved to a new or existing incident record. Submissions that include full details are processed before bare URLs.
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – February, March, and April 2026

By Daniel Atherton

2026-05-05

Lisière de la forêt de Fontainebleau, Alfred Sisley, 1865 🗄 Trending in the AIID For this roundup, I'll be surveying the new incident IDs t...

The Database in Print

Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, Stanford AI Index, Rolling Stone, the Guardian, Harvard Business Review, Brasil em Folhas, Newsweek, and other outlets.

Arxiv LogoVenture Beat LogoWired LogoVice logoNewsweek logoTime logoBulletin of the Atomic Scientists logoStanford HAI logoRolling Stone logoThe Guardian logoHarvard Business Review logoBrasil em Folhas logo
Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 795
  • 🥈 Anonymous: 157
  • 🥉 Khoa Lam: 93

Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 901
  • 🥈 Anonymous: 243
  • 🥉 Khoa Lam: 230

Total Report Contributions
  • 🥇 Daniel Atherton: 3039
  • 🥈 Anonymous: 934
  • 🥉 Khoa Lam: 417
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is architected around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.

Organization Founding Sponsor
Database Founding Sponsor
Sponsors and Grants
In-Kind Sponsors

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2026 - AI Incident Database

  • Terms of use
  • Privacy Policy