AI Incident Database

Report 2848

Associated Incidents

Incident 492 · 7 Reports
Canadian Parents Tricked out of Thousands Using Their Son's AI Voice

Losing thousands of dollars because AI fakes the voice of a loved one
vnexpress.net · 2023

The parents of Benjamin Perkin (Canada) received a call in what sounded like their son's voice, actually an AI-generated fake, saying he was in custody and urgently needed $15,000.

The nightmare for Perkin's family began when a man claiming to be a lawyer called his parents to say that Perkin, 39, had caused a car crash that killed an American diplomat. The caller said Perkin was in jail and needed money for legal fees.

To make the story convincing, the caller put "Perkin" on the line, in reality an AI voice-cloning system. The fake "Perkin" said he really needed the money and could trust only his parents. "The voice was close enough that my parents believed it was me," Perkin told the Washington Post.

A few hours later, the "lawyer" pressed the parents to transfer the money, so they went to their bank, withdrew $15,449, and sent it through a Bitcoin conversion service. They said the call had felt "out of the ordinary," but they went ahead because they believed they had spoken with their son. That night, when the real Perkin called, the truth came out.

Perkin suspects that videos he posted on YouTube supplied the audio the scammers used to train the AI. "The money's gone. There's no insurance. It's impossible to get it back," he said.

Similarly, one morning Ruth Card, 73, of Regina, Canada, received a call from a stranger, who said her grandson Brandon was in custody, had no phone to reach her with, and needed money to be released.

"All that flashed through my mind at that moment was that I had to help him immediately," she told the Washington Post.

She and her husband went to the bank and withdrew $2,207, the daily maximum. They were heading to a second branch to withdraw the same amount when, fortunately for them, the bank manager called them into his office: another customer had received a similar call, also featuring a relative's voice faked with uncanny accuracy. The couple phoned their grandson and confirmed he had not been arrested.

"We got swept up in the story without checking it. At the time, I was sure I was talking to Brandon," Card said.

Technology-enabled scams have surged in recent years, but the stories of Card and Perkin point to a worrying new trend: criminals exploiting voice-cloning AI to defraud victims. The technology is becoming cheaper and easier to access, and the number of victims, mostly elderly people, is growing.

According to data from the US Federal Trade Commission (FTC), impersonation was the second most common form of fraud in the US in 2022, with more than 36,000 reports. Scammers typically impersonated friends or family members to deceive their victims. Phone-based impersonation alone accounted for more than 5,100 cases and over $11 million in losses.

AI's advances have benefited many fields, but they have also handed criminals a new tool. With only a few sentences of sampled audio, scammers can use AI to generate a copy of a person's voice. The clone can then "say" whatever it is told to, becoming a vehicle for fraud.

Experts say AI voice-cloning tools are proliferating while regulators struggle to keep up. Most victims also have little hope of identifying the culprits, since scammers operate from anywhere in the world, and the companies that build the AI are not yet held responsible for its abuse.

"It's terrifying. It's a kind of perfect storm, throwing victims into chaos," said Professor Hany Farid of the University of California, Berkeley. "Scammers push victims to react quickly, so they never calm down enough to think the problem through, especially when they hear that a loved one is in danger."

According to Farid, today's AI software is smart enough to analyze a person's voice. "With just a single 30-second recording from Facebook or TikTok, your voice can be cloned," he said.

ElevenLabs, the company behind VoiceLab, an AI tool that reproduces voices, has warned that ever more voice-imitation software is reaching the market, opening the door to abuse.

Meanwhile, Will Maxson of the FTC said that tracking down voice scammers is "particularly difficult" because they can be calling from anywhere in the world. His advice: if you receive a call from a stranger, or from a loved one asking for help, verify it by calling the person supposedly in trouble directly, and by calling other family members, before acting on the information.

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • e1b50cd