AI Incident Database

Incident 492: Canadian Parents Tricked out of Thousands Using Their Son's AI Voice

Summary: Two Canadian residents were scammed by an anonymous caller who used AI voice synthesis to replicate their son's voice asking them for legal fees, while posing as his lawyer.


Entities

Alleged: An AI system developed and deployed by an unknown party harmed Ben Perkin's parents and the Perkins family.

Incident Status

Incident ID
492
Report Count
7
Incident Date
2023-01-11
Editors
Khoa Lam
Applied Taxonomies
MIT

MIT Taxonomy Classes

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI

4.3. Fraud, scams, and targeted manipulation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Malicious Actors & Misuse

Entity

Which, if any, entity is presented as the main cause of the risk

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal

Intentional

Incident Reports

Report Timeline

TikTok: @benno56
tiktok.com · 2023

#chatgpt #ai #scam #scammers #voiceai #protect #prevention #awareness #education ai and scammers.

They thought loved ones were calling for help. It was an AI scam.
washingtonpost.com · 2023

The man calling Ruth Card sounded just like her grandson Brandon. So when he said he was in jail, with no wallet or cellphone, and needed cash for bail, Card scrambled to do whatever she could to help.

"It was definitely this feeling of ...…

Scammers are using voice-cloning A.I. tools to sound like victims’ relatives in desperate need of financial help. It’s working.
fortune.com · 2023

You may very well get a call in the near future from a relative in dire need of help, asking you to send them money quickly. And you might be convinced it’s them because, well, you know their voice. 

Artificial intelligence changes that. Ne…

A couple in Canada were reportedly scammed out of $21,000 after getting a call from an AI-generated voice pretending to be their son
businessinsider.com · 2023

A couple in Canada were reportedly scammed out of $21,000 after they received a call from someone claiming to be a lawyer who said their son was in jail for killing a diplomat in a car accident.

Benjamin Perkin told The Washington Post the …

Scammers are using AI voices to steal millions by impersonating loved ones
androidauthority.com · 2023
  • AI voice-generating software is allowing scammers to mimic the voice of loved ones.
  • These impersonations have led to people being scammed out of $11 million over the phone in 2022.
  • The elderly make up a majority of those who are targeted.
  • …
Losing Thousands of Dollars to an AI Impersonating a Relative's Voice (translated from Vietnamese)
vnexpress.net · 2023

The parents of Benjamin Perkin (Canada) received a call from their son, actually an AI-cloned voice, saying he had been arrested and urgently needed $15,000.

The nightmare for the family of Perkin, 39, began when a person claiming to be a lawyer called his pa…

Scammers Use Voice Cloning AI to Trick Grandma Into Thinking Grandkid Is in Jail
futurism.com · 2023

Bail Out

Ruthless scammers are always looking for the next big con, and they might've found it: using AI to imitate your loved ones over the phone.

When a 73-year-old Ruth Card heard what she thought was the voice of her grandson Brandon on…

Variants

A "variant" is an incident that shares the same causative factors as an existing AI incident, produces similar harms, and involves the same intelligent system. Rather than indexing variants as entirely separate incidents, variants are listed under the first similar incident submitted to the database. Unlike other submission types in the incident database, variants are not required to have reporting in evidence external to the incident database. Learn more from this research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

Detroit Police Wrongfully Arrested Black Man Due To Faulty FRT

Wrongfully Accused by an Algorithm

Jan 2020 · 11 reports
Security Robot Rolls Over Child in Mall

Crime-fighting robot hits, rolls over child at Silicon Valley mall

Jul 2016 · 27 reports
Skating Rink’s Facial Recognition Cameras Misidentified Black Teenager as Banned Troublemaker

Teen turned away from roller rink after AI wrongly identifies her as banned troublemaker

Jul 2021 · 3 reports

2024 - AI Incident Database