AI Incident Database

Report 3629

Associated Incidents

Incident 634 · 30 Reports
Alleged Deepfake CFO Scam Reportedly Costs Multinational Engineering Firm Arup $25 Million

Deepfaked video conference call costs company $25M
cybernews.com · 2024

Fraudsters used deepfake technology to arrange a bogus video conference call and elaborately trick a finance worker at a multinational firm into paying out $25 million.

The name of the Hong Kong branch of this multinational company is not specified by the police. But, according to the South China Morning Post, an employee of the firm was fooled after seeing digitally recreated versions of the company's chief financial officer and others in a video call.

At a briefing, Hong Kong police said the unnamed male employee thought that the people on the conference call were real when in fact they were convincing digital replicas. Ultimately, he paid out approximately $25 million to the scammers.

At first, though, the worker was suspicious. When he received a message allegedly sent by the company's UK-based chief financial officer, he suspected it was a phishing attempt. But he cast aside his doubts after joining the conference call and recognizing his colleagues.

When the scammers then instructed the employee to transfer the money to five separate Hong Kong bank accounts, he made 15 transfers totalling $25 million. The scam was only discovered when the employee later checked with the company's head office, the SCMP said.

Deepfake technology was used to turn publicly available video and other footage of staff members into convincing meeting participants. The entire episode lasted a week.

The case is one of several recent episodes in which fraudsters are believed to have used deepfake technology to modify publicly available video and other footage to cheat people out of money. Hong Kong police said they had made six arrests in connection with such scams.

Most scammers use the technology for fraudulent loan applications and bank account registrations in one-on-one video calls, but this is the first case of its kind in Hong Kong, and the first to involve such a large sum of money.

This sort of fraud might only get worse. For instance, a deepfake of Jennifer Aniston promoting a "MacBook giveaway" recently went viral on YouTube, and the likeness of Taylor Swift was also used earlier this month to promote a fake Le Creuset cookware giveaway on Facebook and TikTok.

Fake sexually explicit images of Swift spread recently on social media as well, highlighting the growing problem of non-consensual deepfake pornography online.

A group of US senators have now introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, legislation that would "hold accountable those responsible for the proliferation of nonconsensual, sexually-explicit deepfake images and videos."

Creators of such images would be subject to civil lawsuits over digital forgery, with victims entitled to financial damages as relief.

