AI Incident Database

Report 2296

AI and deepfakes present new risks for internet relationships
nationaltribune.com.au · 2022

People looking for genuine relationships via the internet will need to become a lot more savvy about new technologies which expose them to romance fraud at an "entirely new level of risk", warns QUT internet fraud researcher Associate Professor Cassandra Cross.

Romance scams, which netted criminals $131 million from Australians in 2020 alone, are likely to become harder to detect as perpetrators move to using AI (artificial intelligence) and deepfake technology to deceive their victims.

With deepfake technology, offenders can not only use readily available images of another person and create a fake profile image, but they can also create an entirely synthetic image and corresponding profile without having to lift one from another person's social media or the internet.

Professor Cross, from QUT School of Justice, said the public had become aware of romance scams and had been advised to use detection methods such as a reverse image search on images sent to them in order to determine if the sender was who they claimed to be.

"This means fraud prevention campaigns will need to be revised to make people aware of this sophisticated and not easily detected method of deception," Professor Cross said.

"AI and deepfakes render reverse image searches virtually obsolete. There is a critical need to improve the technological response to AI and the fraud potential of deepfakes as there are limited means of detection available.

Professor Cross said many media reports had already documented the use of AI and deepfakes to deceive romance fraud victims.

"For example, a US woman sent multiple sums of money to a man she believed was a US Navy official. In this case, the offender manipulated publicly available images and recordings of a US Naval officer to create deepfake videos of the genuine person under a different identity," she said.

"As well as enabling offenders to lift images of living people and create realistic profiles, deepfake software can create an entirely believable but fictitious image and profile.

"The ability of technology to generate unique images for use in social media profiles has the potential to change how romance fraud and other deceptive online practices (such as catfishing) are perpetrated."

Professor Cross said deepfakes had evolved to encompass voice recordings and were a powerful tool when used with other fraudulent behaviours.

"For example, in business email compromise (BEC) fraud, offenders can use fake audio recordings to impersonate CEOs and other authoritative figures and target lower-level employees with urgent requests to transfer funds on their behalf.

"Clearly, prevention messaging needs to be targeted at the people who use dating platforms and social media platforms to form relationships and at the very least alert the public to the risks AI and deepfakes pose in these areas of their lives.
