AI Incident Database

Report 2788

Explained: How AI image, voice generators can be a global security threat
gadgetsnow.com · 2023

Artificial Intelligence (AI) has been the talk of the town since OpenAI launched its ChatGPT chatbot. The trend picked up pace when Microsoft announced the 'new' Bing with chatbot capabilities. The last two months have seen several reports on AI models that can convert text input into images and imitate voices. While this may seem 'cool', a closer look at the technology suggests that it can pose a global security threat. Here we discuss some of the ways people may fall into hackers' traps as AI becomes more sophisticated.

Catfishing

Several text-to-image AI generators are available online. These can convert words into an image in seconds, and if your description is detailed enough, you can get vibrant art that may be used to fool people, or for catfishing.

Catfishing is a deceptive technique in which a fictional or fake social identity is created to target not-so-tech-savvy victims. It may be used for financial gain or to compromise a victim's privacy. AI image generators include Stable Diffusion, which Qualcomm has demonstrated generating images from text input on a phone while offline, as well as DALL-E, DeepAI and Fotor.

Financial scams

As per a BBC report, hackers used AI-generated images of the earthquakes in Turkey and Syria to trick people into donating to a 'fake' cause. Scammers channelled donations into their own accounts and wallets by posting links alongside fake photos of the devastation on social media. The report found that threat actors also used catchy captions such as "Let's help Turkey", "Pray for Turkey" and "Donate for earthquake victims" on their posts.

Disinformation by Deepfakes

Deepfakes have posed dangers for some time now, and as the technology has grown more sophisticated over the years, the risks have only grown with it. A deepfake is synthetic media in which a person in an existing image or video is replaced with someone else's likeness, such as that of a politician, an actor or a social activist.

In one infamous incident, a fake video depicting Ukrainian President Volodymyr Zelenskyy circulated on social media. In it, a deepfake of Zelenskyy was seen asking Ukrainians to surrender during the Russian invasion. There is also a deepfake video of Morgan Freeman on YouTube.

AI voice tools

Earlier this year, Microsoft unveiled VALL-E, an AI tool that can replicate a person's voice from a three-second sample. While it may have several positive use cases, it also poses a serious threat: scammers can use it to make spam calls sound real, and politicians or people with a large social media presence can be impersonated. It could also be used to defeat systems that rely on voice-based authentication.

Regulation needed

The use of such AI tools is limited for now, but given the implications, regulation may be needed to reduce these risks. Users are also advised to tread carefully when interacting with social media posts, as they could be scams disguised as social causes.
