AI Incident Database

Report 3099

Associated Incidents

Incident 543 · 17 Reports
Purported AI-Generated Image of Explosion Near Pentagon Reportedly Triggers Brief Market Dip and Public Confusion

Twitter to expand crowd-sourced fact-checking tool after Pentagon explosion deep fake
siliconangle.com · 2023

After a fake image of the Pentagon on fire went viral recently, Twitter Inc. has decided that the platform needs to better arm itself against such deep fake skullduggery.

The artificial intelligence-generated image was seen, and apparently believed, by enough people to briefly upset the stock market. Some news outlets reported on the incident and later apologized when it became clear there had been no explosion at the Pentagon.

The whole debacle was a testament to how realistically AI can fabricate events and how such fabrications can, even if only briefly, rock the world. There has rightly been a lot of talk in recent years about the danger of deepfake technology in the age of information warfare. Recently, an image of the Pope dressed like a rap star went viral; while harmless, it was so well done that one can only imagine the chaos this technology could cause now and in the near future.

Twitter announced today that it’s adding a new feature that may help. There’s little chance such images will never appear on the platform again, but Twitter said one way of dealing with deepfakes is to expand its Community Notes tool. Until now, Community Notes has let members of the public – with no input from Twitter – add context to misleading tweets. For regular Twitter users, the feature has been very useful, considering all the inflammatory tweets that omit context. Twitter will now apply this bullshit detector to images.

“From AI-generated images to manipulated videos, it’s common to come across misleading media,” the company explained. “Today we’re piloting a feature that puts a superpower into contributors’ hands: Notes on Media.”

When users believe an image is potentially misleading, they will be able to click an “About the image” tab and write additional information specifically related to the image. That context will then appear below matching images shared across the platform.

“It’s currently intended to err on the side of precision when matching images, which means it likely won’t match every image that looks like a match to you,” the company explained. “We will work to tune this to expand coverage while avoiding erroneous matches.” It also said that the same feature will soon be available for tweets with multiple images and for videos. There’s little doubt the image button is going to get worn out very quickly.
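Twitter has not published how Notes on Media actually matches images, but the precision-versus-coverage trade-off it describes is the classic tuning knob in perceptual image hashing. As a purely illustrative sketch (assumed technique, not Twitter's), a difference hash compares adjacent pixels, and a small Hamming-distance threshold errs on the side of precision:

```python
# Toy sketch of precision-leaning image matching via difference hashing
# (dHash). Illustrative only: Twitter has not disclosed its actual method.

def dhash(pixels):
    """Hash a grayscale image (list of equal-length rows) by comparing
    horizontally adjacent pixels: bit is 1 if left < right, else 0."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return tuple(bits)

def hamming(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_match(h1, h2, max_distance=2):
    # A small threshold errs on the side of precision: near-identical
    # images match, while visually similar-but-different ones do not.
    return hamming(h1, h2) <= max_distance

img = [[10, 20, 30], [30, 20, 10]]
slightly_edited = [[10, 21, 30], [30, 20, 11]]   # minor pixel noise
different = [[30, 20, 10], [10, 20, 30]]          # gradients reversed

print(is_match(dhash(img), dhash(slightly_edited)))  # True
print(is_match(dhash(img), dhash(different)))        # False
```

Raising `max_distance` would expand coverage to more heavily edited copies at the cost of erroneous matches, which is exactly the tuning Twitter says it will work on.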

Read the Source

2024 - AI Incident Database

  • 利用規約
  • プライバシーポリシー
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • e1b50cd