AI Incident Database

Report 3060

Related Incidents

Incident 543 · 17 Reports
Purported AI-Generated Image of Explosion Near Pentagon Reportedly Triggers Brief Market Dip and Public Confusion

AI 'Pentagon blast' image shows we are not prepared for deepfakes, warns expert
inews.co.uk · 2023

Media, technology and financial industries are not prepared for an onslaught of highly convincing fake images which could crash stock markets and spark political unrest, an artificial intelligence expert says.

An AI-generated image of an explosion next to what was claimed to be a building in the Pentagon was shared online on Monday, after being released on Twitter by a user posing as news organisation Bloomberg.

As it circulated, the S&P 500 stock index fell by about 0.3 per cent, to a session low, before rebounding when the image was confirmed fake. RT, the Russian state-controlled TV news channel, tweeted the fake image, then deleted the tweet shortly afterwards.

Henry Ajder, an expert on deepfake images, told i the image was the "first really significant one to essentially spook the markets".

Even something that was discovered to be false within a short timeframe could have a damaging impact. In the hands of bad actors, these images could be used to crash the market or make the stock price soar for a favourable return, said Mr Ajder, who presents BBC Radio 4's The Future Will Be Synthesised.

"Moving forward, it's just not going to be possible for even well-trained eyes in a lot of cases, from the image alone, to spot a fake."

The photo was posted by a user under the name of "Bloomberg Feed" -- which had a blue tick next to it, meaning the creators paid the required $8 per month to Twitter for a "verified" account.

The real Bloomberg news account actually has a gold tick beside it, which means it is the official account of a business or other private organisation. However, judging whether an account is genuine would require the public to know what a gold tick is -- calling into question Twitter's current verification system, Mr Ajder said.

An explosion at the Pentagon would be easy to quickly verify on the ground, but that might not be the case for an explosion in a village in Ukraine, he added. The rapid creation and increasing sophistication of such images posed a challenge both for those looking for news and for journalists seeking to verify events in a fast-paced environment.

"In a world where you can generate any kind of content in a realistic fashion with AI at scale and within minutes, you could create 200 different images of this fire from different perspectives," said Mr Ajder. "I think it really speaks to how our information infrastructure and media infrastructure is not prepared for this AI-generated content."

Last week, OpenAI chief executive Sam Altman testified before US Congress, agreeing with the need for AI regulation and saying, "I think if this technology goes wrong, it can go quite wrong," The New York Times reported.

Mr Ajder said the current approach to AI risks was not fair, as it is focused on how ordinary people can spot fakes, rather than on demanding regulation from technology companies or spending more money on content moderation. News organisations and technology companies weren't "prepared" for what might come, he added.

"It's a perfect storm where traditional news media takes time to verify, the algorithmic nature of platforms rewards controversial and sensational content, and the platforms don't have much in the way of strong content moderation to filter out these fakes," he said.
