AI Incident Database

Report 3803

Related Incidents

Incident 658 · 3 Reports
The Arizona Agenda Produced a Deepfake of Kari Lake Advocating for the Publication Without Her Consent

Deepfake video of Kari Lake highlights potential problem in election season
azfamily.com · 2024

PHOENIX (AZFamily) — We’re constantly hearing about advancements in artificial intelligence, which can lead to positive things like advancements in science and medicine.

At the same time, concerns about people using it for nefarious purposes are growing.

Right now, its impact on the election is a hot topic, heightened by the release of a deepfake video of Senate candidate Kari Lake, made specifically to show how realistic AI videos can be.

The video begins with this statement: “Subscribe to the Arizona Agenda for hard-hitting real news and a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake...”

A deepfake is a video manipulated by AI to look real.

“I think the Arizona Agenda, the media outlet that put that video into the public space, did everyone a favor,” said Stan Barnes, the President of Copper State Consulting Group.

“This really is a brave new world. We are entering into a very difficult election in 2024, maybe even more difficult by historic skepticism in our government--in election integrity and in candidates.”

The political consultant added that voters are not ready for what’s about to hit them.

“I fear for what it means, in terms of honest campaigns and informed voters. It’s going to be even more difficult to be an educated voter because everywhere you go, the first thing you’re going to say is ‘Is this real? Or is this not real?’” said Barnes.

Deepfakes are hard to spot, even for a trained person like Subbarao Kambhampati, a professor in the School of Computing and Augmented Intelligence at Arizona State University.

“That sort of brings up this issue of how can you even tell? And one of the interesting things is that in the end, you can’t tell. Yeah, that’s the reality,” said Kambhampati.

He explained that there are sometimes inconsistencies in the background that can serve as clues, which is why many AI-generated videos may have blurred backdrops like the one in Lake’s.

He also said it’s good to know a person’s mannerisms to help distinguish an authentic video from a fake. However, Kambhampati said it could become nearly impossible to tell the difference in a year or two.

“I tell people that the time for believing your eyes and ears is gone, essentially because AI can generate synthetic media that could be indistinguishable from real media,” he said.

Both Kambhampati and Barnes agree that third-party authentication will be needed to determine whether a video is real. They said one source voters and candidates will likely turn to is the news media, such as print, radio, and TV.
