AI Incident Database

Report 673

Incident 39 · 29 Reports
Deepfake Obama Introduction of Deepfakes
AIs created our fake video dystopia but now they could help fix it
wired.co.uk · 2018

Rami Niemi

“President Trump is a complete and total dipshit.” So announced Barack Obama, in a video released on YouTube earlier this year. Uncharacteristic, certainly, but it appeared very real. It was, however, a falsified video made — by BuzzFeed and the actor and director Jordan Peele — with the help of artificial intelligence. A neat way of drawing attention to a rapidly maturing problem.

Deepfakes, as they have been dubbed, are the most recent — and perhaps most troubling — manifestation in the evolving arms race of digital disinformation. Images have long been doctored, and methods to fiddle with audio are improving, too. Until recently, manipulating and forging video has been painstaking, requiring expert skills and a great deal of patience. However, machine learning is increasingly facilitating and accelerating the process.

Late last year, a new breed of pornographic video began appearing on Reddit, courtesy of a user named deepfakes. Using machine learning, deepfakes had figured out how to swap out the faces of porn stars with those of celebrities. The videos caused a bit of a stir. The DeepFake algorithm was subsequently released on GitHub, giving anyone with sufficient knowhow and a decent enough computer the means to make pretty decent fakeries.


Since then, similarly falsified videos and related software have been popping up all over the internet. Some are relatively harmless. One tool — inspired by deepfakes’ original algorithm — has been used mostly to insert Nicolas Cage’s face into films he didn’t appear in. But there is clearly a malign potential. It’s easily conceivable that a well-faked video could heighten geopolitical tensions, spark unrest or intensify crime. Trust could be quickly eroded in institutions, media and even political systems. A viable concern is that technological evolution outpaces the development of appropriate government policies.

Thankfully, the scientific community is on the case. One team, led by Siwei Lyu at the University at Albany, State University of New York, has found a flaw in the fakery. The DeepFake algorithm creates videos out of images that it is fed. While suitably accurate, the AI fails to perfectly reproduce all physiological signals that humans naturally give off. Lyu and his team focused on one in particular: blinking. Humans typically blink spontaneously about once every two or three seconds. But as photos of people don't usually have their eyes closed, training the algorithm on these will mean people in the videos rarely blink either.
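The blink-rate flaw can be turned into a crude heuristic on its own: count blinks in a clip and flag it if the rate falls far below the human norm of one blink every two to three seconds. The sketch below assumes per-frame eye-open labels are already available; the threshold of one blink per six seconds is an illustrative choice, not a figure from Lyu's work.

```python
def count_blinks(eye_open, min_closed_frames=1):
    """Count blinks in a sequence of per-frame eye-open booleans.

    A blink is a maximal run of closed-eye frames at least
    `min_closed_frames` long.
    """
    blinks = 0
    run = 0
    for is_open in eye_open:
        if not is_open:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:  # clip may end mid-blink
        blinks += 1
    return blinks


def looks_fake(eye_open, fps=30, min_blinks_per_sec=1 / 6):
    """Flag a clip whose blink rate is far below the human norm.

    Humans blink roughly every 2-3 seconds; 1/6 per second is a
    loose lower bound chosen here purely for illustration.
    """
    duration_sec = len(eye_open) / fps
    return count_blinks(eye_open) / duration_sec < min_blinks_per_sec
```

A ten-second clip with no closed-eye frames at all would be flagged, while one with a handful of normal blinks would pass.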

So Lyu and his team designed an AI algorithm to detect where blinking was absent in faked videos. Their algorithm — a combination of two neural networks — first detects faces, then aligns all of the continuous images in the video, before analysing the eye regions in each. One part of the network decides whether the face has its eyes closed or not. The other serves as a memory system, remembering the decision from frame to frame, to determine if blinking has taken place over time.
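The two-part structure described above — a per-frame eye-state classifier feeding a temporal memory — can be sketched in miniature. This is not Lyu's actual network (which pairs a CNN with a recurrent memory); the `Frame` type, the toy classifier, and the sliding-window smoothing that stands in for the recurrent branch are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Frame:
    """Stand-in for an aligned, eye-region video frame."""
    eye_openness: float  # toy feature in [0, 1]; 1.0 = fully open


def toy_eye_classifier(frame: Frame) -> float:
    """Per-frame probability that the eyes are open.

    A stand-in for the CNN branch that decides, frame by frame,
    whether the face has its eyes closed.
    """
    return frame.eye_openness


def blink_detected(frames: Sequence[Frame],
                   classify: Callable[[Frame], float] = toy_eye_classifier,
                   closed_thresh: float = 0.3,
                   smooth: int = 3) -> bool:
    """Temporal branch: smooth per-frame scores over a short window
    (a crude stand-in for the paper's recurrent memory) and report
    whether any closed-eye event — a blink — occurs over time.
    """
    scores = [classify(f) for f in frames]
    for i in range(len(scores) - smooth + 1):
        window = scores[i:i + smooth]
        if sum(window) / smooth < closed_thresh:
            return True
    return False
```

A clip with eyes open in every frame yields no blink, so a detector built this way would treat its absence over a long enough clip as evidence of forgery.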

First, they trained the AI on a labelled dataset of images with open and closed eyes. To test it out, they generated their own set of DeepFake videos, and even did a little post-processing to smooth the forgeries further.

The results were impressive. According to Lyu, their AI identified all of the faked videos.


It’s not a huge challenge to add in blinking manually with post-processing, Lyu explains, and some faked videos — including the BuzzFeed forgery — do indeed contain blinking. Nevertheless, this kind of strategy will serve to frustrate and delay the process of creating faked videos, with this algorithm at least. “We are forming the first line of defense,” says Lyu. “In the long run, it’s really an ongoing battle between people making fake videos and people detecting them.”

This study fits into a broader endeavour. The research was sponsored by the Defense Advanced Research Projects Agency (DARPA), as part of its Media Forensics program, a project running from 2016 until 2020. The goal is to develop a set of tools to check the authenticity and veracity of digitally produced information, such as audio and video.

“We want to give the public assurance that there is technology out there that can fight back this wave of fake media, and fake news,” says Lyu.

For Lev Manovich, professor of computer science at the City University of New York, this is also an example of a growing trend of competition between AIs. “We know well that computational data analysis can often detect patterns that may be invisible to a human,” he explains, “but what about detecting patterns left by another AI? Will we see in the future the cultural 'war' between AI,
