AI Incident Database

Report 1519

Tencent Keen Security Lab: Experimental Security Research of Tesla Autopilot
keenlab.tencent.com · 2019

Introduction

With the rise of Artificial Intelligence, technologies related to Advanced Driver Assistance Systems (ADAS) are developing rapidly in the vehicle industry. Meanwhile, the security and safety of ADAS have also received extensive attention.

As a world-leading security research team, Tencent Keen Security Lab has been conducting continuous research in this area. At the Black Hat USA 2018 security conference, Keen Lab presented the first-ever demonstration of remotely compromising the Autopilot[1] system on a Tesla Model S (the attack chain was fixed immediately after we reported it to Tesla)[2].

In subsequent security research on ADAS technologies, Keen Lab has focused on areas such as the security of the visual perception system's AI models and the architectural security of the Autopilot system. Through in-depth experimental research on Tesla Autopilot, we obtained the following three results.

Research Findings

Auto-wipers Vision Recognition Flaw

Tesla Autopilot can identify wet weather through image recognition and turn on the wipers when necessary. Our research shows that an adversarial example carefully crafted in the physical world can interfere with the system, causing it to return an incorrect result and turn on the wipers.
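The attack here is a standard adversarial-example perturbation. As a minimal sketch (not Keen Lab's actual method, which targets Tesla's real perception network), an FGSM-style step against a toy linear "rain" classifier shows why a small crafted input change can flip the decision — for a linear model, stepping each pixel in the sign of the score's gradient maximally inflates the "rain" logit:

```python
import numpy as np

# Toy stand-in for a rain classifier: a single linear "rain" logit.
# Weights and input are random illustrations, not Tesla's model.
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # weight vector of the "rain" logit
x = rng.normal(size=64)   # a benign (dry-weather) input, flattened

def rain_logit(img):
    return float(w @ img)

# FGSM-style perturbation: move each pixel by eps in the sign of the
# gradient of the "rain" score. For a linear model the gradient is w,
# so the logit increases by eps * sum(|w|) regardless of the input.
eps = 0.5
x_adv = x + eps * np.sign(w)

print(rain_logit(x), rain_logit(x_adv))
```

Against a deep network the gradient is computed by backpropagation rather than read off directly, but the principle — a bounded perturbation aligned with the gradient — is the same.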

Lane Recognition Flaw

Tesla Autopilot recognizes lanes and assists steering by identifying road traffic markings. Our research proved that by placing interference stickers on the road, we could cause the Autopilot system to capture this spurious information and make an abnormal judgement, steering the vehicle into the opposing lane.
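One way to see why a few small stickers can suffice: if a lane boundary is fitted by regression through detected marking points, a handful of spoofed detections far ahead can pull the fitted lane across the centerline. The sketch below is a plain least-squares line fit with invented coordinates, not Autopilot's actual perception pipeline:

```python
import numpy as np

# Toy lane model: fit a line x = a*y + b through detected lane-marking
# points, where y is distance ahead (m) and x is lateral offset (m).
y = np.arange(0, 10, dtype=float)
x_true = np.full_like(y, 1.5)   # genuine markings: straight lane at x = 1.5 m

def fit_lane(ys, xs):
    a, b = np.polyfit(ys, xs, 1)   # least-squares line fit
    return a, b

# Three spoofed "sticker" detections placed far ahead, offset leftward.
y_fake = np.array([11.0, 12.0, 13.0])
x_fake = np.array([0.2, -0.1, -0.4])

a0, b0 = fit_lane(y, x_true)
a1, b1 = fit_lane(np.concatenate([y, y_fake]), np.concatenate([x_true, x_fake]))

# Predicted lane position 20 m ahead, clean vs. spoofed:
print(a0 * 20 + b0, a1 * 20 + b1)
```

With only the genuine markings the fitted lane stays at 1.5 m; with three spoofed points the extrapolated lane position 20 m ahead crosses to the other side of the vehicle, which is the same failure shape as the stickers steering the car into the reverse lane.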

Control Steering System with a Gamepad

After compromising the Autopilot system on the Tesla Model S (ver. 2018.6.1), Keen Lab further proved that the steering system can be controlled through the Autopilot system with a wireless gamepad, even when Autopilot has not been activated by the driver.
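In this attack class, the compromised Autopilot ECU simply translates gamepad input into the control frames the steering ECU already accepts. A hedged sketch of the frame-encoding step — the arbitration ID, byte layout, and scaling below are entirely invented for illustration, since the article does not disclose Tesla's real message format:

```python
import struct

# Hypothetical steering-command frame: signed 16-bit angle in 0.1-degree
# units, a rolling counter byte, and 5 bytes of padding (8-byte payload).
STEER_CMD_ID = 0x488   # made-up arbitration ID

def encode_steer_frame(angle_deg: float, counter: int) -> bytes:
    raw = int(round(angle_deg * 10))
    return struct.pack(">hB5x", raw, counter & 0xFF)

def decode_steer_frame(payload: bytes) -> float:
    raw, _counter = struct.unpack(">hB5x", payload)
    return raw / 10.0

frame = encode_steer_frame(-12.5, 3)
print(len(frame), decode_steer_frame(frame))
```

The security point is that once an attacker can emit well-formed frames from inside the Autopilot ECU, the receiving ECU has no way to distinguish gamepad-originated commands from legitimate ones.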

Research Demonstration

Please see the demonstration in our research video below, or click here to watch the video.

Technical Research Paper

For more technical details of our research, please refer to the following link: Experimental Security Research of Tesla Autopilot.pdf

Feedback from Tesla

Tesla’s feedback on Autowipers:

“This research was demonstrated by displaying an image on a TV that was placed directly in front of the windshield of a car. This is not a real-world situation that drivers would face, nor is it a safety or security issue. Additionally, as we state in our Owner’s Manual, the ‘Auto setting [for our windshield wipers] is currently in BETA.’ A customer can also elect to use the manual windshield wiper setting at any time.”

Tesla’s feedback on Lane Recognition:

“In this demonstration the researchers adjusted the physical environment (e.g. placing tape on the road or altering lane lines) around the vehicle to make the car behave differently when Autopilot is in use. This is not a real-world concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should be prepared to do so at all times.”

Tesla’s feedback for the “Control Steering System with a Gamepad” Research:

“The primary vulnerability addressed in this report was fixed by Tesla through a robust security update in 2017, followed by another comprehensive security update in 2018, both of which we released before this group reported this research to us. In the many years that we have had cars on the road, we have never seen a single customer ever affected by any of the research in this report.”

[1] https://www.tesla.com/autopilot

[2] https://www.blackhat.com/us-18/briefings/schedule/#over-the-air-how-we-remotely-compromised-the-gateway-bcm-and-autopilot-ecus-of-tesla-cars-10806

[3] https://keenlab.tencent.com/zh/2017/04/01/remote-attack-on-mi-ninebot/

[4] https://keenlab.tencent.com/en/2016/09/19/Keen-Security-Lab-of-Tencent-Car-Hacking-Research-Remote-Attack-to-Tesla-Cars/

[5] https://keenlab.tencent.com/en/2017/07/27/New-Car-Hacking-Research-2017-Remote-Attack-Tesla-Motors-Again/

[6] https://keenlab.tencent.com/zh/2018/05/22/New-CarHacking-Research-by-KeenLab-Experimental-Security-Assessment-of-BMW-Cars/

