AI Incident Database

Report 1278

Associated Incidents

Incident 7127 Report
Google admits its self driving car got it wrong: Bus crash was caused by software

Google autonomous SUV involved in serious crash after a van runs red light in Mountain View
bizjournals.com · 2016

A Lexus SUV with Google’s self-driving technology was involved in a serious crash on Friday after a human-operated vehicle ran a red light in Mountain View.

Video obtained by 9to5Google shows the Google-owned vehicle crossing the intersection at El Camino Real and Phyllis Avenue after the light had turned green for a full six seconds. A van marked "Interstate Batteries" then ran the red light. The Lexus was in self-driving mode.

All airbags deployed in the Lexus and there were no reported injuries. Google employees monitoring the autonomous car were visibly shaken, according to witnesses at the scene. The Google car suffered severe body damage and broken windows on its right side. It was towed away on a flatbed truck.

“Thousands of crashes happen every day on U.S. roads and red-light running is the leading cause of urban crashes in the U.S.,” Google said in a statement, per 9to5Google. “Human error plays a role in 94 percent of these crashes, which is why we’re developing fully self-driving technology to make our roads safer.”

Reports of crashes involving autonomous vehicles are becoming more frequent as self-driving cars hit public roads. Most of the accidents, however, are the result of human error. In July, Google’s self-driving car project experienced its first injury accident in Mountain View. Three Google employees suffered whiplash when a Google self-driving SUV was rear-ended after coming to a stop at the intersection of Phyllis Avenue and Martens Avenue.

Google isn’t the only Silicon Valley company with self-driving safety concerns. Earlier this year, Tesla saw the first fatality involving one of its vehicles with the Autopilot feature engaged. Shortly after that accident, there was a non-fatal incident in China in which a Tesla with Autopilot engaged sideswiped a Volkswagen parked on a Beijing highway. The incident did not result in any injuries, and Tesla said the driver was not holding the steering wheel.

Last week, the Obama administration released its Federal Automated Vehicles policy, which detailed a 15-point Safety Assessment for self-driving car manufacturers. Under the new regulations, companies like Google and Tesla will be required to share vast amounts of data with federal regulators regarding the building and testing of self-driving cars.

The companies will have to provide details on how the cars operate, how they record data, crash information and how they guard against hacking. They will also need to provide answers on how a vehicle’s software will manage ethical situations. The government will publish the responses in an annual report.

