AI Incident Database

Report 645

Associated Incidents

Incident 425 Report
Uber AV Killed Pedestrian in Arizona

Self-Driving Uber Investigation Reveals Handoff Problem
citylab.com · 2018

An interior view of operator Rafaela Vasquez moments before an Uber SUV hit a woman in Tempe, Arizona, in March 2018. Tempe Police Department/AP

The preliminary findings into a fatal crash in Tempe by the National Transportation Safety Board highlight the serious “handoff problem” in vehicle automation.

The first rule of safe flying: Pay attention, even when you think you don’t need to. According to a 1994 review by the National Transportation Safety Board, 31 of the 37 serious accidents that occurred on U.S. air carriers between 1978 and 1990 involved “inadequate monitoring.” Pilots, officers, and other crew members neglected to crosscheck instruments, confirm inputs, or speak up when they caught an error. Over the period of that study, aviation had moved into the automation era, as Maria Konnikova reported for The New Yorker in 2014. Cockpit controls that once required constant vigilance now maintained themselves, asking for human intervention only on an as-needed basis. The idea was to reduce the margin of error via the precision of machines, and that was the effect, in some respects. But as planes increasingly flew themselves, pilots became more complacent. The computers had introduced a new problem: the hazardous expectation that a human operator can take control of an automated machine in the moments before disaster, when their attention isn’t otherwise much required.

Decades later, a new NTSB report is fingering the same “handoff problem”—this time in the context of a self-driving Uber car. On Thursday, the NTSB released its preliminary findings from the federal investigation into a fatal crash by a self-driving Uber vehicle in Tempe, Arizona, on the night of March 18. The report found that sensors on the Volvo XC-90 SUV had detected 49-year-old Elaine Herzberg about six seconds before the vehicle hit her as she crossed an otherwise empty seven-lane road. But the vehicle, which was driving in autonomous mode with a backup operator behind the wheel, did not stop. Its factory-equipped automatic emergency braking system had been disabled, investigators found. Uber also turned off its own emergency braking function while the self-driving system was on, in order “to reduce the potential for erratic behavior,” according to the report.

Video footage showed the backup driver looking down immediately before the car hit. In an interview with the NTSB, the operator, Rafaela Vasquez, said that she had been monitoring the “self-driving interface,” not her smartphone, as earlier reports had speculated. In the absence of either automated emergency braking system, the company expected the backup driver to intervene at a moment’s notice to prevent a crash. But in this case, the human operator braked only after the collision. Herzberg was killed.

In my March investigation into Uber’s autonomous vehicle testing program, three former employees who worked as backup operators described an arduous work environment that led to exhaustion, boredom, and a false sense of security in the self-driving system. Prior to the Tempe crash, Uber drivers in Tempe, Phoenix, Pittsburgh, and San Francisco worked 8- to 10-hour shifts driving repetitive “loops” with few breaks. They weren’t driving—the car was—while the operators were expected to keep their eyes on the road and hands hovering over the wheel. There was a strict no-cellphone policy.

“It was easy to get complacent with the system when you’re there for so long.”

Towards the end of 2017, as Uber ramped up its ambition to accumulate testing miles, the AV development unit switched from a policy of having two backup operators in the car at all times to only one. Solo operators weren’t supposed to touch the computer interface, which showed the car’s LiDAR view and allowed them to make notes, without stopping the car first. But sometimes it was hard not to, said Ryan Kelley, a former operator who worked in Pittsburgh from 2017 to 2018. “It was nice to look at so you could see if the car was seeing what you were and if it was going to stop,” he told me via text. Moreover, without a second person to stay alert, and without their additional eyes on the road, “it was easy to get complacent with the system when you’re there for so long,” said Ian Bennett, a former Uber backup operator who also worked in Pittsburgh from 2016 to 2017. Especially as the car’s performance improved: “When nothing crazy happens with the car for months, it’s hard not to get used to it and to stay 100-percent vigilant.”

In March, I spoke with Bennett, Kelley, and one other anonymous former backup operator based in Tempe. They all agreed that the fatality could have been avoided had there been greater consideration of these human factors. Missy Cummings, the director of Duke University’s Humans and Autonomy Laboratory and a former U.S. Navy fighter pilot, has devoted her career to understanding this very dynamic. The results of the NTSB’s findings point to a stark lack of car-to-human communication inside the vehicle, Cummings said.
