Citation record for Incident 4

Suggested citation format

Olsson, Catherine. (2018-03-01) Incident Number 4. In McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 4
Report Count: 23
Incident Date: 2018-03-01

Incident Reports

A self-driving Uber in Tempe, Arizona, struck and killed a woman crossing the street yesterday (March 18), the New York Times reports.

The car was in autonomous mode, but had an Uber safety driver in the driver’s seat. The woman had walked into the street outside of a crosswalk and was hit by the car. She later died from her injuries. It’s believed that this is the first time an autonomous car has killed a pedestrian.

In the wake of the crash, Uber has now suspended all its self-driving car tests in the city, as well as in the Bay Area, Pittsburgh, and Toronto, according to The Wall Street Journal (paywall).

Uber is cooperating with the Tempe police investigation, the company told Quartz.

Uber’s self-driving cars have been involved in multiple fender-benders and traffic errors since they were first introduced to the streets of Pittsburgh in late 2016. Almost exactly a year ago, one of its cars flipped over in Tempe while in autonomous mode.

Arizona has experienced a surge of pedestrian fatalities recently, with more than 10 in a single week of March in Phoenix alone. The state has the highest rate of pedestrian fatalities in the United States....

A self-driving Uber just killed a pedestrian

SAN FRANCISCO — Arizona officials saw opportunity when Uber and other companies began testing driverless cars a few years ago. Promising to keep oversight light, they invited the companies to test their robotic vehicles on the state’s roads.

Then on Sunday night, an autonomous car operated by Uber — and with an emergency backup driver behind the wheel — struck and killed a woman on a street in Tempe, Ariz. It was believed to be the first pedestrian death associated with self-driving technology. The company quickly suspended testing in Tempe as well as in Pittsburgh, San Francisco and Toronto.

The death of a woman who was struck by an autonomous car operated by Uber is believed to be the first pedestrian fatality associated with self-driving technology. Newly released video offers clues about what happened.

The accident was a reminder that self-driving technology is still in the experimental stage, and governments are still trying to figure out how to regulate it.

Uber, Waymo and a long list of tech companies and automakers have begun to expand testing of their self-driving vehicles in cities around the country. The companies say the cars will be safer than regular cars simply because they take easily distracted humans out of the driving equation. But the technology is still only about a decade old, and just now starting to experience the unpredictable situations that drivers can face.

It was not yet clear whether the crash in Arizona would lead other companies or state regulators to slow the rollout of self-driving vehicles on public roads.

Much of the testing of autonomous cars has taken place in a piecemeal regulatory environment. Some states, like Arizona, have taken a lenient approach to regulation. Arizona officials wanted to lure companies working on self-driving technology out of neighboring California, where regulators had been less receptive.

But regulators in California and elsewhere have become more accommodating lately. In April, California is expected to follow Arizona’s lead and allow companies to test cars without a person in the driver’s seat.

Federal policymakers have also considered a lighter touch. A Senate bill, if passed, would free autonomous-car makers from some existing safety standards and pre-empt states from creating their own vehicle safety laws. Similar legislation has been passed in the House. The Senate version has passed a committee vote but hasn’t reached a full floor vote.

“This tragic incident makes clear that autonomous vehicle technology has a long way to go before it is truly safe for the passengers, pedestrians, and drivers who share America’s roads,” said Senator Richard Blumenthal, Democrat of Connecticut.

The Uber car, a Volvo XC90 sport utility vehicle outfitted with the company’s sensing system, was in autonomous mode with a human safety driver at the wheel but carrying no passengers when it struck Elaine Herzberg, a 49-year-old woman, on Sunday around 10 p.m.

Sgt. Ronald Elcock, a Tempe police spokesman, said during a news conference that a preliminary investigation showed that the vehicle was moving around 40 miles per hour when it struck Ms. Herzberg, who was walking with her bicycle on the street. He said it did not appear as though the car had slowed down before impact and that the Uber safety driver had shown no signs of impairment. The weather was clear and dry.

Uber said it would work with the police.

“Our hearts go out to the victim’s family,” an Uber spokeswoman, Sarah Abboud, said in a statement. “We are fully cooperating with local authorities in their investigation of this incident.”

Tempe, with its dry weather and wide roads, was considered an ideal place to test autonomous vehicles. In 2015, Arizona officials declared the state a regulation-free zone in order to attract testing operations from companies like Uber, Waymo and Lyft.

“We needed our message to Uber, Lyft and other entrepreneurs in Silicon Valley to be that Arizona was open to new ideas,” Doug Ducey, Arizona’s governor, said in an interview in June 2017.

Using an executive order, Mr. Ducey opened the state to testing of autonomous vehicles that had safety drivers at the wheel, ready to take over in an emergency. He updated that mandate earlier this month to allow testing of unmanned self-driving cars, noting that a “business-friendly and low regulatory environment” had helped the state’s economy.

Even when an Uber self-driving car and another vehicle collided in Tempe in March 2017, city police and Mr. Ducey said that extra safety regulations weren’t necessary; the other driver was at fault, not the self-driving vehicle.

But on Monday, Mark Mitchell, Tempe’s mayor, called Uber’s decision to suspend autonomous vehicle testing a “responsible step” and cautioned people from drawing conclusions prematurely. Daniel Scarpinato, a spokesman for Mr. Ducey, said the updated order from the governor “provides enhanced enforcement measures and clarity on responsibility in these accidents.”

In California, where testing without a backup driver was just weeks away from being permitted, Jessica Gonzalez, a spokeswoman for the state Department of Motor Vehicles, said officials were in the process of gathering more information about the Tempe crash. Waymo, Lyft and Cruise, an autonomous vehicle company owned by General Motors, did not respond to requests for comment.

In a news release, the National Transportation Safety Board said it was sending a team of four investigators to examine “the vehicle’s interaction with the environment, other vehicles and vulnerable road users such as pedestrians and bicyclists.”

A self-driving Uber car at the scene of a fatal accident in Tempe, Ariz. Credit: ABC-15, via Associated Press

Since late last year, Waymo, the self-driving car unit of Google’s parent company Alphabet, has been using cars without a human in the driver’s seat to pick up and drop off passengers in Arizona.

Most testing of driverless cars occurs with a safety driver in the front seat who is available to take over if something goes wrong. It can be challenging, however, to take control of a fast-moving vehicle.

California requires companies to report the number of instances when human drivers are forced to take over for the autonomous vehicle, called “disengagements.”

Between December 2016 and November 2017, Waymo’s self-driving cars drove about 350,000 miles and human drivers retook the wheel 63 times — an average of about 5,600 miles between every disengagement. Uber has not been testing its self-driving cars long enough in California to be required to release its disengagement numbers.

Researchers working on autonomous technology have struggled with how to teach the systems to adjust for unpredictable human driving or behavior. Still, most researchers believe self-driving cars will ultimately be safer than their human counterparts.

In 2016, 37,461 people died in traffic-related accidents in the United States, according to the National Highway Traffic Safety Administration. That amounts to 1.18 fatalities per 100 million vehicle miles traveled in 2016.

Waymo, which has been testing autonomous vehicles on public roads since 2009 when it was Google’s self-driving car project, has said its cars have driven more than 5 million miles while Uber’s cars have covered 3 million miles.

In 2016, a man driving his Tesla using Autopilot, the car company’s self-driving feature, died on a state highway in Florida when his car crashed into a tractor-trailer that was crossing the road. Federal regulators later ruled there were no defects in the system to cause the accident.

But the crash in Tempe will draw attention among the general public to self-driving cars, said Michael Bennett, an associate research professor at Arizona State University who has been looking into how people respond to driverless cars and artificial intelligence.

“We’ve imagined an event like this as a huge inflection point for the technology and the companies advocating for it,” he said. “They’re going to have to do a lot to prove that the technology is safe.”...

Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam


A woman was struck and killed by one of Uber's self-driving cars in Tempe, Arizona, late Sunday night. This is the first known fatality from an autonomous vehicle accident on a public road.

At the time of the collision, 10 p.m., a vehicle operator was behind the wheel but the car was in autonomous mode, according to Tempe police. No passengers were in the car.

"The vehicle was traveling northbound just south of Curry Road when a female walking outside of the crosswalk crossed the road from west to east when she was struck by the Uber vehicle," the Tempe police said in a statement. "She was transported to a local area hospital where she passed away from her injuries."

The police said Uber is assisting in the still-active investigation. The ride-hailing company has also confirmed that it's temporarily halted its self-driving car operations in all other cities where it's been testing its vehicles, including Phoenix, Pittsburgh, San Francisco and Toronto.

"Our hearts go out to the victim's family," an Uber spokeswoman said in a statement. "We are fully cooperating with local authorities in their investigation of this incident."

Uber CEO Dara Khosrowshahi also tweeted his condolences on Monday morning:

"Some incredibly sad news out of Arizona," he said. "We're thinking of the victim's family as we work with local law enforcement to understand what happened."


Given how the woman suddenly emerged from the shadows and walked in front of the car, Tempe Police Chief Sylvia Moir said Uber may not have been at fault. Moir was able to look at the video feed from the cameras mounted in the car.

From viewing the videos, "it's very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway," Moir said in an interview with the San Francisco Chronicle.

Initially, the crash was thought to have been between a bicyclist and the Uber car, according to a report by TV station ABC15, which first reported the accident. But the woman was actually walking the bicycle, according to Tempe police.

Most companies working on self-driving cars tout the vehicles as a potentially safer alternative to human drivers. And, for the most part, testing of the technology has shown the cars to be safe. However, this isn't the first time an autonomous vehicle has been involved in a collision.

Google reported an accident with one of its self-driving cars in March 2016 and there have been at least three crashes involving Teslas in autopilot mode, one of which was fatal. Just last week, an Uber self-driving car in Pittsburgh was involved in a collision with another car. In that accident, no injuries were reported but both vehicles had serious damage.

First published March 19 at 9:07 a.m. PT.

Update, 10:36 a.m. PT: Adds that the victim was a pedestrian, not a bicyclist as initially reported.

Update, 11:07 a.m. PT: Adds comment from Tempe police, Uber spokeswoman and additional background information.

Update, 12:10 p.m. PT: Adds additional background information.

Update, 3:51 p.m. PT: Adds that the crash took place at 10 p.m. Sunday, not early Monday as was initially reported.

Update, March 20 at 7:45 a.m. PT: Adds comments from the Tempe, Arizona, police chief.


Uber self-driving car kills pedestrian in Arizona

Tempe police said car was in autonomous mode at the time of the crash and that the vehicle hit a woman who later died at a hospital

An autonomous Uber car killed a woman in the street in Arizona, police said, in what appears to be the first reported fatal crash involving a self-driving vehicle and a pedestrian in the US.

Tempe police said the self-driving car was in autonomous mode at the time of the crash and that the vehicle hit a woman, who was walking outside of the crosswalk and later died at a hospital. There was a vehicle operator inside the car at the time of the crash.

Uber said in a statement on Twitter: “Our hearts go out to the victim’s family. We are fully cooperating with local authorities in their investigation of this incident.” A spokesman declined to comment further on the crash.


The company said it was pausing its self-driving car operations in Phoenix, Pittsburgh, San Francisco and Toronto. Dara Khosrowshahi, Uber’s CEO, tweeted: “Some incredibly sad news out of Arizona. We’re thinking of the victim’s family as we work with local law enforcement to understand what happened.”

Uber has been testing its self-driving cars in numerous states and temporarily suspended its vehicles in Arizona last year after a crash involving one of its vehicles, a Volvo SUV. When the company first began testing its self-driving cars in California in 2016, the vehicles were caught running red lights, leading to a high-profile dispute between state regulators and the San Francisco-based corporation.

Police identified the victim as 49-year-old Elaine Herzberg and said she was walking outside of the crosswalk with a bicycle when she was hit at around 10pm on Sunday. Images from the scene showed a damaged bike. The 2017 Volvo SUV was traveling at roughly 40 miles an hour, and it did not appear that the car slowed down as it approached the woman, said Tempe sergeant Ronald Elcock.

Elcock said he had watched footage of the collision, which has not been released to the public. Police identified the operator of the car as Rafaela Vasquez, 44, and said she was cooperative and there were no signs of impairment.

A still image taken from video provided by ABC-15 at the scene, where a pedestrian with a bicycle was hit. Photograph: AP

The self-driving technology is supposed to detect pedestrians, cyclists and others and prevent crashes.

John M Simpson, privacy and technology project director with Consumer Watchdog, said the collision highlighted the need for tighter regulations of the nascent technology.

“The robot cars cannot accurately predict human behavior, and the real problem comes in the interaction between humans and the robot vehicles,” said Simpson, whose advocacy group called for a national moratorium on autonomous car testing in the wake of the deadly collision.

Simpson said he was unaware of any previous fatal crashes involving an autonomous vehicle and a pedestrian.

Tesla Motors was the first to disclose a death involving a self-driving car, in 2016, when the sensors of a Model S driving in autopilot mode failed to detect a large white 18-wheel truck and trailer crossing the highway. The car drove at full speed under the trailer, killing the 40-year-old driver behind the wheel.

Earlier this year, California regulators approved the testing of self-driving cars on public roads without human drivers inside to monitor them.

“The technology is not ready for it yet, and this just sadly proves it,” said Simpson.

In one recent incident, California police officers found a Tesla that was stopped in the middle of a five-lane highway and found a driver asleep behind the wheel. The man said the vehicle was in “autopilot”, which is Tesla’s semi-autonomous driver assist system, and he was arrested on suspicion of drunk driving.

An Uber self-driving car travels in Pittsburgh, Pennsylvania. Photograph: Angelo Merendino/AFP/Getty Images

In another recent case, a Tesla car rear-ended a fire truck on a freeway, with the driver again telling the authorities the car was in autopilot mode at the time of the collision.

Michael G Bennett, an Arizona State University associate research professor who studies autonomous cars, said the self-driving vehicles have become ubiquitous around campus and on the streets in Tempe. Often they have operators behind the wheels, but sometimes they are fully autonomous with no human inside.

The fatal collision could spark significant calls for reform and reflections within the industry, he said.

“It may be problematic for the industry, because one of their central arguments for the value of the technology is that it is superior to human drivers,” said Bennett, adding that autonomous cars should be able to detect pedestrians and avoid hitting them, even if they aren’t in crosswalks: “Every day, pedestrians in cities around the world step outside of ...

Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian


Sunday marked a turning point for self-driving cars. For the first time, a car in full autonomous mode struck and killed a pedestrian.

It happened at 10 p.m. in Tempe, Arizona, where ride-hailing company Uber had been picking up passengers in autonomous vehicles for more than a year.

Elaine Herzberg, 49, was walking her bicycle down a four-lane road and was starting to cross when the gray Volvo, operated by Uber, hit her at about 40 mph, according to local police. It's believed Herzberg was homeless. She was pronounced dead by the time she reached the hospital.

Although the car was driving itself, Uber vehicle operator Rafaela Vasquez, 44, was behind the wheel.

"Our investigation did not show at this time that there were significant signs of the vehicle slowing down," Tempe Police Sgt. Roland Elcock said in a press conference Monday. The police are "going to attempt to find who was at fault and how we can better be safe," he added.

The Tempe police on Wednesday released a video of the collision. It shows exterior footage of Herzberg crossing the road and interior footage of Vasquez at the moment of impact. It's graphic and difficult to watch.

"The video is disturbing and heartbreaking to watch, and our thoughts continue to be with Elaine's loved ones," an Uber spokeswoman said in an emailed statement Wednesday. "Our cars remain grounded, and we're assisting local, state and federal authorities in any way we can."

Almost every automaker in the world -- including General Motors, BMW, Ford, Tesla Motors and Toyota -- has plans to offer self-driving cars in the next few years. Several Silicon Valley giants such as Intel and Google have invested years in developing the technologies as well. The promise: greater safety, since such vehicles use software and sensors that let them "see" and react to their surroundings supposedly faster than humans can.



Earlier this month, Uber also began regular operation of self-driving trucks hauling cargo in Arizona, after it wrapped up its testing period. The trucks always have a safety operator who's a licensed truck driver.

For the most part, testing of the technology has shown the cars to be safe. But autonomous technology is still a work in progress. The vast majority of car tests haven't been done on public roads, and the cars are still learning how to drive. Sunday's fatality has some people questioning if the cars belong on public roads at all.

"This is their vehicle, they put it on the road, and they're responsible for making sure it's safe," said consumer attorney Neama Rahmani, who isn't involved with any of the legal proceedings regarding the accident. "Uber needs to not implement any of this until it's fully vetted and tested and 100 percent safe."

Safety first

This wasn't the first time a self-driving car has been involved in a collision.

Google reported an accident with one of its cars in March 2016, and there have been at least three crashes involving Teslas in autopilot mode, one of which was fatal -- though Tesla warns owners that autopilot isn't fully autonomous. One of Uber's vehicles was involved in another crash in Tempe last year, but no serious injuries were reported. And just last week, an Uber self-driving car in Pittsburgh was in a collision with another car.

Uber's cars have also had some other near misses in Pittsburgh, including fender-benders, driving the wrong way down one-way streets and ignoring traffic signals, according to Quartz. No injuries have been reported.

Within hours of Uber launching its autonomous vehicles in San Francisco in December 2016, one ran a red light. A dashboard camera video of the incident, captured by a Luxor Cab taxi, shows a self-driving Volvo SUV zooming through the light long after it turned red and coming uncomfortably close to a pedestrian.

"Sadly, this is a reminder of the risk that comes with rushing to get more and more self-driving cars on public roads before we know they are safe," said David Friedman, director of cars for Consumers Union, which is the advocacy division of Consumer Reports. "The promise of self-driving cars is that they can avoid fatalities, but this tragedy makes clear they have a long way to go."

What now?

After Sunday's fatality, some regulators are starting to question whether driverless cars are ready for public roads. Both the National Transportation Safety Board and the US Department of Transportation's National Highway Traffic Safety Administration confirmed they've sent teams to Tempe to investigate the collision alongside the local police department.

"The investigation will address the vehicle's interaction with the environment, other vehicles and vulnerable road users such as pedestrians and bicyclists," the National Transportation Safety Board said in a statement.

"The Tempe Police Department would like to reaffirm that fault has not been determined in this case," Tempe police Sgt. Ronald Elcock said.

The NH...

Uber's driverless-car safety comes under scrutiny after fatality


Uber has reached a settlement agreement with the family of the woman killed by an autonomous vehicle being tested by the ride-hailing company, according to Reuters.

Cristina Perez Hesano, an attorney with Bellah Perez, told the news agency that "the matter has been resolved." The terms of the agreement weren't revealed.

An Uber spokesperson declined to comment. Bellah Perez didn't immediately respond to a request for comment.

Uber's self-driving vehicle testing program has come under intense scrutiny since an accident in Tempe, Arizona, on March 18 resulted in the first fatality of a pedestrian from a car in full autonomous mode. A dashcam video of the incident released by Tempe police shows 49-year-old Elaine Herzberg walking her bike loaded with bags across a dark road.

The video, which stops at the moment of impact, also shows the vehicle operator, Rafaela Vasquez, sitting at the wheel constantly glancing down at her lap. She looks up just as the car collides with Herzberg, who was pronounced dead by the time she reached the hospital.

Arizona Gov. Doug Ducey said earlier this week he's suspending Uber's ability to test self-driving cars on the state's roads, saying he found video of the deadly collision "disturbing and alarming."

The Tempe police are working with Uber representatives, the National Transportation Safety Board and the US Department of Transportation's National Highway Traffic Safety Administration in their investigation to determine who, or what, was at fault for the accident.


Uber reportedly settles with family of victim in self-driving car crash


Uber has reportedly found that a software problem likely caused a fatal accident involving one of its self-driving cars in Tempe, Arizona, in March. That software is meant to determine how the car should react to detected objects, two people familiar with the matter told The Information.

Although the car's sensors reportedly detected the pedestrian, Uber's software determined that it didn't need to immediately react because of how it was tuned.

The software is supposed to ignore what are known as "false positives," or objects that wouldn't be an issue for the vehicle, like a plastic bag or piece of paper. Company executives told The Information that they believe the system was tuned in a way that made it react less to these objects -- meaning it reportedly didn't react fast enough when the pedestrian crossed the street.

During the collision, an operator was behind the wheel but the car was in autonomous mode. The operator was not looking at the road in the moments before the car hit 49-year-old Elaine Herzberg at around 40 mph. Uber settled with the victim's family later that month. This was the first known fatality from an autonomous vehicle accident on a public road.

For now, Uber has temporarily halted its self-driving operations in all cities where it's been testing its vehicles, including Tempe, Phoenix, Pittsburgh, San Francisco and Toronto.

An Uber spokeswoman said the company has initiated a top-to-bottom safety review of its autonomous vehicle program and hired the former chair of the US National Transportation Safety Board, Christopher Hart, to advise the company on its overall safety culture.

"Our review is looking at everything from the safety of our system to our training processes for vehicle operators," the spokeswoman said.

Meanwhile, the Tempe police are working with Uber representatives, the NTSB and the US Department of Transportation's National Highway Traffic Safety Administration in their investigation to determine who, or what, was at fault for the accident. Uber declined to say whether the tuned-down software was responsible for the crash.

"We're actively cooperating with the NTSB in their investigation," the Uber spokeswoman said. "Out of respect for that process and the trust we've built with NTSB, we can't comment on the specifics of the incident."

First published May 7 at 11:29 a.m. PT.

Update, 11:40 a.m. PT: Adds background on the crash....

Uber's fatal self-driving crash reportedly caused by software

The fatal crash that killed pedestrian Elaine Herzberg in Tempe, Arizona, in March occurred because of a software bug in Uber's self-driving car technology, The Information's Amir Efrati reported on Monday. According to two anonymous sources who talked to Efrati, Uber's sensors did, in fact, detect Herzberg as she crossed the street with her bicycle. Unfortunately, the software classified her as a "false positive" and decided it didn't need to stop for her.

Distinguishing between real objects and illusory ones is one of the most basic challenges of developing self-driving car software. Software needs to detect objects like cars, pedestrians, and large rocks in its path and stop or swerve to avoid them. However, there may be other objects—like a plastic bag in the road or a trash can on the sidewalk—that a car can safely ignore. Sensor anomalies may also cause software to detect apparent objects where no objects actually exist.

Software designers face a basic tradeoff here. If the software is programmed to be too cautious, the ride will be slow and jerky, as the car constantly slows down for objects that pose no threat to the car or aren't there at all. Tuning the software in the opposite direction will produce a smooth ride most of the time—but at the risk that the software will occasionally ignore a real object. According to Efrati, that's what happened in Tempe in March—and unfortunately the "real object" was a human being.

"There's a reason Uber would tune its system to be less cautious about objects around the car," Efrati wrote. "It is trying to develop a self-driving car that is comfortable to ride in."

"Uber had been racing to meet an end-of-year internal goal of allowing customers in the Phoenix area to ride in Uber’s autonomous Volvo vehicles with no safety driver sitting behind the wheel," Efrati added.

The more cautiously a car's software is programmed, the more often it will slam on its brakes unnecessarily. That will produce a safer ride but also one that's not as comfortable for passengers.

This provides some useful context for Efrati's March report that cars from Cruise, GM's self-driving car subsidiary, "frequently swerve and hesitate." He wrote that Cruise cars "sometimes slow down or stop if they see a bush on the side of a street or a lane-dividing pole, mistaking it for an object in their path."

You could read that as a sign that Cruise's software isn't very good. But you could also view it as a sign that Cruise's engineers are being appropriately cautious. It's obviously much better for software to produce a jerky, erratic ride than to provide a smooth ride that occasionally runs over a pedestrian. And such caution is especially warranted when you're testing in a busy urban environment like San Francisco.

Of course, the long-term goal is for self-driving cars to become so good at recognizing objects that false positives and false negatives both become rare. But Herzberg's death provides a tragic reminder that companies shouldn't get too far ahead of themselves. Getting fully self-driving cars on the road is a worthwhile goal. But making sure that's done safely is more important.

Uber declined to comment to The Information, citing confidentiality requirements related to an ongoing investigation by the National Transportation Safety Board. We've asked Uber for comment and will update if the company responds....

Report: Software bug led to death in Uber’s self-driving crash

SAN FRANCISCO -- Uber is pulling its self-driving cars out of Arizona. The ride-sharing company's reversal was triggered by the recent death of a woman who was run over by one of its robotic vehicles while crossing a darkened street in a Phoenix suburb.

The decision announced Wednesday means Uber won't be bringing back its self-driving cars to the streets of Arizona, eliminating the jobs of about 300 people who served as backup drivers and performed other jobs connected to the vehicles.

Uber had suspended testing of its self-driving vehicles in Arizona, Pittsburgh, San Francisco and Toronto while regulators investigated the cause of a March 18 crash that killed 49-year-old Elaine Herzberg in Tempe, Arizona. It marked the first death involving a fully autonomous vehicle, raising questions about the safety of computer-controlled cars being built by Uber and dozens of other companies, including Google spin-off Waymo.

Tempe police said, meanwhile, that they have completed a report into the fatal March crash. They said the in-depth traffic collision investigation was submitted Wednesday to the Maricopa County Attorney's Office for review. It's still considered an active investigation and police aren't releasing the report or any details of their investigation.


Uber still plans to build and test self-driving cars, which the San Francisco company considers to be critical to maintaining its early lead in the ride-hailing market. This comes as Waymo and other rivals prepare to enter the field with robotic vehicles that may be able to offer cheaper fares.

In a Wednesday statement, Uber said its self-driving cars will return to Pittsburgh this summer. The company said it is focusing its efforts to build self-driving cars in that city as well as in San Francisco, although it didn't make a commitment to bring its robotic vehicles back to the streets of California, where it no longer has a permit to operate them after allowing its license in that state to expire earlier this year.

About 550 Uber employees will remain in Arizona working on its other operations in the state, including its traditional ride-hailing service with cars driven by humans responding to requests made through a mobile app.

Uber brought a fleet of self-driving cars to Arizona at the end of 2016, just days after the vehicles were banned from California for not having the proper permits at that time.


California's action prompted Arizona Gov. Doug Ducey to send out a derisive tweet in an effort to persuade Uber to bring its self-driving cars to his state. "This is what OVER-regulation looks like!" Ducey wrote.

Ducey prohibited Uber from continuing its tests of self-driving cars after Herzberg was run over, a ban that a spokesman said Wednesday remains in effect.

"The governor's focus has always been on what's best for Arizonans and for public safety, not for any one company," said Ducey spokesman Daniel Scarpinato.

The fatal collision involving Uber's self-driving car added to the headaches vexing CEO Dara Khosrowshahi as he tries to repair the damage done by a regime led by his predecessor, Uber co-founder Travis Kalanick. The company is trying to recover from a wave of revelations and allegations about rampant sexual harassment in Uber's workforce, a cover-up of a massive data breach, dirty tricks and stolen trade secrets.

Khosrowshahi has promised he won't allow Uber's self-driving cars back on public roads again until he is convinced the vehicles are safe. That won't happen until Uber completes "a top-to-bottom safety review," according to a statement the company issued Wednesday. As part of that process, Uber hired Christopher Hart, a former chairman of the National Transportation Safety Board, to review its self-driving car program.

Meanwhile, Waymo is preparing to launch a ride-hailing service in Arizona that will pick up passengers in robotic cars that won't have humans to take control if the vehicle malfunctions. The service is supposed to begin before the end of this year....

Uber self-drives robo-cars out of Arizona after fatal crash

An interior view of operator Rafaela Vasquez moments before an Uber SUV hit a woman in Tempe, Arizona, in March 2018. Tempe Police Department/AP

The preliminary findings into a fatal crash in Tempe by the National Transportation Safety Board highlight the serious “handoff problem” in vehicle automation.

The first rule of safe flying: Pay attention, even when you think you don’t need to. According to a 1994 review by the National Transportation Safety Board, 31 of the 37 serious accidents that occurred on U.S. air carriers between 1978 and 1990 involved “inadequate monitoring.” Pilots, officers, and other crew members neglected to crosscheck instruments, confirm inputs, or speak up when they caught an error. Over the period of that study, aviation had moved into the automation era, as Maria Konnikova reported for The New Yorker in 2014. Cockpit controls that once required constant vigilance now maintained themselves, only asking for human intervention on an as-needed basis. The idea was to reduce the margin of error via the precision of machines, and that was the effect, in some respects. But as planes increasingly flew themselves, pilots became more complacent. The computers had introduced a new problem: the hazardous expectation that a human operator can take control of an automated machine in the moments before disaster when their attention isn’t otherwise much required.

Decades later, a new NTSB report is fingering the same “handoff problem”—this time in the context of a self-driving Uber car. On Thursday, the NTSB released its preliminary findings from the federal investigation into a fatal crash by a self-driving Uber vehicle in Tempe, Arizona, on the night of March 18. The report found that sensors on the Volvo XC-90 SUV had detected 49-year-old Elaine Herzberg about six seconds before the vehicle hit her as she crossed an otherwise empty seven-lane road. But the vehicle, which was driving in autonomous mode with a backup operator behind the wheel, did not stop. Its factory-equipped automatic emergency braking system had been disabled, investigators found. Uber also turned off its own emergency braking function while the self-driving system was on, in order “to reduce the potential for erratic behavior,” according to the report. Video footage showed the backup driver looking down immediately before the car hit. In an interview with the NTSB, the operator, Rafaela Vasquez, said that she had been monitoring the “self-driving interface,” not her smartphone, as earlier reports had speculated. In the absence of either automated emergency braking system, the company expected the backup driver to intervene at a moment’s notice to prevent a crash. But in this case, the human operator braked only after the collision. Herzberg was killed.

In my March investigation into Uber’s autonomous vehicle testing program, three former employees who worked as backup operators described an arduous work environment that led to exhaustion, boredom, and a false sense of security in the self-driving system. Prior to the Tempe crash, Uber drivers in Tempe, Phoenix, Pittsburgh, and San Francisco worked 8- to 10-hour shifts driving repetitive “loops” with few breaks. They weren’t driving—the car was, while the operators were expected to keep their eyes on the road and hands hovering over the wheel. There was a strict no-cellphone policy.

Towards the end of 2017, as Uber ramped up its ambition to accumulate testing miles, the AV development unit switched from a policy of having two backup operators in the car at all times to only one. Solo operators weren’t supposed to touch the computer interface, which showed the car’s LiDAR view and allowed them to make notes, without stopping the car first. But sometimes it was hard not to, said Ryan Kelley, a former operator who worked in Pittsburgh from 2017 to 2018. “It was nice to look at so you could see if the car was seeing what you were and if it was going to stop,” he told me via text.

Moreover, without a second person to stay alert and keep additional eyes on the road, “it was easy to get complacent with the system when you’re there for so long,” said Ian Bennett, a former Uber backup operator who also worked in Pittsburgh from 2016 to 2017. Especially as the car’s performance improved: “When nothing crazy happens with the car for months, it’s hard not to get used to it and to stay 100-percent vigilant.”

In March, I spoke with Bennett, Kelley, and one other anonymous former backup operator based in Tempe. They all agreed that the fatality could have been avoided had there been greater consideration of these human factors. Missy Cummings, the director of Duke University’s Humans and Autonomy Laboratory and a former U.S. Navy fighter pilot, has devoted her career to understanding this very dynamic. The results of the NTSB’s findings point to a stark lack of car-to-human communication inside the vehicle, Cummin...

Self-Driving Uber Investigation Reveals Handoff Problem

Back in March, an Uber self-driving car killed 49-year-old Elaine Herzberg in Tempe, Arizona, after failing to perform an emergency stop. After a US federal investigation, it is thought that the car did not stop because the system put in place to carry out emergency stops in dangerous situations was disabled.


So, how do self-driving cars actually work? Most self-driving cars have a GPS unit, a range of sensors such as radar, video cameras and laser rangefinders, as well as a navigation system. All of these features are there to help the car become aware of its surroundings in order to navigate without the need for human input.

Radar is one of the major ways a self-driving car maps its environment. Radar stands for Radio Detection And Ranging, which in simple terms is just the transmission of radio waves in a particular direction and the detection of the reflected waves, or echoes, from objects in the surrounding environment. The radio waves are produced by an antenna in which electrons are driven up and down by an alternating current. The waves are transmitted in pulses, and the time taken for a pulse to return indicates how far away the reflecting object is. Similarly, laser rangefinders work by measuring the time taken from when the laser beam is transmitted to when it is reflected back.

Data from the sensors, with the help of GPS and an inertial navigation system, allows the car to build a 3D map of its environment and make a decision for its next move via its control system. The algorithm that makes up the control system determines a long-range plan for the car’s next moves, in which all dangerous or impossible moves are eliminated and harmful objects are taken into account. Once the best path is decided, the moves from the control system are then fed into the actuators of the car, which control steering, throttle, and braking.

Regarding the Uber self-driving car, it was found that the emergency braking maneuvers were disabled whilst the car was being controlled by a computer, as stated by the National Transportation Safety Board. The sensors on the Volvo XC-90 SUV spotted the woman, but because of the disabled emergency braking features the car did not apply the brakes, and the human backup driver in the car was not warned and hence did not intervene in time. It was found that the car was traveling at 43 mph and needed to brake 1.3 seconds before impact.

Whilst regulators continue to investigate the cause of the crash, Uber has suspended tests of its self-driving cars in San Francisco, Toronto, Arizona, and Pittsburgh. A spokesperson for Arizona governor Doug Ducey says, "We await the more thorough and final investigative report. Uber's self-driving vehicle suspension remains in place."...

Uber Self-Driving Car Crash: What Really Happened

economist.com · 2018

THEY are one of the most talked-about topics in technology—but lately they have been for all the wrong reasons. A series of accidents involving self-driving cars has raised questions about the safety of these futuristic new vehicles, which are being tested on public roads in several American states. In March 2018 an experimental Uber vehicle, operating in autonomous mode, struck and killed a pedestrian in Tempe, Arizona—the first fatal accident of its kind. On May 24th America’s National Transportation Safety Board (NTSB) issued its preliminary report into the crash. What caused the accident, and what does it say about the safety of autonomous vehicles (AVs) more broadly?

The computer systems that drive cars consist of three modules. The first is the perception module, which takes information from the car’s sensors and identifies relevant objects nearby. The Uber car, a modified Volvo XC90, was equipped with cameras, radar and LIDAR (a variant of radar that uses invisible pulses of light). Cameras can spot features such as lane markings, road signs and traffic lights. Radar measures the velocity of nearby objects. LIDAR determines the shape of the car’s surroundings in fine detail, even in the dark. The readings from these sensors are combined to build a model of the world, and machine-learning systems then identify nearby cars, bicycles, pedestrians and so on. The second module is the prediction module, which forecasts how each of those objects will behave in the next few seconds. Will that car change lane? Will that pedestrian step into the road? Finally, the third module uses these predictions to determine how the vehicle should respond (the so-called “driving policy”): speed up, slow down, or steer left or right.


Of these three modules, the most difficult to build is the perception module, says Sebastian Thrun, a Stanford professor who used to lead Google’s autonomous-vehicle effort. The hardest things to identify, he says, are rarely-seen items such as debris on the road, or plastic bags blowing across a highway. In the early days of Google’s AV project, he recalls, “our perception module could not distinguish a plastic bag from a flying child.” According to the NTSB report, the Uber vehicle struggled to identify Elaine Herzberg as she wheeled her bicycle across a four-lane road. Although it was dark, the car’s radar and LIDAR detected her six seconds before the crash. But the perception system got confused: it classified her as an unknown object, then as a vehicle and finally as a bicycle, whose path it could not predict. Just 1.3 seconds before impact, the self-driving system realised that emergency braking was needed. But the car’s built-in emergency braking system had been disabled, to prevent conflict with the self-driving system; instead a human safety operator in the vehicle is expected to brake when needed. But the safety operator, who had been looking down at the self-driving system’s display screen, failed to brake in time. Ms Herzberg was hit by the vehicle and subsequently died of her injuries.

The cause of the accident therefore has many elements, but is ultimately a system-design failure. When its perception module gets confused, an AV should slow down. But unexpected braking can cause problems of its own: confused AVs have in the past been rear-ended (by human drivers) after slowing suddenly. Hence the delegation of responsibility for braking to human safety drivers, who are there to catch the system when an accident seems imminent. In theory adding a safety driver to supervise an imperfect system ensures that the system is safe overall. But that only works if they are paying attention to the road at all times. Uber is now revisiting its procedures and has suspended all testing of its AVs; it is unclear when, or even if, it will be allowed to resume testing. Other AV-makers, having analysed video from the Tempe accident, say their systems would have braked to avoid a collision. In the long term, AVs promise to be much safer than ordinary cars, given that 94% of accidents are caused by driver error. But right now the onus is on Uber and AV-makers to reassure the public that they are doing everything they can to avoid accidents on the road to a safer future.


The Economist explains

Tempe police released photographs from the pedestrian death involving an Uber self-driving car. A 49-year-old woman was hit and killed by a self-driving Volvo operated by Uber while crossing a street in Tempe (Photo: Tempe Police Department)

The driver behind the wheel of an autonomous Uber vehicle that fatally struck a woman in Tempe in March was watching "The Voice" via a streaming service in the minutes leading up to the crash, a police report says.

The detailed report of more than 300 pages was released by Tempe police Thursday night, along with video and photos from the scene of the March 18 collision. Also released was the 911 call made by the driver, Rafaela Vasquez, 44, after the crash.

The documents indicate police are seeking manslaughter charges against Vasquez.

The Mill Avenue collision, which killed 49-year-old Elaine Herzberg as she walked across the street midblock, was the first fatal crash involving a pedestrian and a self-driving car.


The material includes blurred video from officers' body cameras. One video captures an officer's conversation with Vasquez still seated behind the wheel.

"The car was in auto-drive," Vasquez says to the officer.

"All of a sudden ... the car didn't see it, I couldn't see it," she says. "I know I hit her.''

Tempe police released body camera video in the nation's first pedestrian death involving an autonomous vehicle. A woman was hit while crossing a street. Tempe Police Department

Uber said Friday that interacting with any mobile device, including smart watches, while operating one of its vehicles on a public road is a fireable offense, and that policy is made clear to employees in training and through workplace posters.

In light of the accident, Uber said it plans to bolster its safety training.

“We have a strict policy prohibiting mobile device usage for anyone operating our self-driving vehicles," a company spokesperson said in a written statement. "We plan to share more on the changes we’ll make to our program soon.”

Vasquez was let go from the company along with all the other autonomous drivers in Arizona when Uber decided to end its tests here in May, the company official said.

Vasquez was trained and expected to remain attentive to take control of the vehicle and avoid a collision, according to the company.

"Our system is a developmental self-driving system, which requires the close attention of a human operator at all times," Uber's spokesperson said. "Our operators are expected to maintain attentiveness to the road ahead and react when the system fails to do so, and are trained to uphold this responsibility."

Vasquez was given a field test and police initially determined she was not impaired. A few days after the crash, police obtained a search warrant for Vasquez's two cellphones and served warrants on three companies that provide streaming services — Hulu, Netflix and Google, which owns YouTube — in an effort to determine if the driver had been watching shows on her phones while driving.

One of those providers, Hulu, later provided a record of usage on one of Vasquez's phones that showed she was watching "The Voice,'' a talent competition show on NBC, right before the collision. The Hulu record showed her streaming ended at 21:59 hours — or 9:59 p.m.


The crash occurred at 10 p.m., according to records.

Tempe police, in the report, reviewed video from inside the Volvo XC90 — some of which previously was made public — that showed Vasquez looking down moments before the crash.

“She appears to be looking down at the area near her right knee at various points in the video,” the report says. “Sometimes, her face appears to react and show a smirk or laugh at various points during the times that she is looking down. Her hands are not visible in the frame of the video during these times.”

The report details an exhaustive analysis of data from the vehicle and re-enacting the crash at the site.

Tempe police have released the audio of Rafaela Vasquez, 44, calling 911 after fatally hitting a pedestrian on March 18, 2018. Arizona Republic

The analysis showed that nine video segments from dashboard cameras in the vehicle covered 11.8 miles prior to the crash. During that distance, Vasquez looked down 204 times toward her right knee, the report says. Of the nearly 22 minutes that elapsed during that distance, Vasquez was looking down for 6 minutes and 47 seconds.

"This crash would not have occurred if Vasquez would have been monit...

Tempe police release report, audio, photo

Police in Tempe, Arizona, have released a new report regarding Uber’s fatal self-driving car crash last March, which reveals that the safety driver Rafaela Vasquez was streaming The Voice on Hulu on her phone at the time of the accident, via Reuters.

The crash killed 49-year-old Elaine Herzberg, who was crossing the street with a bicycle when Uber’s self-driving test car struck her at 39 mph. Why the car failed to avoid her has been the subject of scrutiny from both officials and the wider industry. The Tempe Police Department’s 318-page report on its investigation of the incident found that Vasquez’s Hulu account was streaming the show for 42 minutes prior to the crash, right up until the time the accident occurred. According to the National Transportation Safety Board’s (NTSB) timeline of the crash, that means that Vasquez would have started watching her show within minutes of starting the drive.


The report claims that the crash was “entirely avoidable” had Vasquez been paying more attention. Vasquez could face charges of vehicular manslaughter, although police haven’t said whether or not she will be charged yet.

In a video of the crash released by police officers, Vasquez can be seen looking down in front of her for several seconds before the crash. Vasquez had previously told investigators from the NTSB that she had been “monitoring the self-driving system interface,” which is displayed on an iPad mounted on the vehicle’s center console, at the time of the crash. She also said that both her personal and business phones were with her in the vehicle, but neither was in use until after the crash.

Uber says that in the past it didn’t monitor its safety drivers in real time, although it did perform spot checks to try and make sure they were following the rules. It also says a handful of drivers were previously fired for using a cell phone while operating test cars, though it didn’t say exactly how many. In a statement released to The Verge, an Uber spokesperson commented that “We continue to cooperate fully with ongoing investigations while conducting our own internal safety review. We have a strict policy prohibiting mobile device usage for anyone operating our self-driving vehicles. We plan to share more on the changes we’ll make to our program soon.”

One of the main reasons that the company places safety drivers in its test vehicles in the first place is to intervene in situations like this. According to the NTSB’s preliminary report on the incident, Uber’s self-driving car was able to detect Herzberg as she was attempting to cross the road but failed to brake. Per that report, the car’s radar and LIDAR sensors were able to detect an object in the path of the vehicle — Herzberg — about six seconds before the crash, but it misclassified her as an unknown object, then a vehicle, then finally a bicycle. At 1.3 seconds before impact, the system tried to initiate an emergency braking maneuver, but Uber had deactivated the vehicle’s factory-equipped automated emergency braking system to help ensure less erratic testing.

In the wake of the crash, Uber suspended all self-driving tests in the state of Arizona, although Uber CEO Dara Khosrowshahi said in May that the company plans to resume self-driving tests in a few months and is still “absolutely committed to self-driving cars.”...

Safety driver of fatal self-driving Uber crash was reportedly watching Hulu at time of accident

Police in Tempe, Arizona, said evidence showed the "safety" driver behind the wheel of a self-driving Uber was distracted and streaming a television show on her phone right up until about the time of a fatal accident in March, deeming the crash that rocked the nascent industry "entirely avoidable."

A 318-page report from the Tempe Police Department, released late on Thursday in response to a public records request, said the driver, Rafaela Vasquez, repeatedly looked down and not at the road, glancing up just a half second before the car hit 49-year-old Elaine Herzberg, who was crossing the street at night.

According to the report, Vasquez could face charges of vehicular manslaughter. Police said that, based on testing, the crash was "deemed entirely avoidable" had Vasquez been paying attention.

Police obtained records from Hulu, an online service for streaming television shows and movies, which showed Vasquez's account was playing the television talent show "The Voice" the night of the crash for about 42 minutes, ending at 9:59 p.m., which "coincides with the approximate time of the collision," the report says.

It is not clear whether Vasquez will be charged; police submitted their findings to county prosecutors, who will make the determination. The Maricopa County Attorney's Office referred the case to the Yavapai County Attorney's Office because of a conflict, and that office could not be reached late Thursday.

Vasquez could not immediately be reached for comment and Reuters could not locate her attorney.

The Uber car was in autonomous mode at the time of the crash, but Uber, like other self-driving car developers, requires a back-up driver in the car to intervene when the autonomous system fails or a tricky driving situation occurs.

Vasquez looked up just 0.5 seconds before the crash, after keeping her head down for 5.3 seconds, the Tempe Police report said. Uber's self-driving Volvo SUV was traveling at just under 44 miles-per-hour.

Uber declined to comment.

Last month, an Uber spokeswoman said the company was undergoing a "top-to-bottom safety review," and had brought on a former federal transportation official to help improve the company's safety culture. The company prohibits the use of any mobile device by safety drivers while the self-driving cars are on a public road, and drivers are told they can be fired for violating this rule.

Police said a review of video from inside the car showed Vasquez was looking down during the trip, and her face "appears to react and show a smirk or laugh at various points during the times that she is looking down." The report found that Vasquez "was distracted and looking down" for close to seven of the nearly 22 minutes prior to the collision.

Tempe Police Detective Michael McCormick asked Hulu for help in the investigation, writing in a May 10 email to the company that "this is a very serious case where the charges of vehicle manslaughter may be charged, so correctly interpreting the information provided to us is crucial." Hulu turned over the records on May 31.

According to a report last month by the National Transportation Safety Board, which is also investigating the crash, Vasquez told federal investigators she had been monitoring the self-driving interface in the car and that neither her personal nor business phones were in use until after the crash. That report showed Uber had disabled the emergency braking system in the Volvo, and Vasquez began braking less than a second after hitting Herzberg.

Herzberg, who was homeless, was walking her bicycle across the street, outside of a crosswalk on a four-lane road, the night of March 18 when she was struck by the front right side of the Volvo.

The police report faulted Herzberg for "unlawfully crossing the road at a location other than a marked crosswalk."

In addition to the report, police on Thursday released a slew of audio files of 911 calls made the night of the crash by Vasquez, who waited at the scene for police, and by bystanders; photographs of Herzberg's damaged bicycle and the Uber car; and videos from police officers' body cameras that capture the minutes after the crash, including harrowing screams in the background.

The crash dealt Uber a major setback in its efforts to develop self-driving cars, and the company shuttered its autonomous car testing program in Arizona after the incident. It says it plans to begin testing elsewhere this summer, although in some cities it will have to first win over increasingly wary regulators....

Uber driver streamed The Voice before self-driving car crash

Tempe Police have released new body camera video and the police report from the night of March 18, when a self-driving Uber hit and killed a pedestrian mid-block on Mill Avenue. The backup driver was streaming the TV show "The Voice" before fatally striking the pedestrian, the police report said.

The case has now been handed over to the Yavapai County Attorney for review of possible charges against the backup driver, Rafaela Vasquez. The Maricopa County Attorney's Office was originally given the case, but handed it over to Yavapai because of a "conflict of interest."

Vasquez was still in the driver's seat of the Uber when she was questioned by officers.

"Are you alright?" an officer asked.

"Yeah," Vasquez said. "I'm just shaken up."

Vasquez is seen giving the officers her license and insurance and asking about the victim.

"Is the person OK?" Vasquez asked.

The pedestrian, Elaine Herzberg, was killed in the collision. New pictures show her bike left behind and a big dent on the right front end of the Uber.

"So what exactly happened?" an officer asked.

"The car was in auto-drive and all of a sudden, I didn't see it," Vasquez said. "The car didn't see it. And all of a sudden it was just there. Shot out in front and I know I hit them."

In a separate video released by police in March, Vasquez is seen looking down for a few seconds at a time. But it's unclear whether her looking down was related to monitoring the self-driving program.

According to a police report, investigators noted Vasquez looking down and to her right side multiple times. Before the collision occurred, her eyes were averted from the roadway for approximately 5.3 seconds.

"Sometimes, her face appears to react and show a smirk or laugh at various points during the times that she is looking down," the report said. "Her hands are not visible in the frame of the video during these times."

Officers calculated that had Vasquez been paying attention, she could have reacted 143 feet before impact and brought the SUV to a stop about 42.6 feet before hitting Herzberg.
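
As a sanity check on those figures, the police numbers imply a braking distance of about 100 feet. A minimal sketch (our arithmetic, assuming constant deceleration from the just-under-44 mph speed cited in other reports of the crash):

    # Cross-checking the police figures: reacting 143 ft before impact and
    # stopping 42.6 ft short leaves about 100 ft in which to brake.
    # Assumption (ours): constant deceleration from just under 44 mph.

    speed = 44 * 5280 / 3600            # ~64.5 ft/s
    braking_distance = 143 - 42.6       # ~100.4 ft

    # Implied constant deceleration: a = v^2 / (2d)
    a_implied = speed ** 2 / (2 * braking_distance)

    print(f"implied deceleration: {a_implied:.1f} ft/s^2 (~{a_implied / 32.2:.2f} g)")
    # About 20.7 ft/s^2, or 0.64 g: hard but achievable emergency braking.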

"This crash would not have occurred if Vasquez would have been monitoring the vehicle and roadway conditions and was not distracted," the report stated.

After the crash, Vasquez was given a field sobriety test.

"Your internal clock is a little slow, but not anything alarming," the officer is heard saying.

"I'm sick over what happened," Vasquez told the officer.

The Tempe police report says Vasquez had a “disregard for assigned job function to intervene in a hazardous situation” and the crash may have been avoided if she was watching the road.

She told officers she "slammed" on the brakes immediately after hitting the woman but Herzberg "came out of nowhere and she didn't see her," the police report said.

In the police report, the investigator explained there were no skid marks or antilock brake marks from the autonomous vehicle that would have indicated "dynamic braking" prior to the crash.

The report also says Herzberg's unlawful crossing of the road at an unmarked location was a factor in the crash.

After the crash occurred, officers attempted to locate Vasquez at her Phoenix home but were unsuccessful. She was later found in Tucson and her personal and work cell phones were seized as evidence.

As part of the investigation, Tempe police reached out to YouTube, Netflix and Hulu for the video viewing history of both cell phones. Netflix was the first to respond, saying there was no activity on either phone during the specified time period.

On May 31, Hulu responded saying the most accurate way to measure playback times is to look at player "heartbeats."

It was discovered based on Hulu's heartbeat data that a user on Vasquez's account was watching an episode of "The Voice" during the time of the collision. However, the heartbeats were not consistent, meaning the user may have paused the player, switched apps, experienced buffering issues or lost service.
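
The report doesn't spell out the heartbeat format, but the inference is simple to illustrate: if the player emits a timestamp at a steady interval while actually playing, gaps in the sequence mark possible pauses, app switches, buffering or lost service. A minimal sketch, with hypothetical timestamps and an assumed nominal interval (Hulu's actual telemetry format is not described in the police report):

    # Hypothetical "heartbeat" gap analysis. The timestamps and the
    # 30-second nominal interval are invented for illustration only.

    EXPECTED_INTERVAL = 30      # assumed seconds between heartbeats
    TOLERANCE = 15              # slack for jitter before calling it a gap

    heartbeats = [0, 30, 60, 90, 210, 240, 270]   # invented playback seconds

    for a, b in zip(heartbeats, heartbeats[1:]):
        if b - a > EXPECTED_INTERVAL + TOLERANCE:
            # A gap could mean a pause, an app switch, buffering or lost
            # service: the data alone cannot distinguish between them.
            print(f"playback gap from t={a}s to t={b}s ({b - a}s)")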

Uber has since shut down their self-driving program in the Valley. The victim's family reached a settlement with Uber.

The detective seeking the warrant, identified as J. Barutha, wrote that based on information from the vehicular homicide unit, "it is believed that the crime of vehicular manslaughter has occurred and that evidence of this offense is currently located in a 2017 Grey Volvo XC-90."

The National Transportation Safety Board, in a preliminary report issued last month, said the autonomous driving system on Uber's Volvo XC-90 SUV spotted Herzberg about six seconds before hitting her, but did not stop because the system used to automatically apply brakes in potentially dangerous situations had been disabled.

The system is disabled while Uber's cars are under computer control, "to reduce the potential for erratic vehicle behavior," the NTSB report said. Instead of the system, Uber relies on the human backup driver to intervene, the report stated. But the system is not desig...

Arizona driver in deadly self-driving Uber crash could face charges

The crash of an Uber self-driving car that killed an Arizona woman in March was “entirely avoidable,” according to police reports released by the Tempe Police Department. Cellphone data obtained by police suggests that the Uber operator was also streaming an episode of reality show The Voice at the time of the fatal incident.

The documents, released to Gizmodo in response to a public records request, show that Tempe police found that the operator of the Uber autonomous vehicle could likely have avoided the fatal crash, had she been paying attention—but instead she was likely watching a video on her phone. Police also noted that Uber’s vehicles apparently did not alert operators to take over the vehicle during incidents.

After the crash, Uber laid off 300 test drivers from its autonomous vehicle unit and halted testing in Arizona. An Uber spokesperson noted that using any device behind the wheel, even a smartwatch, would be considered a fireable offense.

The crash killed Elaine Herzberg, a 49-year-old woman who was walking across the street with a bicycle when she was struck by the Uber vehicle. Uber’s autonomous technology failed to detect Herzberg, and the human driver who was supposed to act as a safeguard appeared distracted in previously released footage. Now, several documents in the 318-page report suggest the driver, Rafaela Vasquez, may have been streaming an episode of The Voice at the time of the crash.

“The driver in this case could have reacted and brought the vehicle to a stop 42.61 feet prior to the pedestrian,” one of the documents concluded.

However, Tempe police also noted that Uber’s vehicles did not alert autonomous vehicle operators about when to take control of their cars.

“During the current development phase, vehicle operators are relied upon to perform evasive maneuvers,” one Tempe detective wrote. “I was not able to find anywhere in the literature that the self-driving systems alerts the vehicle operator to potential hazards or when they should take manual control of the vehicle to perform an evasive maneuver.” Uber’s spokesperson said that the company is reviewing safety practices and procedures and has hired former National Transportation Safety Board chair Christopher Hart as a safety advisor.

In a statement, an Uber spokesperson said the company’s policy prohibits drivers of its autonomous vehicles from using mobile devices while behind the wheel.

“We continue to cooperate fully with ongoing investigations while conducting our own internal safety review. We have a strict policy prohibiting mobile device usage for anyone operating our self-driving vehicles,” the spokesperson told Gizmodo. “We plan to share more on the changes we’ll make to our program soon.”

Following the crash, police obtained warrants for work and personal cellphones belonging to Vasquez. Police also sent search warrants to YouTube, Netflix, and Hulu to recover Vasquez’s viewing history on her devices around the time of the accident.

The crash occurred around 10pm—more specifically, the report states that video from inside the car “ceases recording with a UTC time of 4:58:50 [9:58pm local time], just moments after striking the pedestrian.”

The data Hulu provided to authorities shows Vasquez was streaming an episode of The Voice called “The Blind Auditions, Part 5” between 9:16pm and 9:59pm local time, according to account history turned over to the police by Hulu’s legal team. In response to the warrants, YouTube and Netflix said that Vasquez was not actively viewing video on either platform at the time of the crash.

Hulu’s legal team initially released viewing data for another user’s account, according to the police reports, then corrected the error in response to follow-up questions from the Tempe police and released Vasquez’s data.

In reviewing footage collected from Uber’s own cameras, which filmed the car’s view of the road and Vasquez as she was driving, police determined that Vasquez was frequently distracted.

“She appears to be looking down at the area near her right knee at various points in the video,” the report reads. “During the 9 video clips, I found that the driver looked down 204 times with nearly all of them having the same eye placement at the lower center console near her right knee. One hundred sixty-six of these instances of looking down occurred while the vehicle was in motion.” Vasquez appeared to laugh or smirk during moments when she was looking toward her knee, the report added.

Police didn’t just note the number of instances her eyes were off the road, they also focused on the aggregate amount of time that Vasquez was apparently distracted. “The vehicle was in motion for 21 minutes, 48 seconds. Of that time, the total amount of t...

Uber Driver in Fatal Tempe Crash May Have Been Watching The Voice Behind the Wheel

The person behind the wheel of the Uber self-driving Volvo SUV that struck and killed a woman in Arizona was likely streaming "The Voice" on a cellphone at the time of the incident.

The Tempe Police Department released a 318-page document late Thursday that sheds new light on what likely happened when the vehicle hit pedestrian Elaine Herzberg, 49, in March as she walked a bicycle across a road in Tempe. According to an analysis of the incident, police said "the crash was deemed entirely avoidable."

The test driver, Rafaela Vasquez, who was hired by Uber to sit behind the wheel and take over in case of emergencies, had both personal and business phones in the car at the time of the crash. The Tempe Police Department requested data from Hulu, YouTube and Netflix as part of its investigation. The apps were found on one or both of Vasquez's phones.

According to data provided by Hulu, the driver was streaming NBC's popular show "The Voice" on the Hulu app for about 40 minutes. The end time coincides with the crash. Netflix and YouTube reportedly weren't in use.

During nine video segments obtained from the vehicle's dashcam, Vasquez looked down 204 times. The report states each of those times had "the same eye placement at the lower center console near the area of her right knee."

Vasquez's eyes were averted for a total of more than six minutes, covering 3.67 of the 11.8 miles traveled during the trip. Vasquez had previously told National Transportation Safety Board investigators in a post-crash interview that the phones weren't in use.

Vasquez could face charges for vehicular manslaughter, according to Tempe Police Department.

Vasquez could not immediately be reached for comment.

Uber said it is cooperating with ongoing investigations and conducting its own internal safety review.

"We have a strict policy prohibiting mobile device usage for anyone operating our self-driving vehicles. We plan to share more on the changes we'll make to our program soon," according to an Uber spokesperson

The spokesperson added that any physical mobile device usage while the vehicle is on the road is a fireable offense.

The crash was a major setback for Uber's self-driving car operation. In May, Uber said it was ending self-driving car testing in Arizona and laying off 300 Uber workers. The company said it would focus on its autonomous vehicle efforts in San Francisco and Pittsburgh.

This is the latest development in the ongoing investigation into the fatal crash. In May, the National Transportation Safety Board determined Uber's self-driving car detected Herzberg, but Uber had turned off the vehicle's automatic emergency braking, so the SUV did not attempt to brake. The feature was disabled to reduce the potential for unwanted braking, such as for a plastic bag in the road.

Earlier this year, Uber said it employs about 400 human safety drivers like Vasquez across various cities.

CNNMoney's Matt McFarland contributed to this report...

Uber operator in fatal self-driving vehicle crash was likely streaming 'The Voice'

SAN FRANCISCO/WASHINGTON (Reuters) - The safety driver behind the wheel of a self-driving Uber car in Tempe, Arizona, was streaming a television show on her phone until about the time of a fatal crash, according to a police report that deemed the March 18 incident “entirely avoidable.”

A report by the Tempe Police Department said the driver, Rafaela Vasquez, repeatedly looked down and not at the road, glancing up a half second before the car hit Elaine Herzberg, 49, who was crossing the street at night.

The report said police concluded the crash, which has dealt Uber Technologies Inc a major setback in its efforts to develop self-driving cars, would have been “entirely avoidable” if Vasquez had been paying attention.

Vasquez could face charges of vehicular manslaughter, according to the report, which was released late on Thursday in response to a public records request.

She could not immediately be reached for comment and Reuters could not locate her attorney.

Police obtained records from Hulu, an online service for streaming TV shows and movies, which showed Vasquez’s account was playing the TV talent show “The Voice” for about 42 minutes on the night of the crash, ending at 9:59 p.m., which “coincides with the approximate time of the collision,” the report said.

Police submitted their findings to local prosecutors, who will make a determination on whether to file criminal charges. The Maricopa County Attorney’s Office referred the case to the Yavapai County Attorney’s Office because of a conflict.

A spokeswoman for the Yavapai County Attorney’s Office said on Friday that “the matter is still pending review. We do not have a projected timeline for a decision.”

The Uber car was in autonomous mode at the time of the crash, but the company, like other self-driving car developers, requires a back-up driver inside to intervene when the autonomous system fails or a tricky driving situation occurs.

Vasquez looked up just 0.5 seconds before the crash, after keeping her head down for 5.3 seconds, the Tempe police report said. Uber’s self-driving Volvo SUV was traveling at just under 44 miles (71 km) per hour.

“We continue to cooperate fully with ongoing investigations while conducting our own internal safety review,” an Uber spokeswoman said. “We have a strict policy prohibiting mobile device usage for anyone operating our self-driving vehicles. We plan to share more on the changes we’ll make to our program soon.”

Last month, the Uber spokeswoman said the company was undergoing a “top-to-bottom safety review,” and had brought on a former U.S. federal transportation official to help improve its safety culture.

‘VERY SERIOUS CASE’

Police said a review of video from inside the Volvo showed Vasquez was looking down during the trip, and her face “appears to react and show a smirk or laugh at various points during the times that she is looking down.” The report found that Vasquez “was distracted and looking down” for close to seven of the nearly 22 minutes prior to the collision.

Tempe Police Detective Michael McCormick asked Hulu for help in the investigation, writing in a May 10 email to the company that “this is a very serious case where the charges of vehicle manslaughter may be charged, so correctly interpreting the information provided to us is crucial.” Hulu turned over the records on May 31.

According to a report last month by the National Transportation Safety Board, which is also investigating the crash, Vasquez told federal investigators she had been monitoring the self-driving interface in the car and that neither her personal nor business phones were in use until after the crash. That report showed Uber had disabled the emergency braking system in the Volvo, and Vasquez began braking less than a second after hitting Herzberg.

Herzberg, who was homeless, was walking her bicycle across the street, outside of a crosswalk on a four-lane road, when she was struck by the front right side of the Volvo.

FILE PHOTO: U.S. National Transportation Safety Board (NTSB) investigators examine a self-driving Uber vehicle involved in a fatal accident in Tempe, Arizona, U.S., March 20, 2018. A woman was struck and killed by the vehicle on March 18, 2018. National Transportation Safety Board/Handout via REUTERS/File Photo

The police report faulted Herzberg for “unlawfully crossing the road at a location other than a marked crosswalk.”

In addition to the report, police released a slew of audio files of 911 calls made by Vasquez, who waited at the scene for police, and bystanders; photographs of Herzberg’s damaged bicycle and the Uber car; and videos from police officers’ body cameras that capture the minutes after the crash, including harrowing screams in the background.

Uber shuttered its autonomous car testing program in Arizona after the incident, and says it plans to begin testing elsewhere this summer, although in some cities it will have to first win over increasingly wary regulators....

Uber car's 'safety' driver streamed TV show before fatal crash: police

The backup "safety driver" in an Uber Technologies self-driving vehicle that killed a pedestrian in March was streaming the popular television show The Voice on her mobile phone in the moments before the crash, according to police in Arizona.

A 318-page report filed by the Tempe Police Department refutes driver Rafaela Vasquez's previous statement to federal safety investigators that she wasn't using her mobile devices when the car struck and killed a woman who was crossing the street at night.

Police were able to obtain records of Vasquez's account from the television streaming service Hulu LLC, which showed she'd streamed the talent show for 42 minutes on the night of the March 18 crash. Her stream ended at 9.59pm, around the same time Elaine Herzberg, 49, was hit by the Uber, which was in self-driving mode, the report said.

Police concluded in the report that the crash was "entirely avoidable" and said Vasquez could face vehicular manslaughter charges.

Sensors on the Uber Volvo SUV had detected Herzberg six seconds before the impact, and the system recognized 1.3 seconds prior that it was going to crash, according to a preliminary report by the National Transportation Safety Board. But the system couldn't activate the brakes because, according to Uber, emergency braking isn't enabled while the vehicle is under computer control. The responsibility for braking was left up to Vasquez, who didn't look up until 0.5 seconds before the accident.

Vasquez was looking away from the road for long stretches in the time before the crash, according to an internal video showing her that was released by police. She told NTSB investigators she was monitoring the self-driving system's interface. Vasquez couldn't immediately be reached for comment.

A spokesperson for Uber said the company continues to cooperate fully with ongoing investigations while conducting its own internal safety review, adding that Uber policy prohibits mobile device usage for anyone operating its self-driving vehicles.

The fatality prompted the ride-hailing giant to suspend all road testing of self-driving vehicles in areas including Phoenix, Pittsburgh, San Francisco and Toronto. The incident marked the first death involving a fully autonomous car, and sparked concern over the safety of the technology.

Police submitted their findings to county prosecutors, who'll make the determination about whether to bring any charges. The Maricopa County Attorney's Office was last reported to have referred the case to the Yavapai County Attorney's Office....

Uber operator was watching The Voice just before fatal self-driving crash in Tempe, Arizona

The 18th of March 2018 was the day tech insiders had been dreading. That night, a new moon added almost no light to a poorly lit four-lane road in Tempe, Arizona, as a specially adapted Uber Volvo XC90 detected an object ahead. Part of the modern gold rush to develop self-driving vehicles, the SUV had been driving autonomously, with no input from its human backup driver, for 19 minutes. An array of radar and light-emitting lidar sensors allowed onboard algorithms to calculate that, given their host vehicle’s steady speed of 43mph, the object was six seconds away – assuming it remained stationary. But objects in roads seldom remain stationary, so more algorithms crawled a database of recognizable mechanical and biological entities, searching for a fit from which this one’s likely behavior could be inferred.

At first the computer drew a blank; seconds later, it decided it was dealing with another car, expecting it to drive away and require no special action. Only at the last second was a clear identification found – a woman with a bike, shopping bags hanging confusingly from handlebars, doubtless assuming the Volvo would route around her as any ordinary vehicle would. Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention. Elaine Herzberg, aged 49, was struck and killed, leaving more reflective members of the tech community with two uncomfortable questions: was this algorithmic tragedy inevitable? And how used to such incidents would we, should we, be prepared to get?

“In some ways we’ve lost agency. When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand.”

If these words sound shocking, they should, not least because Ellen Ullman, in addition to having been a distinguished professional programmer since the 1970s, is one of the few people to write revealingly about the process of coding. There’s not much she doesn’t know about software in the wild.

“People say, ‘Well, what about Facebook – they create and use algorithms and they can change them.’ But that’s not how it works. They set the algorithms off and they learn and change and run themselves. Facebook intervene in their running periodically, but they really don’t control them. And particular programs don’t just run on their own, they call on libraries, deep operating systems and so on ...”

What is an algorithm?

Few subjects are more constantly or fervidly discussed right now than algorithms. But what is an algorithm? In fact, the usage has changed in interesting ways since the rise of the internet – and search engines in particular – in the mid-1990s. At root, an algorithm is a small, simple thing; a rule used to automate the treatment of a piece of data. If a happens, then do b; if not, then do c. This is the “if/then/else” logic of classical computing. If a user claims to be 18, allow them into the website; if not, print “Sorry, you must be 18 to enter”. At core, computer programs are bundles of such algorithms. Recipes for treating data. On the micro level, nothing could be simpler. If computers appear to be performing magic, it’s because they are fast, not intelligent.
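
The age-gate rule in that passage, written out as a runnable sketch in Python (the wording of the refusal message is taken from the text above):

    # The "if/then/else" rule from the passage, as code.
    def check_entry(claimed_age: int) -> str:
        if claimed_age >= 18:       # if a happens...
            return "Welcome"        # ...then do b
        else:                       # if not...
            return "Sorry, you must be 18 to enter"   # ...then do c

    print(check_entry(21))      # Welcome
    print(check_entry(16))      # Sorry, you must be 18 to enter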

Recent years have seen a more portentous and ambiguous meaning emerge, with the word “algorithm” taken to mean any large, complex decision-making software system; any means of taking an array of input – of data – and assessing it quickly, according to a given set of criteria (or “rules”). This has revolutionized areas of medicine, science, transport, communication, making it easy to understand the utopian view of computing that held sway for many years. Algorithms have made our lives better in myriad ways.

Only since 2016 has a more nuanced consideration of our new algorithmic reality begun to take shape. If we tend to discuss algorithms in almost biblical terms, as independent entities with lives of their own, it’s because we have been encouraged to think of them in this way. Corporations like Facebook and Google have sold and defended their algorithms on the promise of objectivity, an ability to weigh a set of conditions with mathematical detachment and absence of fuzzy emotion. No wonder such algorithmic decision-making has spread to the granting of loans/bail/benefits/college places/job interviews and almost anything requiring choice.

We no longer accept the sales pitch for this type of algorithm so meekly. In her 2016 book Weapons of Math Destruction, Cathy O’Neil, a former math prodigy who left Wall Street to teach and write and run the excellent mathbabe blog, demonstrated beyond question that, far from eradicating human biases, algorithms could magnify and entrench them. After all, software is written by overwhelmingly affluent white and Asian men – and it will inevitably reflect their assumptions (Google “racist soap dispenser” to see how this plays out in even mundane real-world situations). Bias doesn’t require malice to become harm, and unlike a human being, we can’t easily ask an algorithmic gatekeeper to explain its decision. O’Neil called for “algorithmic audits” of any systems directly affecting the public, a sensible idea that the tech industry will fight tooth and nail, because algorithms are what the companies sell; the last thing they will volunteer is transparency.

The good news is that this battle is under way. The bad news is that it’s already looking quaint in relation to what comes next. So much attention has been focused on the distant promises and threats of artificial intelligence, AI, that almost no one has noticed us moving into a new phase of the algorithmic revolution that could be just as fraught and disorienting – with barely a question asked.

The algorithms flagged by O’Neil and others are opaque but predictable: they do what they’ve been programmed to do. A skilled coder can in principle examine and challenge their underpinnings. Some of us dream of a citizen army to do this work, similar to the network of amateur astronomers who support professionals in that field. Legislation to enable this seems inevitable.

We might call these algorithms “dumb”, in the sense that they’re doing their jobs according to parameters defined by humans. The quality of result depends on the thought and skill with which they were programmed. At the other end of the spectrum is the more or less distant dream of human-like artificial general intelligence, or AGI. A properly intelligent machine would be able to question the quality of its own calculations, based on something like our own intuition (which we might think of as a broad accumulation of experience and knowledge). To put this into perspective, Google’s DeepMind division has been justly lauded for creating a program capable of mastering arcade games, starting with nothing more than an instruction to aim for the highest possible score. This technique is called “reinforcement learning” and works because a computer can play millions of games quickly in order to learn what generates points. Some call this form of ability “artificial narrow intelligence”, but here the word “intelligent” is being used much as Facebook uses “friend” – to imply something safer and better understood than it is. Why? Because the machine has no context for what it’s doing and can’t do anything else. Neither, crucially, can it transfer knowledge from one game to the next (so-called “transfer learning”), which makes it less generally intelligent than a toddler, or even a cuttlefish. We might as well call an oil derrick or an aphid “intelligent”. Computers are already vastly superior to us at certain specialized tasks, but the day they rival our general ability is probably some way off – if it ever happens. Human beings may not be best at much, but we’re second-best at an impressive range of things.
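
A toy version of the reinforcement-learning loop described above (a sketch, not DeepMind’s system: tabular Q-learning on a five-cell corridor where only the rightmost cell scores a point):

    import random

    # Tabular Q-learning on a 5-cell corridor. The agent starts at cell 0
    # and is told nothing but the score, which it earns only by reaching
    # cell 4 ("aim for the highest possible score").
    N_STATES, ACTIONS = 5, (-1, +1)        # move left or right
    alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for _ in range(500):                   # "millions of games", scaled down
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy: mostly exploit what is known, sometimes explore.
            a = (random.choice(ACTIONS) if random.random() < epsilon
                 else max(ACTIONS, key=lambda act: Q[(s, act)]))
            s2 = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s2 == N_STATES - 1 else 0.0
            # Nudge the estimate toward reward plus the best future value.
            Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2

    # Every non-terminal cell ends up preferring +1 (move right), a policy
    # learned purely from the score.
    print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})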

Here’s the problem. Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.

Clashing codes

These algorithms are not new in themselves. I first encountered them almost five years ago while researching a piece for the Guardian about high frequency trading (HFT) on the stock market. What I found was extraordinary: a human-made digital ecosystem, distributed among racks of black boxes crouched like ninjas in billion-dollar data farms – which is what stock markets had become. Where once there had been a physical trading floor, all action had devolved to a central server, in which nimble, predatory algorithms fed off lumbering institutional ones, tempting them to sell lower and buy higher by fooling them as to the state of the market. Human HFT traders (although no human actively traded any more) called these large, slow participants “whales”, and they mostly belonged to mutual and pension funds – ie the public. For most HFT shops, whales were now the main profit source. In essence, these algorithms were trying to outwit each other; they were doing invisible battle at the speed of light, placing and cancelling the same order 10,000 times per second or slamming so many into the system that the whole market shook – all beyond the oversight or control of humans.

No one could be surprised that this situation was unstable. A “flash crash” had occurred in 2010, during which the market went into freefall for five traumatic minutes, then righted itself over another five – for no apparent reason. I travelled to Chicago to see a man named Eric Hunsader, whose prodigious programming skills allowed him to see market data in far more detail than regulators, and he showed me that by 2014, “mini flash crashes” were happening every week. Even he couldn’t prove exactly why, but he and his staff had begun to name some of the “algos” they saw, much as crop circle hunters named the formations found in English summer fields, dubbing them “Wild Thing”, “Zuma”, “The Click” or “Disruptor”.

Neil Johnson, a physicist specializing in complexity at George Washington University, made a study of stock market volatility. “It’s fascinating,” he told me. “I mean, people have talked about the ecology of computer systems for years in a vague sense, in terms of worm viruses and so on. But here’s a real working system that we can study. The bigger issue is that we don’t know how it’s working or what it could give rise to. And the attitude seems to be ‘out of sight, out of mind’.”

Significantly, Johnson’s paper on the subject was published in the journal Nature and described the stock market in terms of “an abrupt system-wide transition from a mixed human-machine phase to a new all-machine phase characterized by frequent black swan [ie highly unusual] events with ultrafast durations”. The scenario was complicated, according to the science historian George Dyson, by the fact that some HFT firms were allowing the algos to learn – “just letting the black box try different things, with small amounts of money, and if it works, reinforce those rules. We know that’s been done. Then you actually have rules where nobody knows what the rules are: the algorithms create their own rules – you let them evolve the same way nature evolves organisms.” Non-finance industry observers began to postulate a catastrophic global “splash crash”, while the fastest-growing area of the market became (and remains) instruments that profit from volatility. In his 2011 novel The Fear Index, Robert Harris imagines the emergence of AGI – of the Singularity, no less – from precisely this digital ooze. To my surprise, no scientist I spoke to would categorically rule out such a possibility.

All of which could be dismissed as high finance arcana, were it not for a simple fact. Wisdom used to hold that technology was adopted first by the porn industry, then by everyone else. But the 21st century’s porn is finance, so when I thought I saw signs of HFT-like algorithms causing problems elsewhere, I called Neil Johnson again.

“You’re right on point,” he told me: a new form of algorithm is moving into the world, which has “the capability to rewrite bits of its own code”, at which point it becomes like “a genetic algorithm”. He thinks he saw evidence of them on fact-finding forays into Facebook (“I’ve had my accounts attacked four times,” he adds). If so, algorithms are jousting there, and adapting, as on the stock market. “After all, Facebook is just one big algorithm,” Johnson says.

“And I think that’s exactly the issue Facebook has. They can have simple algorithms to recognize my face in a photo on someone else’s page, take the data from my profile and link us together. That’s a very simple concrete algorithm. But the question is what is the effect of billions of such algorithms working together at the macro level? You can’t predict the learned behavior at the level of the population from microscopic rules. So Facebook would claim that they know exactly what’s going on at the micro level, and they’d probably be right. But what happens at the level of the population? That’s the issue.”

To underscore this point, Johnson and a team of colleagues from the University of Miami and Notre Dame produced a paper, Emergence of Extreme Subpopulations from Common Information and Likely Enhancement from Future Bonding Algorithms, purporting to mathematically prove that attempts to connect people on social media inevitably polarize society as a whole. He thinks Facebook and others should model (or be made to model) the effects of their algorithms in the way climate scientists model climate change or weather patterns.

O’Neil says she consciously excluded this adaptive form of algorithm from Weapons of Math Destruction. In a convoluted algorithmic environment where nothing is clear, apportioning responsibility to particular segments of code becomes extremely difficult. This makes them easier to ignore or dismiss, because they and their precise effects are harder to identify, she explains, before advising that if I want to see them in the wild, I should ask what a flash crash on Amazon might look like.

“I’ve been looking out for these algorithms, too,” she says, “and I’d been thinking: ‘Oh, big data hasn’t gotten there yet.’ But more recently a friend who’s a bookseller on Amazon has been telling me how crazy the pricing situation there has become for people like him. Every so often you will see somebody tweet ‘Hey, you can buy a luxury yarn on Amazon for $40,000.’ And whenever I hear that kind of thing, I think: ‘Ah! That must be the equivalent of a flash crash!’”

Anecdotal evidence of anomalous events on Amazon is plentiful, in the form of threads from bemused sellers, and at least one academic paper from 2016, which claims: “Examples have emerged of cases where competing pieces of algorithmic pricing software interacted in unexpected ways and produced unpredictable prices, as well as cases where algorithms were intentionally designed to implement price fixing.” The problem, again, is how to apportion responsibility in a chaotic algorithmic environment where simple cause and effect either doesn’t apply or is nearly impossible to trace. As in finance, deniability is baked into the system.

Real-life dangers

Where safety is at stake, this really matters. When a Toyota Camry appeared to accelerate wildly for no obvious reason, ran off the road and killed its driver, Nasa experts spent six months examining the millions of lines of code in its operating system, without finding evidence for what the driver’s family believed had occurred, but the manufacturer steadfastly denied – that the car had accelerated of its own accord. Only when a pair of embedded software experts spent 20 months digging into the code were they able to prove the family’s case, revealing a twisted mass of what programmers call “spaghetti code”, full of algorithms that jostled and fought, generating anomalous, unpredictable output. The autonomous cars currently being tested may contain 100m lines of code and, given that no programmer can anticipate all possible circumstances on a real-world road, they have to learn and receive constant updates. How do we avoid clashes in such a fluid code milieu, not least when the algorithms may also have to defend themselves from hackers?

Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”

Unlike our old electro-mechanical systems, these new algorithms are also impossible to test exhaustively. Unless and until we have super-intelligent machines to do this for us, we’re going to be walking a tightrope.

Dyson questions whether we will ever have self-driving cars roaming freely through city streets, while Toby Walsh, a professor of artificial intelligence at the University of New South Wales who wrote his first program at age 13 and ran a tyro computing business by his late teens, explains from a technical perspective why this is.

“No one knows how to write a piece of code to recognize a stop sign. We spent years trying to do that kind of thing in AI – and failed! It was rather stalled by our stupidity, because we weren’t smart enough to learn how to break the problem down. You discover when you program that you have to learn how to break the problem down into simple enough parts that each can correspond to a computer instruction [to the machine]. We just don’t know how to do that for a very complex problem like identifying a stop sign or translating a sentence from English to Russian – it’s beyond our capability. All we know is how to write a more general purpose algorithm that can learn how to do that given enough examples.”

Hence the current emphasis on machine learning. We now know that Herzberg, the pedestrian killed by an automated Uber car in Arizona, died because the algorithms wavered in correctly categorizing her. Was this a result of poor programming, insufficient algorithmic training or a hubristic refusal to appreciate the limits of our technology? The real problem is that we may never know.

“And we will eventually give up writing algorithms altogether,” Walsh continues, “because the machines will be able to do it far better than we ever could. Software engineering is in that sense perhaps a dying profession. It’s going to be taken over by machines that will be far better at doing it than we are.”

Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs.

“Where there are choices to be made, that’s where ethics comes in. And we tend to want to have an agency that we can interrogate or blame, which is very difficult to do with an algorithm. This is one of the criticisms of these systems so far, in that it’s not possible to go back and analyze exactly why some decisions are made, because the internal number of choices is so large that how we got to that point may not be something we can ever recreate to prove culpability beyond doubt.”

The counter-argument is that, once a program has slipped up, the entire population of programs can be rewritten or updated so it doesn’t happen again – unlike humans, whose propensity to repeat mistakes will doubtless fascinate intelligent machines of the future. Nonetheless, while automation should be safer in the long run, our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable. In an algorithmic environment, many unexpected outcomes may not have been foreseeable to humans – a feature with the potential to become a scoundrel’s charter, in which deliberate obfuscation becomes at once easier and more rewarding. Pharmaceutical companies have benefited from the cover of complexity for years (see the case of Thalidomide), but here the consequences could be both greater and harder to reverse.

The military stakes

Commerce, social media, finance and transport may come to look like small beer in future, however. If the military no longer drives innovation as it once did, it remains tech’s most consequential adopter. No surprise, then, that an outpouring of concern among scientists and tech workers has accompanied revelations that autonomous weapons are ghosting toward the battlefield in what amounts to an algorithmic arms race. A robotic sharpshooter currently polices the demilitarized zone between North and South Korea, and while its manufacturer, Samsung, denies it to be capable of autonomy, this claim is widely disbelieved. Russia, China and the US all claim to be at various stages of developing swarms of coordinated, weaponized drones, while the latter plans missiles able to hover over a battlefield for days, observing, before selecting their own targets. A group of Google employees resigned over, and thousands more questioned, the tech monolith’s provision of machine learning software to the Pentagon’s Project Maven “algorithmic warfare” program – concerns to which management eventually responded, agreeing not to renew the Maven contract and to publish a code of ethics for the use of its algorithms. At time of writing, competitors including Amazon and Microsoft have resisted following suit.

In common with other tech firms, Google had claimed moral virtue for its Maven software: that it would help choose targets more efficiently and thereby save lives. The question is how tech managers can presume to know what their algorithms will do or be directed to do in situ – especially given the certainty that all sides will develop adaptive algorithmic counter-systems designed to confuse enemy weapons. As in the stock market, unpredictability is likely to be seen as an asset rather than handicap, giving weapons a better chance of resisting attempts to subvert them. In this and other ways we risk in effect turning our machines inside out, wrapping our everyday corporeal world in spaghetti code.

Lucy Suchman of Lancaster University in the UK co-authored an open letter from technology researchers to Google, asking them to reflect on the rush to militarize their work. Tech firms’ motivations are easy to fathom, she says: military contracts have always been lucrative. For the Pentagon’s part, a vast network of sensors and surveillance systems has run ahead of any ability to use the screeds of data so acquired.

“They are overwhelmed by data, because they have new means to collect and store it, but they can’t process it. So it’s basically useless – unless something magical happens. And I think their recruitment of big data companies is a form of magical thinking in the sense of: ‘Here is some magic technology that will make sense of all this.’”

Suchman also offers statistics that shed chilling light on Maven. According to analysis carried out on drone attacks in Pakistan from 2003-13, fewer than 2% of people killed in this way are confirmable as “high value” targets presenting a clear threat to the United States. In the region of 20% are held to be non-combatants, leaving more than 75% unknown. Even if these figures were out by a factor of two – or three, or four – they would give any reasonable person pause.

“So here we have this very crude technology of identification and what Project Maven proposes to do is automate that. At which point it becomes even less accountable and open to questioning. It’s a really bad idea.”

Suchman’s colleague Lilly Irani, at the University of California, San Diego, reminds us that information travels around an algorithmic system at the speed of light, free of human oversight. Technical discussions are often used as a smokescreen to avoid responsibility, she suggests.

“When we talk about algorithms, sometimes what we’re talking about is bureaucracy. The choices algorithm designers and policy experts make are presented as objective, where in the past someone would have had to take responsibility for them. Tech companies say they’re only improving accuracy with Maven – ie the right people will be killed rather than the wrong ones – and in saying that, the political assumption that those people on the other side of the world are more killable, and that the US military gets to define what suspicion looks like, go unchallenged. So technology questions are being used to close off some things that are actually political questions. The choice to use algorithms to automate certain kinds of decisions is political too.”

The legal conventions of modern warfare, imperfect as they might be, assume human accountability for decisions taken. At the very least, algorithmic warfare muddies the water in ways we may grow to regret. A group of government experts is debating the issue at the UN convention on certain conventional weapons (CCW) meeting in Geneva this week.

Searching for a solution

Solutions exist or can be found for most of the problems described here, but not without incentivizing big tech to place the health of society on a par with their bottom lines. More serious in the long term is growing conjecture that current programming methods are no longer fit for purpose given the size, complexity and interdependency of the algorithmic systems we increasingly rely on. One solution, employed by the Federal Aviation Administration in relation to commercial aviation, is to log and assess the content of all programs and subsequent updates to such a level of detail that algorithmic interactions are well understood in advance – but this is impractical on a large scale. Portions of the aerospace industry employ a relatively new approach called model-based programming, in which machines do most of the coding work and are able to test as they go.

Model-based programming may not be the panacea some hope for, however. Not only does it push humans yet further from the process, but Johnson, the physicist, conducted a study for the Department of Defense that found “extreme behaviors that couldn’t be deduced from the code itself” even in large, complex systems built using this technique. Much energy is being directed at finding ways to trace unexpected algorithmic behavior back to the specific lines of code that caused it. No one knows if a solution (or solutions) will be found, but none are likely to work where aggressive algos are designed to clash and/or adapt.

As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”. More practically, Spafford, the software security expert, advises making tech companies responsible for the actions of their products, whether specific lines of rogue code – or proof of negligence in relation to them – can be identified or not. He notes that the venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work. Johnson, for his part, considers our algorithmic discomfort to be at least partly conceptual; growing pains in a new realm of human experience. He laughs in noting that when he and I last spoke about this stuff a few short years ago, my questions were niche concerns, restricted to a few people who pored over the stock market in unseemly detail.

“And now, here we are – it’s even affecting elections. I mean, what the heck is going on? I think the deep scientific thing is that software engineers are trained to write programs to do things that optimize – and with good reason, because you’re often optimizing in relation to things like the weight distribution in a plane, or a most fuel-efficient speed: in the usual, anticipated circumstances optimizing makes sense. But in unusual circumstances it doesn’t, and we need to ask: ‘What’s the worst thing that could happen in this algorithm once it starts interacting with others?’ The problem is we don’t even have a word for this concept, much less a science to study it.”

He pauses for a moment, trying to wrap his brain around the problem.

“The thing is, optimizing is all about either maximizing or minimizing something, which in computer terms are the same. So what is the opposite of an optimization, ie the least optimal case, and how do we identify and measure it? The question we need to ask, which we never do, is: ‘What’s the most extreme possible behavior in a system I thought I was optimizing?’”
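
His observation that maximizing and minimizing are the same computation can be made concrete: any routine that minimizes f(x) will maximize it if handed -f(x), and the "least optimal case" he asks about is the same search with the sign flipped. A toy sketch (the objective function is invented):

    # Maximizing f is minimizing -f; the "least optimal case" is the same
    # search with the objective negated. Invented objective, coarse grid search.
    def f(x):
        return -(x - 2) ** 2 + 5

    grid = [i / 100 for i in range(-1000, 1001)]    # x in [-10, 10]

    best = max(grid, key=f)     # ordinary optimization
    worst = min(grid, key=f)    # the "opposite": the most extreme bad case

    print(f"best:  x={best:.2f}, f={f(best):.2f}")      # near x=2, f=5
    print(f"worst: x={worst:.2f}, f={f(worst):.2f}")    # at the domain's edge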

Another brief silence ends with a hint of surprise in his voice.

“Basically, we need a new science,” he says....

Franken-algorithms: The Deadly Consequences of Unpredictable Code

The self-driving Uber car that hit and killed a woman walking her bike across a street wasn’t designed to detect “jaywalking pedestrians.”

That's according to an official dossier published by the US National Transportation Safety Board (NTSB) on Tuesday.

The March 2018 collision was the first recorded pedestrian death caused by a fully autonomous vehicle. On-board video footage showed the victim, 49-year-old Elaine Herzberg, pushing her bike at night across a road in Tempe, Arizona, moments before she was struck by the AI-powered SUV at 39 mph.

Now, an investigation by the NTSB into the crash has pinpointed a likely major contributing factor: the code couldn't recognize her as a pedestrian, because she was not at an obvious designated crossing. Rather than correctly anticipating her movements as a person moving across the road, it ended up running right into her.

“The system design did not include a consideration for jaywalking pedestrians,” the watchdog stated [PDF] in its write-up. “Instead, the system had initially classified her as an 'other' object which are not assigned goals.”

The computer-vision systems in self-driving cars are trained to identify things, such as other vehicles, trees, sign posts, bicycles, and so on, and make decisions on what to do next using that information. It appears Uber’s software wasn’t able to identify Herzberg since there was no classification label for a person not using a proper crossing point, and it wasn't able to make the right decisions.
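The failure mode being described can be sketched in a few lines of code. The class names, numbers, and one-second horizon below are illustrative assumptions rather than Uber's actual software; the point is simply that an object classified as "other" is assigned no goal, so the planner projects no movement for it:

```python
# A simplified, hypothetical sketch of the NTSB's finding: objects labeled
# "other" get no goal, so their future path is assumed static. Illustrative
# only; these names and numbers are not from Uber's system.
from dataclasses import dataclass

@dataclass
class Track:
    label: str           # e.g. "vehicle", "bicycle", "other"
    position_m: float    # lateral distance from the car's lane, in metres
    velocity_mps: float  # measured lateral speed (negative = toward the lane)

def predict_lateral_position(track: Track, horizon_s: float = 1.0) -> float:
    # Only labels with modeled goals get a motion prediction.
    if track.label in ("vehicle", "bicycle", "pedestrian"):
        return track.position_m + track.velocity_mps * horizon_s
    return track.position_m  # "other": not assigned goals, assumed static

# A person pushing a bike mid-block, moving toward the lane at 1.4 m/s:
jaywalker = Track(label="other", position_m=4.0, velocity_mps=-1.4)
print(predict_lateral_position(jaywalker))  # 4.0 -> still "safely" out of the lane
```

Had the same track carried a pedestrian label with a motion model attached, the projection would have placed her in the lane seconds before impact.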

Countdown to impact

Some 5.6 seconds before hitting her, the car's radar detected Herzberg, and at 5.2 seconds, she was picked out by the Lidar. However, the machine-learning system more or less ignored her, figuring her to be a non-moving object not in the vehicle's way.

As the robo-vehicle drew nearer, it categorized her variously as a vehicle, a bike, or some other thing that was not, or was only partially, in its way.

Just 1.2 seconds before hitting her, it finally identified her as a bicycle that was clearly in its path of travel, by which point it was far too late to change course.

Crucially, the software was "unable to correctly predict the path" of Herzberg, the report noted. Perhaps if it had correctly identified her early on as a person crossing the street, it could have dramatically slowed down or otherwise tried to avoid her. But it did not anticipate the collision because, for the most part, it considered her to be some unknown object not in the way.

Also, don't forget: the SUV's emergency braking system was deliberately disabled because when it was switched on, the vehicle would act erratically, according to Uber. The software biz previously said “the vehicle operator is relied on to intervene and take action” in an emergency.

The self-driving car was fully autonomous at the time of the accident, though it had a human driver at the wheel. An internal camera caught the Uber worker looking down and away from the road moments before the accident, unaware of Herzberg’s presence until it was too late.

Below is a timetable, produced by the NTSB, detailing the car's decision-making and speed in the seconds before the accident:

NTSB table of events in Uber crash

The report comes just two weeks before the board is due to hold a public meeting on 19 November to “determine the probable cause of a crash involving a pedestrian and an Uber test vehicle.”

The safety board also released more than 40 documents totaling at least 430 pages, with various bits of supporting evidence and comprehensive accounts of things like the vehicle’s properties and the internal safety culture at Uber.

The massive data dump also revealed that the ride-hailing biz's self-driving cars were involved in 37 smashes between September 2016 and March 2018, prior to the deadly accident. In these 37 incidents, all of the robo-vehicles were driving in autonomous mode, and in 33 of these cases, other vehicles crashed into the self-driving cars.

'Tragedy'

On Tuesday evening, a spokesperson for Uber told us: “We regret the March 2018 crash involving one of our self-driving vehicles that took Elaine Herzberg’s life. In the wake of this tragedy, the team at Uber Advanced Technologies Group has adopted critical program improvements to further prioritize safety.

"We deeply value the thoroughness of the NTSB’s investigation into the crash and look forward to reviewing their recommendations once issued after the NTSB’s board meeting later this month.”

El Reg asked Uber how it planned to train its autonomous vehicles to detect jaywalkers in future, and the biz told us: "We conduct hundreds of tests in simulation and on our test track to help us improve the overall safety of our system. These tests include taking our software and having it go through scenarios that involve complex situations like jaywalking, hard braking, and unprotected turns." ®...

Remember the Uber self-driving car that killed a woman crossing the street? The AI had no clue about jaywalkers

RAFAELA VASQUEZ LIKED to work nights, alone, buffered from a world she had her reasons to distrust. One Sunday night in March 2018, Uber assigned her the Scottsdale loop. She drove a gray Volvo SUV, rigged up with cameras and lidar sensors, through the company’s garage, past the rows of identical cars, past a poster depicting a driver staring down at a cell phone that warned, “It Can Wait.” The clock ticked past 9:15, and Vasquez reached the route’s entry point. She flipped the Volvo into autonomous mode, and the car navigated itself through a blur of suburban Arizona, past auto dealers and Zorba’s Adult Shop and the check-cashing place and McDonald’s. Then it jagged a short stint through Tempe to start the circuit again. It was a route Vasquez had cruised in autonomy some 70 times before.

As she was finishing her second loop, the Volvo blazed across a bridge strung with bistro lights above Tempe Town Lake. Neon signs on glass office buildings were reflected in the water, displaying the area’s tech hub ambitions—Zenefits, NortonLifeLock, Silicon Valley Bank. Beyond the bridge, the car navigated a soft bend into the shadows under a freeway overpass. At 9:58 pm, it glided to a forlorn stretch of road between a landscaped median and a patch of desert scruff. Four signs in the median warned people not to jaywalk there, directing them to a crosswalk 380 feet away.

The Uber driving system—which had been in full control of the car for 19 minutes at that point—registered a vehicle ahead that was 5.6 seconds away, but it delivered no alert to Vasquez. Then the computer nixed its initial assessment; it didn’t know what the object was. Then it switched the classification back to a vehicle, then waffled between vehicle and “other.” At 2.6 seconds from the object, the system identified it as “bicycle.” At 1.5 seconds, it switched back to considering it “other.” Then back to “bicycle” again. The system generated a plan to try to steer around whatever it was, but decided it couldn’t. Then, at 0.2 seconds to impact, the car let out a sound to alert Vasquez that the vehicle was going to slow down. At two-hundredths of a second before impact, traveling at 39 mph, Vasquez grabbed the steering wheel, which wrested the car out of autonomy and into manual mode.

It was too late. The smashed bike scraped a 25-foot wake on the pavement. A person lay crumpled in the road.

Vasquez did what Uber had taught its employees in the test program to do in case of emergencies: She pulled the vehicle over and called 911. “A bicyclist, um, I, um, hit a bicyclist that was in the road,” she told the dispatcher, her voice tense. “They shot out in the street … They are injured, they need help, paramedics.”

“I know it’s pretty scary,” the dispatcher said in soothing tones. She told Vasquez to breathe. Within six minutes of the crash, cops started to arrive. Paramedics too. One cop scanned a flashlight over the person on the ground. A paramedic kneeled down and pumped the victim’s chest.

A couple of minutes later, an officer walked up to the Volvo, where Vasquez sat behind the wheel. He asked if she was OK. “Yeah, I’m just shaken up,” Vasquez said. “Is the person OK? Are they badly hurt?” Back by the figure who lay on the ground, a woman began wailing. Vasquez asked the officer, “Is that the person screaming?” He answered: “No no, that’s probably some people that they know.”

For the next two hours, Vasquez waited, doing what the police asked. Uber reps arrived. In the early minutes after the crash, one jogged up to Vasquez’s car, and an officer asked him to let the cops talk to her first. Eventually Vasquez moved to sit in a supervisor’s car. She asked for updates about the victim. And she learned that the person with the bicycle had died.

After midnight, Officer Kyle Loehr approached Vasquez and asked if she was the driver in the crash. “I’m the operator,” she said. He asked her to get out of the car, and, body camera running, explained that he was going to run her through some sobriety tests: “This protects you, protects the city, protects the company,” he said. “It’s just literally a box we need to check.” Vasquez tracked Loehr’s green flashlight with her eyes, then his finger, then looked up to the sky and told him when she thought 30 seconds had passed. Sober. About 10 minutes later, Loehr came back with more questions. His voice was congenial and chipper. “I’m trying to just lighten the mood a little bit,” he said at one point, “because I know it’s stressful, and it’s crappy.” He told her he had to read her Miranda rights to her. That’s what happens, he added, when someone is no longer allowed to leave a scene. Gently, he went on: “Let me walk you through what happens with any of these cases when there’s a fatality.”

“Oh God,” Vasquez whispered. “That word.”

Multiple blunt-force injuries. That’s what the medical examiner would put down as Elaine Herzberg’s cause of death. Manner of death: accident. Herzberg had lived in Arizona her whole life and had resorted to camping in the streets near Tempe. The 49-year-old often carried a radio playing the local rock station; she collected frog mementos and colored to relax. She had struggled with addiction. That March night, she became the world’s first pedestrian killed by a self-driving car.

Herzberg’s death is the kind of tragedy the autonomous driving industry claims it can prevent. In the US, car accidents kill more than 38,000 people a year, more than 90 percent of them at least in part due to human error. By taking sleepiness, inattention, drunkenness, and rage out of the equation and replacing them with vigilant, precise technology, self-driving cars promise to make the roads dramatically safer. But to reach that purported future, we must first weather the era we’re in now: when tech is a student driver. That means gangly fleets of sensor-bedecked cars sucking in data on millions of miles of public roads, learning to react to our flawed and improvisational ways. And inevitably, as experts have always warned, that means crashes.

Questions of fault when things go wrong have been settled over a century for human driving. But they are still largely the stuff of thought experiments for the cyborgs now roving our streets: vehicles controlled by a machine brain, programmed by human engineers, and usually overseen on the road by some other person behind the wheel. For years, researchers and self-driving advocates had anxiously prognosticated about how the public and the legal system would react to the first pedestrian death caused by a self-driving car.

The crash in Tempe ripped those musings into reality—forcing police, prosecutors, Uber, and Vasquez into roles both unwanted and unprecedented in a matter of seconds. At the scene that night, Vasquez stood at the center of a tragedy and a conundrum. She couldn’t yet fathom the part she was about to play in sorting out where the duties of companies and states and engineers end, and the mandate of the person inside the car begins.

“I’m sick over what happened,” Vasquez confided to the police as her mind spun in the hours after the crash. She said she felt awful for the victim’s family. She also grieved the event in a different way—as a loyal foot soldier of the self-driving revolution. “Oh God, this is going to be a setback for the whole industry,” Vasquez told Loehr. “Which is not what I want.”

At the time, Vasquez was an Uber defender. She had come a long way to this job. Over the previous few years, she’d acquired a dizzying track record of doing hidden work for highly visible companies—moderating grisly posts on Facebook, she says; tweeting about Dancing With the Stars from ABC’s Twitter; policing social media for Wingstop and Walmart. But her position with Uber’s Advanced Technologies Group had offered new stability, and after years of turmoil as a transgender woman navigating a hostile society, she was careful not to jeopardize it. Vasquez had even removed the box braids of colorful yarn that had defined her look since she was young. At a new job, she had come to think, “the less attention I bring to myself, the better.” During her nine months of work as an operator, the viselike grip of everything she’d endured as a child and teen and adult had slackened just a bit. As she trudged into her forties, Vasquez had felt her life, finally, relaxing into a kind of equilibrium.

Now, as she and Loehr sat in a victim services van near the Tempe bridge after midnight, grappling with Herzberg’s death, the vise was tightening again. She found herself asking, “Do I need a lawyer?”

ARIZONA WELCOMED UBER’S self-driving program to Tempe with feisty, high-profile panache, after a long courtship. Business-boosting governor Doug Ducey, the former CEO of Cold Stone Creamery, took office in 2015, promising to yank the state out of its post-recession doldrums. He wanted to lure Silicon Valley companies over the Arizona border, pitching his state as the anti-California with trollish flamboyance. He axed restrictions on Theranos blood testing, welcomed an Apple data center, ended local bans on Airbnb, and pressured officials to let Ubers and Lyfts roll up to Phoenix’s largest airport.

Along the way, Ducey’s office and Uber entered a mutual embrace. At one point, a Ducey staffer emailed Uber and referred to a 2015 Arizona law that regulated ride-sharing as “your bill.” At times, Uber suggested tweets for the governor’s office account and talking points for press events. In June 2015, Uber opened a customer service center and pledged to hire 300 Arizonans. And in August, Ducey signed an exuberant executive order allowing companies to test self-driving vehicles on public roads.

All of that was good for Uber. At the time, its CEO, Travis Kalanick, saw the development of robotaxis as an existential battle, particularly with Google. The company had to at least tie for first place in the autonomy race, or else, he said in an interview, “Uber is no longer a thing.” Human drivers could never compete on cost. But by the beginning of 2015, Kalanick’s company was way behind. So Uber poached 40 experts from Carnegie Mellon’s robotics department to create something called the Uber Advanced Technologies Group and tasked it with turning the company into a self-driving force. Uber shoveled hundreds of millions of dollars into the self-driving unit, which would, over the next three years, grow to more than 1,000 employees across five cities.

In 2016, with Google’s Waymo and GM’s Cruise already piloting prototype self-driving cars around the Phoenix area, Ducey spotted yet another way to remake his state as the nation’s self-driving test capital. That December, California revoked the registrations on Uber’s test cars after the company refused to get a testing permit. Within hours Ducey tweeted, “California may not want you; but AZ does!” The next day, Uber’s Volvos were loaded onto semitrailers bound for Arizona. At the time, federal regulators were standing back, suggesting that companies voluntarily report their safety practices, and recommending states do the same.

In a secretive industry, miles driven in autonomous mode were a key signal of a program’s vitality. So throughout 2017, as Arizona became the largest site for Uber’s testing, employees recall company leaders demanding that the operators “crush miles”—hundreds of thousands, then millions, of them. “These were pretty purposely outrageous goals,” says Jonathan Barentine, a former employee who trained the human backup operators. “We were trying to ramp up really quickly, which at the time was what Uber was good at—or able to do.” By late 2017, Uber boasted that it was racking up 84,000 miles a week.

Soon Uber was running 40 cars across thousands of Arizona miles on up to eight shifts a day, with human pilots rescuing the fledgling robots when they went awry, and regulators barely watching. When Arizona welcomed Uber’s audacious program, Bryant Walker Smith, a leading scholar of self-driving policy, told the San Jose Mercury News that Ducey would symbolically “own” the company’s self-driving future—whether that be success or a high-profile crash. In California, Smith had recommended to the state’s officials that they revoke Uber’s registration; as for Arizona’s quick embrace of the same program, he warned, “There are risks to that level of permissiveness.”

OVER THE COURSE of 2017, the Advanced Technologies Group brought on hundreds of test operators in Arizona. Jonathan Barentine, a friendly and precocious program manager who was just a few years out of studying liberal arts at Cornell, was posted in Tempe to oversee training for the new recruits. He remembers that Vasquez, hired that summer, took the training so seriously she appeared stressed. “It seemed like a bit of a big break for her,” he says. “She really cared about making sure that she could do her job.”

For many of the new operators, coming off work on cleaning crews or as delivery drivers or regular Uber drivers, walking into the Advanced Technologies Group’s Tempe headquarters was like entering a Silicon Valley Shangri-la. The facility came with an alluring nickname—Ghost Town—from its days when few employees reported to the sprawling office-park building. The name had stuck even as exploding ranks of workers dropped in for car assignments, free catered meals, and a break room packed with Red Bull and snacks. The operators earned full benefits and about $20 to $24 an hour—solidly middle-class wages in the area. Vasquez’s coworkers were buying houses and booking vacations. Workers marveled at the latitude and trust of the company’s culture: Everyone gave feedback at regular debriefs, and managers let workers take breaks as needed to stay sharp on the road. Some stayed after hours to play video games. And they worked at the vanguard of tech. Flavio Beltran, an operator in the Tempe program, says, “I felt like, wow, I’m a part of history. I felt a very huge sense of pride.”

Vasquez, for her part, was fairly subdued. She says the mix of solitary work with a few interactions suited her. While she started to count a couple of colleagues as friends, she primarily seemed engaged with the work. A supervisor says Vasquez would walk up to her manager’s desk to report a new tidbit about the cars or make suggestions. She got a bonus for her performance in late 2017.

In the first months of testing on the roads, two people would work together in the car. Ideally, the person in the left seat—the driver’s side—called out things like obstacles and traffic signs: Do we see the bicyclist ahead? The pedestrian on the left? This stop sign? The person in the right seat would confirm on a laptop whether the system detected it: Check. Check. Check. If there was a hiccup, the person in the driver’s seat could take control of the car. The other person would write up the issue for the company to review.

In the fall of 2017, just a few months after Kalanick was ousted as CEO, Uber announced that it was changing the plan. Now there’d be just one operator in each car. Some heard this was because the tech was getting better. But with the self-driving unit chewing through hundreds of millions of dollars a year, others at the Advanced Technologies Group heard Uber wanted to stretch labor costs across more miles. (Uber says cost was not a factor in its decision.)

Barentine lurched to retrain the workers to manage the cars alone. Typically, he says, a solo driver would be used only to test more mature versions of the software, in part to minimize the number of times the human had to take over from the vehicle. Now feedback on the vehicle’s performance en route was to be entered via some buttons on a tablet mounted on the dashboard. A few operators told me they had to get used to handling the car alone, for hours, with no conversation mate to spice up the repetitive loops. Combined with the sheer number of miles they were racking up, the change also worried Barentine. “All my colleagues in learning development were very uneasy,” he says.

Without a second set of eyes in the car for long stretches in autonomous mode, the workers also found it harder to resist the forbidden lure of using their phones.

On the very first day that he was in the car alone, Adam Caplinger, an operator in Pittsburgh, where Uber was also testing self-driving vehicles, snapped a photo at a red light. He self-reported the transgression. Managers later showed him video from the car’s dashcam. As the car kept driving, he’d continued typing on his phone, a moment that Caplinger hadn’t even remembered. “I felt sick in my stomach,” he says. “My eyes did go to my phone a lot more than I realized.” Management told him they had to set an example and fired him.

Even the guy who designed that “It Can Wait” poster—the one that hung around Ghost Town reminding operators not to pick up their phones—ran afoul of the rule. In early 2018, after he’d logged thousands of autonomous miles, Flavio Beltran spotted a plane’s contrail—and snapped a photo, just as an operator in another car passed, looking at him. “I was like, ‘Aw man, fuck,’” Beltran says.

Management urged operators to report coworkers who broke the rules. (“Ninety-nine percent of the team wanted the program to continue and were trying to preserve it,” a supervisor says.) Tempe managers also did occasional retroactive spot checks, pulling the dashcam footage of randomly selected vehicles. But with busy schedules and the dramatic ramp-up of miles, the checks were infrequent. (Also, Barentine says, it seemed like Tempe management’s regular checks, ride-alongs, and improvement plans for low performers fell away.) Vasquez’s supervisor later told investigators that he never reviewed videos of her on the job. The company didn’t check drivers in real time either, another supervisor says: “We didn’t want the operators thinking that we were just spying on them while they are trying to work.” Mostly, he added, they trusted the operators to police themselves.

But whoever does the policing, whether a supervisor or an operator, faces a Sisyphean battle against a well-documented phenomenon: something called automation complacency. When you automate any part of a task, the human overseer starts to trust that the machine has it handled and stops paying attention. Numerous industries have struggled to find ways to keep workers attentive in the face of this fact. In 2013, Google started its own self-driving pilot program, using employees to test cars on their commute. Told to watch the road and be ready to take over in case of emergency, the Googlers instead typed on their phones, curled their eyelashes, and slept while hurtling down the highway. Google ended the experiment in a matter of weeks, deciding that it must take humans completely out of the loop—only full automation would do. More recently, Tesla drivers using their vehicle’s Autopilot feature have been spotted sleeping while riding on highways and have been involved in a number of fatal crashes, including one in which a California driver had a strategy game active on his phone.

At Uber, operators say that staying focused on the job was easier in the early, “wild bronco” days, as one Pittsburgh worker put it, when the cars’ antics were frequent and dramatic. But with only one person in the car, and the machines getting better at navigating, it was easier to zone out. Between April 2017 and February 2018, according to records Uber later gave investigators, the company caught 18 operators breaking the phone policy. Uber gave nine of them additional training and fired the other nine, including Beltran.

“I understood why. That was our one major rule,” Beltran says. “I was devastated. It was one of the best jobs I ever had.” But both he and Caplinger told me their slipups were in part due to company policy: They would never have shot the darn pictures, they say, had a second person still been in the car.

On March 13, 2018—five days before the crash—Robbie Miller, an operations manager at Uber’s self-driving-truck division, sent an email to company executives and lawyers. In the message, later published by The Information, Miller complained that some drivers in the car division seemed poorly trained, damaging cars nearly every other day in February. He urged the company to put a second operator back in each car. He also wrote that cutting the fleet size dramatically “would significantly reduce ATG’s likelihood of being involved in an accident.”

Five days after Miller hit send, Vasquez pulled out of the Ghost Town garage to travel the Scottsdale loop for her 72nd and 73rd—and final—time.

In her 39 minutes on the road that night, the car asked her to take over just once, for a few seconds.

One former Uber employee from Pittsburgh—who worked as a triage analyst, looking over incidents operators had flagged on the road—says he was baffled by the sheer number of loops the company racked up in its “crush miles” era. When the crash happened, he says, a friend from work grimly texted him. He recalls it reading, “It finally happened. We finally killed someone.”

“I CAN’T GIVE legal advice,” Officer Loehr told Vasquez, sitting in the victim services van after she asked whether she might need a lawyer. Authorities would reconstruct the crash, he explained, and that would determine if Vasquez had been at all negligent. “There’s a hypothetical possibility that it could go criminal,” he told her. “I don’t foresee it going that way.”

Even after Vasquez had listened to Loehr clip through her Miranda rights, heard him say that anything she said could be used against her in court, she kept talking: about her job and how the car had been working fine, about how she only saw Herzberg “right at impact.” She spoke as if she was comforted that someone was being kind and wanted to listen. He urged her to call a crisis response number for mental health services. “Don’t beat yourself up about it. What you went through is the definition of trauma.” As he wrapped up the interview, Loehr said, “You should breathe. You’re OK. Collisions happen.”

But the Tempe cops knew this wasn’t just another collision. And so did Uber: Immediately after the accident, the company grounded its self-driving car fleet across all its testing sites.

“You know as well as I know, this is going to be an international story,” an officer told a huddle of Uber reps at the scene. The police body cams were running, he said, and everything would be done “out in the open.” The car and all its recordings were now evidence; any attempt to alter them, he warned, would be a crime. On a more collegial note, the officer added that he needed Uber to be a partner in sleuthing. “We’re going to be working together throughout this whole process from now, probably for months.”

In the early morning hours, Vasquez retreated to Uber headquarters to calm down, and eventually drove home. The Volvo was towed to a police facility, and the cops nabbed warrants for the car’s data. Before dawn, they had taken custody of the SanDisk memory card from the camera mounted below the rearview mirror—the one that recorded both the car’s human pilot and a view of the road ahead.

Uber employees helped the cops find the right footage, which would go on to play a key role in the investigation: video of Vasquez in the driver’s seat as the car navigated the route; then of Vasquez gazing down toward her right knee, toward the lower console. Her glances downward averaged 2.56 seconds, but on one early loop, on the same stretch of road where the crash would take place, she looked down for more than 26 seconds. At times, the investigators thought she seemed to smirk. In the seconds before the car hit Herzberg, Vasquez looked down for about five seconds. Just before impact, she looked up, and gasped.

The media descended on the story the next day. Right away, experts were quoted lambasting Arizona’s lax regulatory environment, calling for a national moratorium on testing, and saying that fatalities are inevitable when developing such a technology.

Initially, Vasquez says, she was reassured by the police’s public stance. Tempe’s then police chief, Sylvia Moir, told the San Francisco Chronicle, “It’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or driven) based on how she came from the shadows right into the roadway.” Uber, she said, “would likely not be at fault,” though she wouldn’t rule out charges for the human pilot.

After that interview, Moir told me, emails that pulsed with “excruciating rage” deluged her inbox, accusing Moir of complicity in Tempe’s self-driving experiments and of blaming Herzberg for her own death. People were angry and wanted accountability. As the hours ticked by, reporters started digging up as many details as they could about Vasquez—including information about an 18-year-old felony for which she had served just under four years in prison.

By the end of the day, a search warrant had been issued for any cell phone Vasquez had with her in the Volvo “to determine if Rafaela was distracted.” Maybe that would show what she was so interested in down by her knee. The warrant also listed the crime now under investigation: vehicular manslaughter.

TWO NIGHTS AFTER the crash, a trio of police gathered outside room 227 at a Motel 6 in Tucson. Vasquez had checked in because, she says, reporters were thronging her apartment. The first days had set her reeling. “I knew everything happened; I just couldn’t believe it was happening. I was in shock.” Now as she greeted the cops, she seemed calm but slightly on edge; her attorney didn’t want her answering any questions, she told them. They were there to bag her phones into evidence. She initially told the officers that she’d only had her work phone with her in the car during the crash, but eventually handed over two LG phones—the one she used for work, with a black case, she explained to them, and her personal one, in a metallic case.

The next morning, the data that police extracted showed no calls made or texts sent in the minutes before the accident. Then, according to police reports, the cops homed in on the apps. Were videos playing at the time of the crash? Search warrants went to Netflix, Hulu, and YouTube.

The Tempe police were also weighing whether to make public the Volvo’s dashcam footage of the moments leading up to the crash. The Maricopa County attorney, Bill Montgomery, told them that releasing the video, which was in police custody at that point, could jeopardize their suspect’s right to fair legal proceedings. But Moir says the police were under “considerable” pressure from the public to do so, and they wanted to show there was nothing to hide; so the police tweeted the footage. Suddenly the world could see both Vasquez and Herzberg in the seconds before impact. Joe Guy, one operator in Tempe, gathered with others who’d come into Ghost Town, and they watched the video of Vasquez. “Most of us,” he says, “we went, ‘What the fuck was she looking at?’”

As the investigation ramped up, half a dozen Advanced Technologies Group personnel from other offices arrived in Tempe. At the police garage, cops stood by while the company downloaded the impounded car’s data so it could analyze what the system had done that night.

Three days after the crash, the visiting Uber leadership gathered at Ghost Town with Tempe police and federal investigators from the National Highway Traffic Safety Administration and the National Transportation Safety Board—the premier federal investigatory body for crashes. Because the software was proprietary, former NTSB chair Robert Sumwalt explained to me, everyone needed Uber to share its findings.

According to a police report of the meeting, Uber reps explained to the group that the company had overridden Volvo’s built-in automatic braking feature. Uber would later tell investigators this was because it interfered with the company’s own systems. The reps also presented their preliminary findings: While Uber’s own driving system recognized Herzberg, it didn’t do anything to avoid hitting her. That was Vasquez’s job, they said. She hadn’t taken the car out of autonomy until just before the moment of impact.

Vasquez wasn’t there to hear Uber’s assertion, but pretty quickly, she says, her supervisors’ interactions with her went from consoling to unnerving. One day, Vasquez says, she was told not to show up for the company’s movie night. “That’s when I really started getting nervous,” she says. Vasquez had asked her employer to pay for a criminal defense attorney, and Uber had agreed. Now her contact with fellow employees and work friends came to a halt.

Adding to the uncertainty, a week after the accident, Governor Ducey wrote to Uber CEO Dara Khosrowshahi with a newly stern tone: “My top priority is public safety,” he said. He found the dashcam footage “to be disturbing and alarming.” He was, he wrote, suspending Uber’s ability to test its cars in the state.

TEN DAYS AFTER the accident, Uber agreed to pay out a settlement for Elaine Herzberg’s husband and her daughter Christine Wood, who says it was in the low millions. Wood too had no home and had been camping near the crash site.

Wood says that Herzberg, who’d served stints in county jail on drug charges, had tried to shield her children from her struggles with controlled substances. “She wasn’t proud of it, and she did what she could to make sure me and my brother stayed away from it,” Wood says. She says she and her mom had often jaywalked where the accident happened, sometimes to charge their phones at an electrical plug in the median. (The city has since filled in the median’s footpaths and added more no-crossing signs to the area.) When she died, Herzberg had methamphetamine in her blood.

With the settlement money, Wood and Herzberg’s husband bought a ranch house in Mesa. “It got me off the streets, which is what she would have wanted me to do,” she says. Months later, Uber also settled with Herzberg’s parents and son, says Herzberg’s mom, Sharon Daly. “I didn’t want to cash the damn check because it would make it final,” Daly told me over the phone, starting to weep. “And I wanted her to come back.”

While Uber stanched its civil liability, investigators kept pushing for new details. By mid-April, Vasquez was sitting for three hours—with her Uber-paid lawyer and Uber’s own attorney—talking to investigators from the National Transportation Safety Board. According to the agency’s record of the talk, she told them that, at work that night, she had stowed her personal phone in her purse behind her. Her work phone was on the passenger seat. She said she had been monitoring the Uber tablet that was mounted on the center console, then looked up and saw Herzberg.

Then Tempe police started to receive information from the warrants to the streaming apps. YouTube and Netflix found no activity in the hours around the collision. But in late May, Hulu’s legal team reported that after 9:16 pm, Vasquez’s personal phone began streaming the talent show The Voice. The episode stopped at 9:59. The crash happened at 9:58.

About a month later, the police released hundreds of pages of investigative documents to the press—including the seemingly damning report from Hulu. The police analysis found that, if she’d been looking at the road, Vasquez could have stopped more than 42 feet in front of Herzberg. They deemed the crash “entirely avoidable.”
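A rough back-of-the-envelope check makes the scale of that conclusion plausible. The reaction time and deceleration below are generic textbook assumptions, not figures from the Tempe reconstruction:

```python
# Back-of-the-envelope stopping distance at 39 mph; the reaction time and
# deceleration are generic assumptions, not the police analysis's figures.
MPH_TO_MPS = 0.44704
FT_PER_M = 3.28084

v = 39 * MPH_TO_MPS           # ~17.4 m/s
t_react = 1.0                 # assumed perception-reaction time, seconds
a_brake = 7.0                 # assumed hard braking, m/s^2, dry asphalt

stopping_m = v * t_react + v**2 / (2 * a_brake)
print(f"~{stopping_m:.0f} m (~{stopping_m * FT_PER_M:.0f} ft) to a full stop")
```

That works out to roughly 130 feet. The car's radar first registered Herzberg about 5.6 seconds, or roughly 320 feet, before impact, so under these assumptions an attentive driver had distance to spare, in line with the margin the police analysis cited.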

And like that, the media focus shifted from Uber to Vasquez, sometimes in cartoonishly villainous terms. (A Daily Mail headline: “Convicted Felon Behind the Wheel of Uber Self-Driving Car Was Streaming The Voice on Her Phone and Laughing Before Crash Which Killed a Pedestrian in Arizona.”) Vasquez set a Google alert on her name and then couldn’t stop reading every comment, including insults about her looks and being trans. “I spiraled,” she recalls. “Now I’m hearing things that I haven’t heard since high school.” Offended and hurt, she wondered what her gender identity had to do with the crash, and she shut down her social media accounts.

For months, Vasquez waited to see what the Maricopa County attorney would do. A charge of vehicular manslaughter could mean years in prison—and the return of a familiar pattern in her life, a pattern of momentum turning against her.

RAFAELA VASQUEZ WAS born in suburban Maryland. Her mother died of a heart attack when she was just 3, so she was raised by her dad. He was born in Puerto Rico but moved to New York City in his early teens. He was hired at IBM and worked his way up to become a manager. The family followed his job, shifting through Georgia, Maryland, Arizona, and Virginia.

In the 1980s, when she was in grade school and junior high, her father brought home his company’s debut PCs, seeding her love of gadgets as she spent hours engrossed in Pong. But her dad was a strict Catholic and a former Marine sergeant who served in Vietnam, and he bristled at his child’s femininity. He tried any number of interventions, Vasquez says, “to pray the gay and military the gay away”—Catholic grade school, a soccer team, a military school for fifth grade called Linton Hall School. She was bullied all the way. “I just didn’t know what I was, I didn’t have anybody to talk to,” she says. She took solace in visiting her Aunt Janice, from her mom’s family of Black Southern Baptists. “Even though I know she didn’t approve of me, she never treated me any different and still loved me.”

Vasquez says she was sexually abused as a child—by two priests, a coach, and a therapist. “I thought it was me and there was something wrong with me, because every time we moved, I thought, ‘OK, it’s not gonna happen.’ But it did; I was always very alone. I never had friends … I looked like the type of person that keeps a secret.” Vasquez says she first attempted suicide in third grade.

When she was in junior high, the family moved to Tucson and, she says, the sexual abuse finally stopped. But she still didn’t have a word for how she felt; she’d seen the “transsexuals” on tabloid TV, eroticized in a way Vasquez didn’t identify with. Gay didn’t seem to describe it either. She took refuge in AOL chatrooms, where she could talk to people who didn’t know her in real life. Then she found an electronic dance club in town called the Fineline, where she first met transgender friends.

In high school, Vasquez worked up to a full face of goth makeup—which also helped her conceal the bruises from getting beaten up by boys. Her hair was short in those days, but Vasquez stopped correcting people when they called her “she.” She also began taking Premarin estrogen pills she bought off her trans friends for $2 a pop. “I didn’t know that it was called transitioning. All I knew is that I felt better.”

After graduation, Vasquez floated through a series of jobs, community college courses, classes at the University of Arizona. In her mid-twenties she met a guy at a rave in Phoenix. Josh, who she considered her first boyfriend, was six years her junior. By mid-2000, he was also on probation for stealing a car, and Vasquez was on probation for falsifying an unemployment claim. At the time, she was managing a Blockbuster video store in Scottsdale. One morning, she and a coworker drove to the bank to deposit $2,783 from the store’s cash register into Blockbuster’s corporate account. Vasquez’s boyfriend rushed up to the car, pointing a handgun at them, according to a police report, and she handed over the cash. Yet a month later, police arrested Vasquez. Informants had told police that she had been in on the heist. In an interrogation, her boyfriend said the same.

While Vasquez flatly denied involvement to the police, her bail was set at $70,800. She couldn’t afford that, so she remained in the Maricopa County jail for five months, housed with male inmates. Vasquez says she was sexually assaulted by both inmates and guards, but other than telling her aunt about it, she didn’t officially report the abuse. “I’d never had to have anal stitching before, but I had it in jail.”

She pleaded guilty to attempted armed robbery, and the judge sentenced her to five years in prison. Her ex-boyfriend, who’d held the gun during the stickup and pleaded guilty to armed robbery—a more serious felony—was sentenced to four.

In prison, Vasquez was housed with men, wasn’t allowed to take hormones, and says she again was regularly sexually assaulted. While there, she penned a letter to her dad—“62 pages front and back”—explaining that her gender identity wasn’t going away. The letter helped begin to repair their frayed relationship, and he came to visit her regularly and tried to start calling her by her preferred name. In the final year of her sentence, she was transferred to a low-security prison yard where she was able to socialize with other transgender inmates. They taught her to mix commissary goods with Vaseline or hair grease for makeup: Atomic Fireballs for lip gloss, a golf pencil for eyeliner, Kool-Aid for eyeshadow. She says the inmates tasked her with brewing contraband hooch—out of water, sugar packets, bread, oranges, and Jolly Ranchers candy.

Shambling back to the Phoenix area in 2004, when she was 30, Vasquez soon moved in with a friend, drew disability checks for her languishing mental health, and dove into therapy. “Prison really messed me up,” she says. “And it took me a long time to recover.” She eventually sought out jobs that let her work from home—taking tech-support calls for Cricket Wireless and Dell. Through contractors, she was hired for a string of remote jobs tweeting live commentary for The Bachelorette and Dancing With the Stars and moderating flagged content on Facebook. She signed up as a volunteer beta tester, exchanging avid feedback for free Plantronics headsets and iPads. “People fear robots are going to take over the world and our jobs? I wanted them to,” she says. “I like robots.” For one job, she wore a shirt with a camera embedded in a button and posed as a prospective tenant to surveil workers at leasing offices.

When she was working consistently, she’d bring in more than $40,000 a year—enough to rent a house and support her rescue pit bulls, Sweetie and Romeo, and later Tyson. She found Tyson in a dumpster, when he was a puppy, sealed in a plastic bag. She related to pit bulls. “They’re so misunderstood,” she says. She knew what it was like to be judged by appearances, to have people be intimidated by her. “I think I have an RBF: resting bitch face,” she says. “I get asked, ‘Are you mad?’ No.” The dogs became her main companions, hunkering down with her at home as her reclusiveness veered into agoraphobia. When her Aunt Janice died, her emotional state plunged further. At times, she’d leave the house only for late-night grocery runs for kibble or moonlit dog walks. “If I wouldn’t have had dogs,” she says, “I would have just let myself waste away.”

In 2015, Vasquez decided she needed to force herself out among people. So she signed up to drive for Lyft and Uber. She answered truthfully during Uber’s onboarding: no felonies in the past seven years. And she felt OK picking up strangers a few nights a week. It was her car, after all, and she could stop working if she got too anxious, or she could call the cops if a drunk wouldn’t leave. Her agoraphobia began to ease as she chatted with the strangers who slid into the back seat.

After a couple of years, Vasquez spotted an ad for Uber’s self-driving unit. “I aced the test,” she says. In summer 2017, she flew to Pittsburgh for a weeklong boot camp. The self-driving Volvos were set up on a training track, and Vasquez learned to hover her hands around the wheel and her foot over the brake while the car drove itself. Touching either would take over control, which she had to do swiftly, as the trainers programmed the car to make mistakes. Recruits who erred were weeded out over the week; Vasquez made the cut and flew back to Tempe for more mentoring, working up to testing the cars on public roads. “It felt refreshing to me,” she says. “It felt like I was starting over again.”

IN THE WEEKS after Herzberg’s death, Uber’s Advanced Technologies Group held a number of all-hands meetings. The triage analyst remembers that CEO Eric Meyhofer had puffy eyes—“like he was not sleeping, and crying.” Meyhofer and other leaders said they were cooperating with the police and federal investigators. They told staff that they weren’t going to let the internal investigation turn into a blame game and would make no assumptions about Vasquez. Leadership told employees they could take time off or visit on-hand grief counselors.

With the cars off the road, the company also dove into its own technical soul-searching—doing a self-assessment about the crash and Uber’s safety practices and creating a panel of safety advisers, including a former administrator of the NHTSA.

The excavation of facts was unflattering: Uber told the NTSB that its tech had never identified Herzberg as a person. Nearly every time the system changed what it thought Herzberg was—a car, a bike, other—it started from scratch in calculating where the object might be headed, that is, across the road into the Volvo’s lane. Uber had programmed the car to delay hard braking for one second to allow the system to verify the emergency—and avoid false alarms—and for the human to take over. The system would brake hard only if it could entirely avoid the crash, otherwise it would slow down gradually and warn the operator. In other words, by the time it deemed it couldn’t entirely avoid Herzberg that night, the car didn’t slam on the brakes, which might have made the impact less severe. Volvo ran its own tests after the crash, it told the NTSB, and found that its automatic braking system, the one Uber overrode for its own system, would have prevented the crash in 17 out of 20 scenarios and would have reduced the speed of impact in the other three.
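Two of the design choices in that account, the tracking history discarded on each reclassification and hard braking gated on full avoidance after a one-second suppression window, can be sketched as pseudologic. This is a hedged reconstruction from the NTSB's description, with invented names, structure, and thresholds, not Uber's code:

```python
# A hedged reconstruction of two behaviors described by the NTSB; the names,
# structure, and thresholds are illustrative, not Uber's actual software.

ACTION_SUPPRESSION_S = 1.0   # delay to verify an emergency and avoid false alarms

class Tracker:
    def __init__(self):
        self.label = None
        self.history = []            # past positions, used to extrapolate a path

    def update(self, label: str, position: tuple):
        if label != self.label:      # reclassification ("car" -> "bike" -> "other")
            self.history = []        # ...discards the motion history, so the
            self.label = label       # "new" object appears to have no trajectory
        self.history.append(position)

def braking_decision(seconds_since_emergency: float, can_fully_avoid: bool) -> str:
    if seconds_since_emergency < ACTION_SUPPRESSION_S:
        return "suppress action; rely on the operator"
    if can_fully_avoid:
        return "hard brake"
    return "gradual slowdown + audible alert"

# The emergency was flagged about 1.2 s before impact. For the first second the
# system suppressed action; once the window lapsed (~0.2 s out), a 39 mph crash
# could no longer be fully avoided, so the car alerted instead of hard-braking:
print(braking_decision(0.5, can_fully_avoid=False))  # suppress action; rely on the operator
print(braking_decision(1.0, can_fully_avoid=False))  # gradual slowdown + audible alert
```

On that logic, the one maneuver that might still have softened the impact, braking hard too late to avoid it, was the one the system was designed never to perform.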

“Prior to this crash, I think there was a lot of recognition among industry that ‘There but for the grace of God go I. We’re trying to be responsible, but something could happen,’” says Bryant Walker Smith, the self-driving scholar. “When the crash happened, it turned from this ‘Oh, my goodness, this could happen to anybody’ to ‘Well, yeah, of course, it was Uber.’”

As more information was released, Uber staffers were becoming increasingly frustrated by the company’s leaders. “People were blunt about it being a massive fuckup, and there being moral culpability, and that the company needed to change,” Barentine says. The triage analyst wondered if he was “implicitly involved in this. Is this blood on my hands?”

“These are people who came to work here because of the promise of self-driving and this utopian future,” a Pittsburgh-based manager told me. “It was a pretty big body blow, that they felt like they contributed to something so severe.”

The company threw out its plan to put a driverless taxi into service by the end of 2018; the new target was 2022. “Everything was defined by that event,” the manager says of the crash. “It put us in a really stark view of what the car was actually capable of doing. It was nowhere near what the public perception was.”

One morning in late May, the nearly 300 benched Tempe employees were told to report to Ghost Town. When the supervisor arrived for the meeting, he saw that senior staff had flown in. “I was like, ‘Oooooohh crap.’” Austin Geidt, an operations head who would ring the stock exchange bell when Uber went public a year later, addressed the group: Arizona had rejected their proposals to stay. Everyone was laid off.

Employees received two months of pay under state law, and another two months in an Uber separation package with a non-disclosure agreement. (Uber says Vasquez received a severance package too, in 2018, but would not say how much.) After the announcement, people met with HR reps on hand, said goodbyes in shock. The supervisor recalls an opera-singing operator bellowing strains of “Ave Maria” to the dwindling ranks as a coda.

Now some of Vasquez’s fellow operators pointed the finger squarely at her. Many of the nine operators who talked to me accepted—and took pride in—their role in preventing crashes. I asked Beltran, who’d been fired for looking at his phone: Wasn’t his own lapse just a degree or two removed from Vasquez gazing at her phone for several seconds at a time? “No, no, no, no, no,” he told me. “That’s like going above and beyond not doing your job.”

That summer, the Advanced Technologies Group also laid off 100 operators in Pittsburgh and San Francisco and ended its self-driving-truck program. The hundreds of remaining staff would focus on cars in a new era that was hyper-focused on improving safety.

At some point, court documents show, a technical program manager phoned police detective Thomas Haubold, who was leading the Tempe investigation. In a 48-minute recorded conversation, the caller said he was worried Vasquez was going to take too much of the blame and that a larger problem would be obscured: that in its quest to get as many cars on the road as quickly as possible, Uber had ignored risks. He told Haubold not to trust Uber to be totally forthcoming. The company, the insider said, was “very clever about liability as opposed to being smart about responsibility.”

The call seemed to make little impact. A year after the crash, an Arizona prosecutor announced that the state would not criminally charge Uber in the fatality. The next month, the Advanced Technologies Group received a $1 billion investment from SoftBank, Denso, and Toyota, valuing the division at $7.25 billion, three weeks before Uber’s IPO.

All along, some employees were surprised that no leaders had been fired because of Herzberg’s death or in the disarray that followed. Now, with criminal charges off the table for Uber, Vasquez sat in legal purgatory alone.

STANDING IN LINE at Chipotle one day, Vasquez remembers hearing a voice: Is that the person who killed that lady? Vasquez made a beeline for the door. After photos of her face and the video of her in the Volvo circulated in the media, Vasquez tried to make herself invisible. She kept her hair straight to avoid drawing attention with the braids of multicolor yarn she used to love wearing. When she had to go to the grocery store, she would calm her nerves in the parking lot, then dash in—or simply pick things up curbside. When Covid hit, she felt utterly relieved to put on a mask.

Laid off alongside the other Tempe staff, Vasquez tried to stretch her savings—along with, eventually, disability payments for her mental health—as best she could. At one point, Vasquez applied for a job at Taco Bell to test her prospects. Try back after your legal issues have settled down, she says she was told. It’s bad publicity. Lying low from the media and anyone who might recognize her, she says she lived for months with her dogs in a string of Motel 6’s.

Vasquez eventually moved to Tucson, where she cared for her father, who was being treated for cancer. With dwindling money and limited space, she had to give up her dogs, another blow as isolation set in. “Before, I chose to be alone,” she says. “This time, I felt as if I was alone because nobody wanted to be around me.” Most distressingly, she stopped the therapy that had helped her for years, wanting to avoid any risk that her therapist would get subpoenaed. When friends asked her about what they’d read about the case, Vasquez would tell them, “It’s not true.” But she couldn’t elaborate.

In November 2019, a year and a half after the crash, the NTSB released its final report. The 78-page document didn’t carry legal heft; it was aimed at preventing future accidents. But it called out what it said was the probable cause of the crash: Vasquez was distracted by her “personal cell phone.” The report also called her distraction a “typical effect of automation complacency”—and said that Vasquez was far from the only contributor to the accident. The board’s findings also targeted federal and state agencies’ lax regulations and—the focus of much of the report—Uber’s “inadequate safety culture.” In an NTSB board meeting, vice chair Bruce Landsberg said, “There’s enough responsibility to go around here on all sides.” NTSB chair Robert Sumwalt focused on Uber: “The collision was the last link of a long chain of actions and decisions made by an organization that unfortunately did not make safety the top priority.” Still, NTSB investigator David Pereira praised Uber’s cooperation and its post-crash safety changes to ward off further incidents.

Shortly thereafter, the state of Arizona, fighting a negligence lawsuit from Herzberg’s daughter in civil court, also partially blamed Uber, alleging the company was “vicariously liable” for its employee. (The case was dismissed.)

As for criminal charges, nearly a year after that report, in August 2020, Maricopa County prosecutors brought their evidence against Vasquez before a grand jury. And that’s how Rafaela Vasquez—and only Rafaela Vasquez—was indicted for allegedly causing the first pedestrian death by a self-driving car.

THE CHARGE WAS negligent homicide with a dangerous instrument. She faced four to eight years in prison if convicted. When she heard the news, Vasquez curled into the fetal position on her father’s bedroom floor, struggling to breathe as he tried to calm her. “It was a nightmare,” she says. “I was just devastated, beyond devastated by it.”

....

‘I’m the Operator’: The Aftermath of a Self-Driving Tragedy