Incident 128: Tesla Sedan on Autopilot Reportedly Drove Over Dividing Curb in Washington, Resulting in Minor Vehicle Damage

Description: A Tesla sedan operating in Autopilot mode failed to center itself on the road and drove over a yellow dividing curb in Redmond, Washington, causing minor damage to the vehicle’s tires and rear suspension.
Alleged: Tesla developed and deployed an AI system, which harmed Eric Horvitz and Tesla drivers.

Suggested citation format

Horvitz, Eric. (2017-08-01) Incident Number 128. in McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 128
Report Count: 2
Incident Date: 2017-08-01
Editors: Sean McGregor, Khoa Lam

Incident Reports

One day last summer, Microsoft’s director of artificial intelligence research, Eric Horvitz, activated the Autopilot function of his Tesla sedan. The car steered itself down a curving road near Microsoft’s campus in Redmond, Washington, freeing his mind to better focus on a call with a nonprofit he had cofounded around the ethics and governance of AI. Then, he says, Tesla’s algorithms let him down.

“The car didn’t center itself exactly right,” Horvitz recalls. Both tires on the driver’s side of the vehicle nicked a raised yellow curb marking the center line, and shredded. Horvitz had to grab the wheel to pull his crippled car back into the lane. He was unharmed, but the vehicle left the scene on the back of a truck, with its rear suspension damaged. Its driver left affirmed in his belief that companies deploying AI must consider new ethical and safety challenges. Tesla says Autopilot is intended for use by a fully attentive driver.

At Microsoft, Horvitz helped establish an internal ethics board in 2016 to help the company navigate potentially tricky spots with its own AI technology. The group is cosponsored by Microsoft’s president and most senior lawyer, Brad Smith. It has prompted the company to refuse business from corporate customers, and to attach conditions to some deals limiting the use of its technology.

Horvitz declined to provide details of those incidents, saying only that they typically involved companies asking Microsoft to build custom AI projects. The group has also trained Microsoft sales teams on applications of AI the company is wary of. And it helped Microsoft improve a cloud service for analyzing faces that a research paper revealed was much less accurate for black women than white men. “It's been heartening to see the engagement by the company and how seriously the questions are being taken,” Horvitz says. He likens what’s happening at Microsoft to an earlier awakening about computer security—saying it too will change how every engineer works on technology.

Many people are now talking about the ethical challenges raised by AI, as the technology extends into more corners of life. French President Emmanuel Macron recently told WIRED that his national plan to boost AI development would consider setting “ethical and philosophical boundaries.” New research institutes, industry groups, and philanthropic programs have sprung up.

Microsoft is among the smaller number of companies building formal ethics processes. Even some companies racing to reap profits from AI have become worried about moving too quickly. “For the past few years I’ve been obsessed with making sure that everyone can use it a thousand times faster,” says Joaquin Candela, Facebook’s director of applied machine learning. But as more teams inside Facebook use the tools, “I started to become very conscious about our potential blind spots.”

At Facebook’s annual developer conference this month, data scientist Isabel Kloumann described a kind of automatic adviser for the company’s engineers called Fairness Flow. It measures how machine-learning software analyzing data performs on different categories—say men and women, or people in different countries—to help expose potential biases. Research has shown that machine-learning models can pick up or even amplify biases against certain groups, such as women or Mexicans, when trained on images or text collected online.

Kloumann’s first users were engineers creating a Facebook feature where businesses post recruitment ads. Fairness Flow’s feedback helped them choose job recommendation algorithms that worked better for different kinds of people, she says. She is now working on building Fairness Flow and similar tools into the machine-learning platform used company-wide. Some data scientists perform similar checks manually; making it easier should make the practice more widespread. “Let's make sure before launching these algorithms that they don't have a disparate impact on people,” Kloumann says. A Facebook spokesperson said the company has no plans for ethics boards or guidelines on AI ethics.
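
The per-group comparison described above can be sketched in a few lines of Python. The example below is a hypothetical illustration only: the labels, predictions, group memberships, and 5-point accuracy-gap threshold are invented for the example, and this is not Facebook’s Fairness Flow implementation.

    # Minimal sketch of a per-group performance check in the spirit of
    # Fairness Flow. All data, group labels, and the 5-point threshold are
    # invented for illustration; this is not Facebook's implementation.
    from collections import defaultdict

    def per_group_accuracy(y_true, y_pred, groups):
        """Return prediction accuracy broken down by group label."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for truth, pred, group in zip(y_true, y_pred, groups):
            total[group] += 1
            if truth == pred:
                correct[group] += 1
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical labels, predictions, and group memberships.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
    groups = ["group_a"] * 4 + ["group_b"] * 4

    scores = per_group_accuracy(y_true, y_pred, groups)
    gap = max(scores.values()) - min(scores.values())
    if gap > 0.05:  # flag gaps larger than 5 percentage points
        print(f"Potential disparate impact, per-group accuracy: {scores}")

In practice, a check like this would cover metrics beyond accuracy (false-positive rates, for example) and whatever demographic or geographic categories are relevant to the product, which is the kind of company-wide tooling the article describes.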

Google, another leader in AI research and deployment, has recently become a case study in what can happen when a company doesn’t seem to adequately consider the ethics of AI.

Last week, the company promised that it would require a new, hyperrealistic form of its voice assistant to identify itself as a bot when speaking with humans on the phone. The pledge came two days after CEO Sundar Pichai played impressive—and to some troubling—audio clips in which the experimental software made restaurant reservations with unsuspecting staff.

Google has had previous problems with ethically questionable algorithms. The company’s photo-organizing service is programmed not to tag photos with “monkey” or “chimp” after a 2015 incident in which images of black people were tagged with “gorilla.” Pichai is also fighting internal and external critics of a Pentagon AI contract, in which Google is helping create machine-learning software that can make sense of drone surveillance video. Thousands of employees have signed a letter protesting the project; top AI researchers at the company have tweeted their displeasure; and Gizmodo reported Monday that some employees have resigned.

A Google spokesperson said the company welcomed feedback on the automated-call software—known as Duplex—as it is refined into a product, and that Google is engaging in a broad internal discussion about military uses of machine learning. The company has had researchers working on ethics and fairness in AI for some time but did not previously have formal rules for appropriate uses of AI. That’s starting to change. In response to scrutiny of its Pentagon project, Google is working on a set of principles that will guide use of its technology.

Some observers are skeptical that corporate efforts to imbue ethics into AI will make a difference. Last month, Axon, manufacturer of the Taser, announced an ethics board of external experts to review ideas such as using AI in policing products like body cameras. The board will meet quarterly, publish one or more reports a year, and includes a member designated as a point of contact for Axon employees concerned about specific work.

Soon after, more than 40 academic, civil rights, and community groups criticized the effort in an open letter. Their accusations included that Axon had omitted representatives from the heavily policed communities most likely to suffer the downsides of new police technology. Axon says it is now looking at having the board take input from a wider range of people. Board member Tracy Kosa, who works on security at Google and is an adjunct professor at Stanford, doesn’t see the episode as a setback. “I’m frankly thrilled about it,” she says, speaking independently of her role at Google. More people engaging critically with the ethical dimensions of AI is what will help companies get it right, Kosa says.

None have got it right so far, says Wendell Wallach, a scholar at Yale University's Interdisciplinary Center for Bioethics. “There aren’t any good examples yet,” he says when asked about the early corporate experiments with AI ethics boards and other processes. “There’s a lot of high-falutin talk but everything I’ve seen so far is naive in execution.”

Wallach says that purely internal processes, like Microsoft’s, are hard to trust, particularly when they are opaque to outsiders and don’t have an independent channel to a company’s board of directors. He urges companies to hire AI ethics officers and establish review boards but argues external governance such as national and international regulations, agreements, or standards will also be needed.

Horvitz came to a similar conclusion after his driving mishap. He wanted to report the details of the incident to help Tesla’s engineers. When recounting his call to Tesla, he describes the operator as more interested in establishing the limits of the automaker’s liability. Because Horvitz wasn’t using Autopilot as recommended—he was driving slower than 45 miles per hour—the incident was on him.

“I get that,” says Horvitz, who still loves his Tesla and its Autopilot feature. But he also thought his accident illustrated how companies pushing people to rely on AI might offer, or be required, to do more. “If I had a nasty rash or problems breathing after taking medication, there'd be a report to the FDA,” says Horvitz, an MD as well as computer science PhD. “I felt that that kind of thing should or could have been in place.” NHTSA requires automakers to report some defects in vehicles and parts; Horvitz imagines a formal reporting system fed directly with data from autonomous vehicles. A Tesla spokesperson said the company collects and analyzes safety and crash data from its vehicles, and that owners can use voice commands to provide additional feedback.

Liesl Yearsley, who sold a chatbot startup to IBM in 2014, says the embryonic corporate AI ethics movement needs to mature fast. She recalls being alarmed to see how her bots could delight customers such as banks and media companies by manipulating young people to take on more debt, or spend hours chatting to a piece of software.

The experience convinced Yearsley to make her new AI assistant startup, Akin, a public benefit corporation. AI will improve life for many people, she says. But companies seeking to profit by employing smart software will inevitably be pushed towards risky ground—by a force she says is only getting stronger. “It’s going to get worse as the technology gets better,” Yearsley says.

Tech Firms Move to Put Ethical Guard Rails Around AI

In the summer of 2017, Eric Horvitz turned on the Autopilot function of his Tesla sedan. Not having to worry about steering the car along a curving road in Redmond, Washington, allowed Horvitz to better focus on the call he was taking with a nonprofit he had cofounded. The topic of the call? The ethics and governance of AI.

That’s when Tesla’s AI let him down.

Both driver’s-side tires bumped up against a raised yellow curb in the middle of the road, shredding instantly and forcing Horvitz to quickly reclaim control of the vehicle. Because the car hadn’t centered itself properly, Horvitz soon found himself standing on the sidewalk, watching a truck tow his Tesla away.

But what was mostly on his mind was that companies deploying AI need to consider new ethical and safety challenges. And Horvitz isn’t alone. New think tanks, industry groups, research institutes, and philanthropic organizations have emerged, all concerned with setting ethical boundaries around AI.

The automotive sector is rife with ethical conundrums. Consider these (and realize they are only the tip of the iceberg):

  1. When programming how a car should respond to an impending crash, whose interests should most be considered: the driver of the car, the driver of the approaching car, or the insurance companies of either?
  2. Should the AI of an automobile be programmed to minimize loss of life, even if that means sacrificing its own driver to save multiple lives in another vehicle?
  3. How does the preservation of private or public property come into play when programming a car to avoid an accident?

As you can see, the ethical situations AI poses go on and on. And this doesn’t even begin to consider AI being used in online ad algorithms, the tagging of online photos, and the relatively new field of private drones.

Horvitz realized rather quickly after calling Tesla to report the accident that the company was much more concerned about liability issues than they were with solving any deep, ethical quandaries around their use of AI.

“I get that,” says Horvitz, whose love for his Tesla didn’t diminish. “If I had a nasty rash or problems breathing after taking medication, there’d be a report to the FDA…I felt that that kind of thing should or could have been in place.”

These are the sorts of questions we will all have to answer — what will the ethical parameters be in corporate use of Artificial Intelligence, and who will set the standards as we move into this new world?

Of course, these are only examples of AI questions in instances where it is being used legally. The bigger threats ahead in cybersecurity will come from the use of artificial intelligence algorithms for illegal purposes. In the same way that AI can be used for good, it can also be used for cybercrime: to analyze the behavior of certain people, anticipate their next move, and attack them when they least expect it.

Current and future generations view cybersecurity in completely new ways: they understand they were born into a complex world and tend to have better-developed instincts than previous generations.

They are less likely to be compromised by traditional methods. Yet even Millennials have shown themselves and their personal data to be overly exposed, as recent cases of social media companies abusing user data have demonstrated.

Twenty years ago, the question we wrestled with was whether complete interconnectedness in an everything-is-online world would be worth the resulting loss of privacy.

Since then, we have answered that question with a resounding yes, connecting anything and everything to the Internet, including ourselves through our phones and other smart devices. That has introduced a brand-new, overarching question:

In a world where everything is connected, how can we help the individual maintain privacy, security, and autonomy?

As recent social media practices have shown, individuals and organizations will sell data, even private data, when large profits hang in the balance. We have to return to a stronger concern for privacy, using the identification of people as a way to protect them against potential abuse or personal data loss. We must set up an ethical hedge that will keep AI from illegally exploiting those who engage with it.

This endeavor sets up the major questions we have to answer:

  1. How will an ethical security hedge be created to keep AI from exploiting us?
  2. Who should be responsible for this endeavor?
  3. Is this a government mandate?
  4. Or is the effort left to the private sector?

The answers are likely somewhere in between. Or better said, they’re found in a collaboration between public, private, and government sectors. Are we willing to begin that collaboration now, and continue it until we have a satisfactory solution?

These are the questions that must be answered if today’s cybersecurity is going to provide us any legitimate sense of safety and comfort, allowing us to thrive in an online and hyperconnected world and protect the basic human values and rights that we cannot live without.

The Ethics of Artificial Intelligence transcript from the #transhumancode bestseller
