Incident 34: Amazon Alexa Responding to Environmental Inputs

Description: There are multiple reports of Amazon Alexa products (Echo, Echo Dot) reacting to and acting upon unintended stimuli, usually television commercials or news reporters' voices.

Alleged: Amazon developed and deployed an AI system, which harmed Alexa Device Owners.

Incident Stats

Incident ID
34
Report Count
35
Incident Date
2015-12-05
Editors
Sean McGregor

CSET Taxonomy Classifications

Taxonomy Details

Full Description

There are multiple reports of Amazon Alexa products (Echo, Echo Dot) reacting to and acting upon unintended stimuli, usually television commercials or news reporters' voices. In one case, a 6-year-old girl asked her Echo Dot to "play doll house with me and get me a doll house." Alexa ordered a $150–170 dollhouse and four pounds of sugar cookies. When news outlets began covering this event, reports surfaced of the news anchor's voice triggering more Amazon Alexa devices to order dollhouses. In another instance, a Super Bowl advertisement caused Alexa devices to begin playing whale sounds, turn hall lights on and off, and order cat food for home delivery.

Short Description

There are multiple reports of Amazon Alexa products (Echo, Echo Dot) reacting to and acting upon unintended stimuli, usually television commercials or news reporters' voices.

Severity

Negligible

Harm Type

Financial harm

AI System Description

Amazon Alexa, a voice assistant running on smart speakers (Echo, Echo Dot) that can recognize speech and be used to buy products from the Amazon Marketplace

System Developer

Amazon

Sector of Deployment

Information and communication

Relevant AI functions

Perception, Cognition, Action

AI Techniques

Amazon Alexa, natural language processing, virtual assistant, speech recognition

AI Applications

voice recognition, natural language processing

Named Entities

Amazon, San Diego TV

Technology Purveyor

Amazon

Beginning Date

2018-01-01

Ending Date

2018-01-01

Near Miss

Harm caused

Intent

Accident

Lives Lost

No

Infrastructure Sectors

Information technology

Data Inputs

environment audio, Alexa software

GMF Taxonomy Classifications

Taxonomy Details

Known AI Goal

AI Voice Assistant

Known AI Technology

Automatic Speech Recognition, Language Modeling, Acoustic Fingerprint
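Acoustic fingerprinting is relevant here because Amazon reportedly used fingerprints of its own Super Bowl ad audio so that devices could recognize and ignore the broadcast wake word. The sketch below is a toy illustration of that idea, not Amazon's actual method: it hashes coarse spectral energy bands of an audio clip and suppresses the wake event when the fingerprint matches a known broadcast clip. All names and the hashing scheme are hypothetical.

```python
import hashlib
import math

def spectral_fingerprint(samples, n_bands=16):
    """Toy acoustic fingerprint: hash of quantized spectral energy bands.

    A production fingerprinting system is far more robust (overlapping
    frames, perceptual features, noise tolerance); this only sketches
    the match-and-suppress idea.
    """
    n = len(samples)
    band_energy = [0.0] * n_bands
    # Naive DFT magnitude per coarse band (fine for a short toy clip).
    for k in range(n // 2):
        real = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        imag = sum(-samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        band = k * n_bands // (n // 2)
        band_energy[band] += math.hypot(real, imag)
    # Quantize each band to one bit relative to the mean, then hash.
    mean = sum(band_energy) / n_bands
    bits = "".join("1" if e > mean else "0" for e in band_energy)
    return hashlib.sha256(bits.encode()).hexdigest()

# Fingerprints of known broadcast audio (e.g. an ad's wake-word segment).
KNOWN_BROADCAST_FINGERPRINTS = set()

def should_suppress_wake(samples):
    """True if the detected wake-word audio matches a known broadcast clip."""
    return spectral_fingerprint(samples) in KNOWN_BROADCAST_FINGERPRINTS
```

In this sketch, a device that hears the wake word would first fingerprint the triggering audio and skip activation on a match, which is the general shape of the mitigation described in coverage of the Super Bowl ad incident.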

Known AI Technical Failure

Unsafe Exposure or Access, Misuse

Potential AI Technical Failure

Unauthorized Data, Inadequate Anonymization, Context Misidentification, Lack of Capability Control, Underspecification

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
