Incident 46: Nest Smoke Alarm Erroneously Stops Alarming

Description: In testing, Google Nest engineers demonstrated that the Nest Wave feature of their Nest Protect: Smoke + CO Alarm could inadvertently silence genuine alarms.

Alleged: Nest Labs developed and deployed an AI system, which harmed Fire Victims.

Incident Stats

Incident ID
46
Report Count
6
Incident Date
2014-01-21
Editors
Sean McGregor

CSET Taxonomy Classifications

Taxonomy Details

Full Description

On May 21, 2014, Google Nest, producer of smart home products, issued a recall for its Nest Protect: Smoke + CO Alarm due to concerns that the Nest Wave feature could inadvertently silence alarms. The Nest Wave feature is designed to allow users to silence spurious alarms, for example while cooking, by waving a hand near the unit that triggered the alarm. In lab conditions, however, Nest engineers demonstrated that the Wave feature could be activated erroneously, raising the potential that the device could silence genuine alarms.
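The failure mode can be illustrated with a minimal, hypothetical sketch (this is not Nest's actual implementation; the class, method names, and silencing policy are invented for illustration): if the silencing logic accepts any detected wave, a false positive from the motion sensor silences a genuine alarm just as readily as a spurious one.

```python
from dataclasses import dataclass

@dataclass
class SmokeAlarm:
    """Hypothetical model of a gesture-silenceable smoke alarm."""
    alarming: bool = False

    def trigger(self) -> None:
        # Smoke or CO detected: start alarming.
        self.alarming = True

    def on_wave_detected(self) -> None:
        # Naive policy: any detected wave silences the current alarm.
        # If the gesture detector fires erroneously (e.g., misreads
        # ambient motion as a deliberate wave), a genuine alarm is
        # silenced too -- the hazard behind the recall.
        self.alarming = False


alarm = SmokeAlarm()
alarm.trigger()          # genuine alarm sounds
alarm.on_wave_detected() # erroneous wave detection silences it
print(alarm.alarming)    # prints False
```

A safer policy would require stronger evidence of intent (for example, sustained proximity plus a distinct gesture) before silencing, or would refuse to silence while smoke levels remain rising; the sketch above shows only the permissive baseline.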

Short Description

In testing, Google Nest engineers demonstrated that the Nest Wave feature of their Nest Protect: Smoke + CO Alarm could inadvertently silence genuine alarms.

Severity

Negligible

AI System Description

The Nest Wave gesture detection function of the Nest Protect: Smoke + CO Alarm. The feature is designed to allow the user to silence spurious alarms with a gesture near the device.

System Developer

Nest Labs

Sector of Deployment

Activities of households as employers

Relevant AI functions

Unclear

AI Techniques

Unclear

AI Applications

gesture detection

Location

United States

Named Entities

Nest Labs, Google Nest, Google

Technology Purveyor

Google Nest

Beginning Date

2013-11-15

Ending Date

2014-05-21

Near Miss

Near miss

Intent

Accident

Lives Lost

No

Data Inputs

Motion sensor data

GMF Taxonomy Classifications

Taxonomy Details

Known AI Goal

Substance Detection, Smart Devices

Known AI Technology

Gesture Recognition

Potential AI Technology

Regression

Known AI Technical Failure

Unsafe Exposure or Access

Potential AI Technical Failure

Hardware Failure, Limited User Access, Underfitting, Generalization Failure

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
