Incident 17: Inappropriate Gmail Smart Reply Suggestions

Description: Google's Gmail Smart Reply tool was over-recommending the response "I love you" in situations where it was deemed inappropriate.

Alleged: Google developed and deployed an AI system, which harmed Gmail Users.

Incident Stats

Incident ID
17
Report Count
22
Incident Date
2015-11-03
Editors
Sean McGregor

CSET Taxonomy Classifications

Taxonomy Details

Full Description

Google's Gmail Smart Reply tool suggests short replies, each about three to six words long, to emails received on its platform. According to Google's management director Alex Gawley, the machine learning-based system can produce about 20,000 discrete responses by combining suggestions from two machine-learning programs: the first reads the email and comprehends its content, and the second suggests responses to that content, both built on a long short-term memory (LSTM) neural network. In 2015, Gmail users provided feedback that the reply "I love you" was being suggested too often, leading Google to change the algorithm.
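To make the two-program structure concrete, below is a minimal, hypothetical sketch in PyTorch of an LSTM encoder that reads an email and a second component that ranks a fixed pool of candidate replies. It is not Google's production system; the class names, dimensions, and scoring scheme are illustrative assumptions only.

```python
# Minimal, hypothetical sketch of an LSTM-based "smart reply" scorer.
# Not Google's production system: it only illustrates one network encoding
# the incoming email while a second component ranks canned candidate replies.

import torch
import torch.nn as nn

class EmailEncoder(nn.Module):
    """Reads the email text (as token ids) and produces a fixed-size vector."""
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        return hidden.squeeze(0)                  # (batch, hidden_dim)

class ReplyRanker(nn.Module):
    """Scores every candidate reply against the encoded email."""
    def __init__(self, num_candidates: int, hidden_dim: int = 128):
        super().__init__()
        # One learned vector per canned reply (e.g. "Sounds good", "I love you", ...).
        self.reply_vectors = nn.Embedding(num_candidates, hidden_dim)

    def forward(self, email_vector: torch.Tensor) -> torch.Tensor:
        # Dot-product score between the email encoding and each reply vector.
        return email_vector @ self.reply_vectors.weight.T   # (batch, num_candidates)

# Toy usage with made-up sizes: 1,000-word vocabulary, 20,000 candidate replies.
encoder = EmailEncoder(vocab_size=1000)
ranker = ReplyRanker(num_candidates=20000)

fake_email = torch.randint(0, 1000, (1, 12))     # one email of 12 token ids
scores = ranker(encoder(fake_email))
top3 = scores.topk(3).indices                    # indices of the 3 suggested replies
print(top3)
```

In a setup like this, an over-frequent suggestion such as "I love you" would surface whenever its reply vector scores highly across too many email encodings, which is the kind of behavior a post-hoc adjustment to the ranking can dampen.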

Short Description

Google's Gmail Smart Reply tool was over-recommending the response "I love you" in situations where it was deemed inappropriate.

Severity

Negligible

AI System Description

Google's Gmail Smart Reply tool is a machine-learning system able to produce approximately 20,000 discrete responses by combining suggestions from two machine-learning programs: the first reads and comprehends the email's content, and the second recommends responses.

System Developer

Google

Sector of Deployment

Information and communication

Relevant AI functions

Perception, Cognition

AI Techniques

machine-learning, natural language processing, long short-term memory neural network

AI Applications

content creation, smart reply, suggested reply, recommendation engine

Location

Global

Named Entities

Google, Gmail, Gmail Smart Reply

Technology Purveyor

Google

Beginning Date

2015-01-01T00:00:00.000Z

Ending Date

2018-01-01T00:00:00.000Z

Near Miss

Unclear/unknown

Intent

Accident

Lives Lost

No

Data Inputs

email text

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
