Incident 72: Facebook translates 'good morning' into 'attack them', leading to arrest

Description: Facebook's automatic language translation software incorrectly translated an Arabic post saying "Good morning" into Hebrew saying "hurt them," leading to the arrest of a Palestinian man in Beitar Illit, Israel.


Incident Stats

Incident ID
72
Report Count
26
Incident Date
2017-10-17
Editors
Sean McGregor, Khoa Lam

CSET Taxonomy Classifications

Taxonomy Details

Full Description

Facebook's automatic language translation software incorrectly translated an Arabic post saying "Good morning" into Hebrew as "hurt them," leading to the arrest of a Palestinian man in Beitar Illit, Israel. No Arabic-speaking officer read the post before the arrest was made. The man had posted the words alongside a picture of himself leaning against a bulldozer; because bulldozers are sometimes used in terrorist attacks, police concluded he was inciting violence. Facebook's automatic language translation software can translate 40 languages in 1,800 directions, and displays the translation in place of the original text when it is confident the translation is correct.
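The confidence-gated display behavior described above can be sketched as follows. This is a minimal illustration only: the threshold value, the model interface, and the example scores are assumptions for demonstration, not details of Facebook's actual (non-public) system.

```python
# Hypothetical sketch of confidence-gated translation display.
# All names, scores, and the threshold are illustrative assumptions.

THRESHOLD = 0.9  # assumed confidence cutoff for showing the translation


def translate_with_confidence(text: str) -> tuple[str, float]:
    """Stand-in for a machine-translation model that returns a
    translation plus a confidence score in [0, 1]."""
    # A real system would run a neural MT model here; this lookup
    # table exists only so the sketch is runnable.
    fake_table = {"good morning": ("boker tov", 0.95)}
    return fake_table.get(text.lower(), (text, 0.0))


def display_text(original: str) -> str:
    """Show the translation in place of the original only when the
    model is confident -- the behavior described in the incident."""
    translation, confidence = translate_with_confidence(original)
    return translation if confidence >= THRESHOLD else original
```

The incident illustrates the risk of this design: when the model is confidently wrong, the reader never sees the original text that would let them catch the error.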

Short Description

Facebook's automatic language translation software incorrectly translated an Arabic post saying "Good morning" into Hebrew saying "hurt them," leading to the arrest of a Palestinian man in Beitar Illit, Israel.

Severity

Moderate

Harm Type

Psychological harm, Harm to civil liberties

AI System Description

Facebook's automatic language translation software that can translate 40 languages in 1,800 directions

System Developer

Facebook

Sector of Deployment

Information and communication

Relevant AI functions

Perception, Cognition, Action

AI Techniques

natural language processing

AI Applications

language translation

Location

Beitar Illit, Israel

Named Entities

Israeli Police, Facebook, Beitar Illit, Israel

Technology Purveyor

Facebook

Beginning Date

2017-10-15

Ending Date

2017-10-15

Near Miss

Harm caused

Intent

Accident

Lives Lost

No

Data Inputs

User posts/input

GMF Taxonomy Classifications

Taxonomy Details

Known AI Goal

Translation

Known AI Technology

Convolutional Neural Network, Recurrent Neural Network, Distributional Learning

Potential AI Technology

Intermediate modeling, Classification, Multimodal Learning, Image Classification

Known AI Technical Failure

Dataset Imbalance, Distributional Bias

Potential AI Technical Failure

Generalization Failure

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
