Entities

Facebook

Incidents involved as Developer and Deployer

Incident 72 · 26 Reports
Facebook translates 'good morning' into 'attack them', leading to arrest

2017-10-17

Facebook's automatic language translation software incorrectly translated an Arabic post saying "Good morning" into Hebrew saying "hurt them," leading to the arrest of a Palestinian man in Beitar Illit, Israel.


Incident 471 · 8 Reports
Facebook Allegedly Failed to Police Hate Speech Content That Contributed to Ethnic Violence in Ethiopia

2019-06-22

Facebook allegedly failed to adequately remove hate speech from its platform, including through automated means; some of this content was extremely violent and dehumanizing, contributing to the violence faced by ethnic communities in Ethiopia.


Incident 284 · 6 Reports
Facebook’s Automated Removal of Content Featuring Nudity-Containing Artworks Denounced as Censorship

2018-05-01

Facebook’s removal, via both automated and human-moderated means, of posts featuring renowned artworks by historical artists and their promotional content due to nudity was condemned by critics, such as museums and tourism boards, as cultural censorship and an obstacle to artwork promotion.


Incident 169 · 5 Reports
Facebook Allegedly Failed to Police Anti-Rohingya Hate Speech Content That Contributed to Violence in Myanmar

2018-08-15

Facebook allegedly failed to adequately remove anti-Rohingya hate speech from its platform; some of this content was extremely violent and dehumanizing, contributing to the violence faced by Rohingya communities in Myanmar.


Affected by Incidents

Incident 399 · 4 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content

2022-11-15

Meta AI trained and hosted a scientific paper generator that sometimes produced bad science, and it prohibited queries on topics and groups likely to produce offensive or harmful content.


Incidents involved as Deployer

Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women's Bodies

2006-02-25

Automated content moderation tools for detecting sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, resulting in suppressed reach despite the content not violating platform policies.


Related Entities