Incidents involved as both Developer and Deployer
Incident 72 · 26 Reports
Facebook translates 'good morning' into 'attack them', leading to arrest
2017-10-17
Facebook's automatic language translation software incorrectly translated an Arabic post saying "Good morning" into Hebrew saying "hurt them," leading to the arrest of a Palestinian man in Beitar Illit, Israel.
Incident 471 · 8 Reports
Facebook Allegedly Failed to Police Hate Speech Content That Contributed to Ethnic Violence in Ethiopia
2019-06-22
Facebook allegedly failed to adequately remove hate speech from its platform, including through automated means; some of the content was extremely violent and dehumanizing, and it contributed to the violence faced by ethnic communities in Ethiopia.
Incident 284 · 6 Reports
Facebook’s Automated Removal of Content Featuring Nudity-Containing Artworks Denounced as Censorship
2018-05-01
Facebook’s removal of posts featuring renowned artworks by many historical artists, along with their promotional content, due to nudity — via both automated and human-moderated means — was condemned by critics such as museums and tourism boards as cultural censorship and an obstacle to artwork promotion.
Incident 169 · 5 Reports
Facebook Allegedly Failed to Police Anti-Rohingya Hate Speech Content That Contributed to Violence in Myanmar
2018-08-15
Facebook allegedly did not adequately remove anti-Rohingya hate speech, some of which was extremely violent and dehumanizing, on its platform, contributing to the violence faced by Rohingya communities in Myanmar.
Incidents Harmed By
Incident 399 · 3 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content
2022-11-15
Meta AI trained and hosted a scientific paper generator that sometimes produced bad science, and it prohibited queries on topics and groups that are likely to produce offensive or harmful content.
Incidents involved as Deployer
Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women's Bodies
2006-02-25
Automated content moderation tools intended to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of content that did not break platform policies.
Incident 758 · 1 Report
Teen's Overdose Reportedly Linked to Meta's AI Systems Failing to Block Ads for Illegal Drugs
2023-09-11
Meta's AI moderation systems reportedly failed to block ads for illegal drugs on Facebook and Instagram, allowing users to access dangerous substances. The system's failure is linked to the overdose death of Elijah Ott, a 15-year-old boy who sought drugs through Instagram.
Related Entities
Incidents involved as both Developer and Deployer
- Incident 343 · 2 Reports
Facebook, Instagram, and Twitter Failed to Proactively Remove Targeted Racist Remarks via Automated Systems
- Incident 142 · 1 Report
Facebook’s Advertisement Moderation System Routinely Misidentified Adaptive Fashion Products as Medical Equipment and Blocked Their Sellers
Incidents involved as Deployer
Meta
Incidents involved as both Developer and Deployer
- Incident 471 · 8 Reports
Facebook Allegedly Failed to Police Hate Speech Content That Contributed to Ethnic Violence in Ethiopia
- Incident 169 · 5 Reports
Facebook Allegedly Failed to Police Anti-Rohingya Hate Speech Content That Contributed to Violence in Myanmar
Incidents Harmed By
- Incident 399 · 3 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content