Organizations

Meta

Incidents involved as both Developer and Deployer

Incident 471 · 8 Reports
Facebook Allegedly Failed to Police Hate Speech Content That Contributed to Ethnic Violence in Ethiopia

2019-06-22

Facebook allegedly did not adequately remove hate speech, some of which was extremely violent and dehumanizing, from its platform, including through automated means, contributing to the violence faced by ethnic communities in Ethiopia.


Incident 169 · 5 Reports
Facebook Allegedly Failed to Police Anti-Rohingya Hate Speech Content That Contributed to Violence in Myanmar

2018-08-15

Facebook allegedly did not adequately remove anti-Rohingya hate speech, some of which was extremely violent and dehumanizing, from its platform, contributing to the violence faced by Rohingya communities in Myanmar.


Incident 399 · 4 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content

2022-11-15

Meta AI trained and hosted a scientific paper generator that sometimes produced inaccurate science, and blocked queries on topics and groups likely to produce offensive or harmful content.


Incident 278 · 3 Reports
Meta’s BlenderBot 3 Chatbot Demo Made Offensive Antisemitic Comments

2022-08-07

BlenderBot 3, a publicly launched conversational AI demo developed by Meta, was reported by users, and acknowledged by its developers, to have "occasionally" made offensive and inconsistent remarks, such as invoking Jewish stereotypes.


Incidents Harmed By

Incident 399 · 4 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content

2022-11-15

Meta AI trained and hosted a scientific paper generator that sometimes produced inaccurate science, and blocked queries on topics and groups likely to produce offensive or harmful content.


Incidents involved as Developer

Incident 578 · 1 Report
Alleged Exploitation of Meta's Open-Source LLaMA Model for NSFW and Violent Content

2023-06-26

Meta's open-source large language model, LLaMA, is allegedly being used to create graphic and explicit chatbots that indulge in violent and illegal sexual fantasies. The Washington Post highlighted the example of "Allie," a chatbot that participates in text-based role-play allegedly involving violent scenarios such as rape and abuse. The issue raises ethical questions about open-source AI models, their regulation, and the responsibility of developers and deployers in mitigating harmful use.


Incidents involved as Deployer

Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women Bodies

2006-02-25

Automated content moderation tools for detecting sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of images that did not break platform policies.


Incident 576 · 1 Report
Alleged Misuse of PicSo AI for Generating Inappropriate Content Emphasizing "Girls"

2023-10-24

PicSo AI, which is reportedly advertised by Meta on Instagram, is allegedly being used to generate inappropriate content with an emphasis on "girls." This raises concerns about the misuse of generative AI technologies to create offensive and potentially sexually explicit material that could be put to nefarious and criminal purposes.


Related Organizations