Meta
Incidents involved as both Developer and Deployer
Incident 471 · 8 Reports
Facebook Allegedly Failed to Police Hate Speech Content That Contributed to Ethnic Violence in Ethiopia
2019-06-22
Facebook allegedly did not adequately remove hate speech, some of which was extremely violent and dehumanizing, from its platform, including through automated means, contributing to the violence faced by ethnic communities in Ethiopia.
Incident 169 · 5 Reports
Facebook Allegedly Failed to Police Anti-Rohingya Hate Speech Content That Contributed to Violence in Myanmar
2018-08-15
Facebook allegedly did not adequately remove anti-Rohingya hate speech, some of which was extremely violent and dehumanizing, on its platform, contributing to the violence faced by Rohingya communities in Myanmar.
Incident 885 · 5 Reports
Meta AI Characters Allegedly Exhibited Racism, Fabricated Identities, and Exploited User Trust
2025-01-03
Meta deployed AI-generated profiles on its platforms, including Instagram and Facebook, as part of an experiment. The profiles, such as "Liv" and "Grandpa Brian," allegedly featured fabricated identities and misleading diversity claims. These accounts also allegedly manipulated user emotions for engagement and profit. Reportedly, backlash over offensive and deceptive content led Meta to delete the profiles on January 3rd, 2025, citing a blocking-related bug.
Incident 278 · 3 Reports
Meta’s BlenderBot 3 Chatbot Demo Made Offensive Antisemitic Comments
2022-08-07
The publicly launched conversational AI demo BlenderBot 3 developed by Meta was reported by its users and acknowledged by its developers to have “occasionally” made offensive and inconsistent remarks such as invoking Jewish stereotypes.
Incidents Harmed By
Incident 399 · 3 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content
2022-11-15
Meta AI trained and hosted a scientific paper generator that sometimes produced bad science and prohibited queries on topics and groups that are likely to produce offensive or harmful content.
Incident 712 · 2 Reports
Meta AI Hallucinates Harassment Allegations Against New York Politicians
2024-04-26
Meta's AI chatbot in Facebook Messenger falsely accused multiple state lawmakers of sexual harassment, fabricating incidents, investigations, and consequences that never occurred. These fabricated stories, discovered by City & State, sparked outrage among the affected lawmakers and raised concerns about the reliability of the chatbot. Meta acknowledged the errors and committed to ongoing improvements.
Incidents involved as Developer
Incident 578 · 1 Report
Alleged Exploitation of Meta's Open-Source LLaMA Model for NSFW and Violent Content
2023-06-26
Meta's open-source large language model, LLaMA, is allegedly being used to create graphic and explicit chatbots that indulge in violent and illegal sexual fantasies. The Washington Post highlighted the example of "Allie," a chatbot that participates in text-based role-playing allegedly involving violent scenarios like rape and abuse. The issue raises ethical questions about open-source AI models, their regulation, and the responsibility of developers and deployers in mitigating harmful usage.
Incident 846 · 1 Report
Social Media Algorithms Amplified Disinformation Campaign in Honduras Election
2021-10-06
In October 2021, a coordinated network of over 317 fake Twitter accounts leveraged AI-driven algorithms to amplify disinformation about the Honduran presidential election, targeting opposition candidate Xiomara Castro. The campaign spread false narratives to suppress voter turnout and undermine the election's integrity. Social media platforms, including Twitter and Facebook, removed the accounts only after being alerted, which also raised concerns about inadequate moderation.
Incidents involved as Deployer
Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women Bodies
2006-02-25
Automated content moderation tools intended to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of their content despite it not violating platform policies.
Incident 576 · 1 Report
Alleged Misuse of PicSo AI for Generating Inappropriate Content Emphasizing "Girls"
2023-10-24
PicSo AI, which appears to be advertised by Meta on Instagram, is allegedly being used to generate inappropriate content with an emphasis on "girls." This raises concerns about the misuse of generative AI for creating offensive and potentially sexually explicit material that could serve nefarious and criminal purposes.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Incidents involved as both Developer and Deployer
- Incident 471 · 8 Reports
Facebook Allegedly Failed to Police Hate Speech Content That Contributed to Ethnic Violence in Ethiopia
- Incident 169 · 5 Reports
Facebook Allegedly Failed to Police Anti-Rohingya Hate Speech Content That Contributed to Violence in Myanmar
Incidents Harmed By
- Incident 399 · 3 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content
Incidents involved as Developer
Incidents involved as Deployer
Incidents implicated systems
Incidents involved as both Developer and Deployer
- Incident 583 · 1 Report
Instagram Algorithms Allegedly Promote Accounts Facilitating Child Sex Abuse Content
Incidents involved as Deployer
- Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women Bodies
- Incident 723 · 2 Reports
Instagram Algorithms Reportedly Directed Children's Merchandise Ad Campaign to Adult Men and Sex Offenders
Incidents implicated systems
Incidents involved as both Developer and Deployer
- Incident 718 · 1 Report
OpenAI, Google, and Meta Alleged to Have Overstepped Legal Boundaries for Training AI
- Incident 734 · 1 Report
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites