Description: Facebook Messenger AI stickers, a feature by Meta, allow users to generate personalized stickers via AI for use in conversations. While the feature has been praised for its creativity, it has also stirred controversy over its alleged production of inappropriate or offensive content. This has raised questions about the effectiveness of Meta's content moderation measures and the ethical responsibilities associated with AI-driven content generation.
Alleged: An AI system developed and deployed by Meta, which harmed Facebook Messenger users.
Incident Status
CSETv1_Annotator-2 Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
570
Special Interest Intangible Harm
An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It is also not assessing whether an intangible harm occurred; it is only asking whether a special interest intangible harm occurred.
maybe
Notes (AI special interest intangible harm)
If for 5.5 you select unclear or leave it blank, please provide a brief description of why.
You can also add notes if you want to provide justification for a level.
While the stickers are inappropriate and unethical, it is difficult to determine whether they meet the definition of detrimental content based on the examples shown in the report.
Date of Incident Year
The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank.
Enter in the format of YYYY
2023
Incident Reports
Report Timeline
theverge.com · 2023
Some early user tests for Meta’s new AI-generated sticker tool have resulted in some dubious (and rather hilarious) creations. After gaining access to the new AI-generated sticker tool on Facebook Messenger, X user @Pioldes was able to crea…
Variants
A "variant" is an incident that shares the same causative factors as an existing AI incident, produces similar harms, and involves the same intelligent system. Rather than indexing variants as entirely separate incidents, we list them as variations under the first similar incident submitted to the database. Unlike other submission types in the incident database, variants are not required to have reporting as evidence external to the incident database. See this research paper for more details.
Similar Incidents