Description: An investigation revealed that Instagram's recommendation algorithms were promoting accounts that facilitate and sell child sexual abuse material (CSAM). The study, conducted by The Wall Street Journal and researchers at Stanford University and the University of Massachusetts Amherst, found that Instagram's algorithms not only allowed such accounts to be discovered through keyword searches but also actively recommended them to users within the network. The issue is especially concerning given Instagram's popularity among teenagers.
Incident Status
CSETv1 Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
583
Notes (special interest intangible harm)
Input any notes that may help explain your answers.
4.2 - Child sexual abuse content is illegal
Special Interest Intangible Harm
An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It is also not assessing whether an intangible harm occurred. It is only asking whether a special interest intangible harm occurred.
yes
Date of Incident Year
The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank.
Enter in the format of YYYY
2023
Date of Incident Month
The month in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the month, estimate. Otherwise, leave blank.
Enter in the format of MM
06
Date of Incident Day
The day on which the incident occurred. If a precise date is unavailable, leave blank.
Enter in the format of DD
CSETv1_Annotator-1 Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
583
Notes (special interest intangible harm)
Input any notes that may help explain your answers.
4.2 - Child sexual abuse content is illegal
Special Interest Intangible Harm
An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It is also not assessing whether an intangible harm occurred. It is only asking whether a special interest intangible harm occurred.
yes
Date of Incident Year
The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank.
Enter in the format of YYYY
2023
Date of Incident Month
The month in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the month, estimate. Otherwise, leave blank.
Enter in the format of MM
06
Date of Incident Day
The day on which the incident occurred. If a precise date is unavailable, leave blank.
Enter in the format of DD
Incident Reports
Report Timeline
cnbc.com · 2023
Instagram's recommendation algorithms have been connecting and promoting accounts that facilitate and sell child sexual abuse content, according to an investigation published Wednesday.
Meta's photo-sharing service stands out from other soc…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, variations of an incident are listed under the first similar incident submitted to the database. Unlike other submission types in the Incident Database, variants are not required to have reporting as evidence external to the Incident Database. Learn more from this research paper.