Incident Status
CSETv1 Taxonomy Classifications
Taxonomy Details
Incident Number
115
Special Interest Intangible Harm
no
Notes (AI special interest intangible harm)
There is no evidence or indication that the system led to any special interest intangible harms through its use or deployment.
Date of Incident Year
2020
Date of Incident Month
07
Date of Incident Day
CSETv1_Annotator-1 Taxonomy Classifications
Taxonomy Details
Incident Number
115
Special Interest Intangible Harm
yes
Notes (AI special interest intangible harm)
It is unclear whether any harmed entities can be identified in this incident. Although the AI did exhibit bias against women, the harm was limited to the ineffectiveness of the tool.
Date of Incident Year
2020
CSETv1_Annotator-3 Taxonomy Classifications
Taxonomy Details
Incident Number
115
Incident Reports
Report Timeline
Some tech companies make a splash when they launch, others seem to bellyflop.
Genderify, a new service that promised to identify someone’s gender by analyzing their name, email address, or username with the help of AI, looks firmly to be in th…
Just hours after making waves and triggering a backlash on social media, Genderify — an AI-powered tool designed to identify a person’s gender by analyzing their name, username or email address — has been completely shut down.
Launched last…
The creators of a controversial tool that attempted to use AI to predict people's gender from their internet handle or email address have shut down their service after a huge backlash.
The Genderify app launched this month, and invited peop…
Variants
Similar Incidents