CSETv1 Taxonomy Classifications
Taxonomy Details
Incident Number
14
AI Tangible Harm Level Notes
Annotator 2:
No tangible harm
Notes (special interest intangible harm)
Disproportionately gave phrases related to protected characteristics negative scores.
Special Interest Intangible Harm
yes
Notes (AI special interest intangible harm)
Disproportionately gave phrases related to protected characteristics negative scores.
Date of Incident Year
2017
CSETv0 Taxonomy Classifications
Taxonomy Details
Problem Nature
Robustness
Physical System
Software only
Level of Autonomy
High
Nature of End User
Amateur
Public Sector Deployment
No
Data Inputs
input from open source internet
Incident Reports
Report Timeline
Google messed up, and now says it's sorry.
Wednesday, Motherboard published a story written by Andrew Thompson about biases against ethnic and religious minorities encoded in one of Google's machine learning application …
GOOGLE'S artificial intelligence (AI) engine has been showing a negative bias towards words including "gay" and "jew".
The sentiment analysis process is the latest in a growing number of examples of "garbage in - garbage out" in the world o…
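The bias the reports describe can be reproduced in miniature: score the same sentence template with different identity terms substituted and compare the results, since an unbiased sentiment model should treat neutral identity terms identically. The sketch below is a hypothetical illustration, not Google's actual system; the toy lexicon scorer and the `probe_identity_bias` helper are assumptions for demonstration only.

```python
# Toy stand-in for a sentiment model: averages lexicon weights over words.
# This lexicon is an illustrative assumption, not real model data.
TOY_LEXICON = {"great": 1.0, "wonderful": 0.8, "terrible": -1.0, "awful": -0.8}

def toy_sentiment(text: str) -> float:
    """Return an average sentiment score; words not in the lexicon score 0."""
    words = text.lower().split()
    return sum(TOY_LEXICON.get(w, 0.0) for w in words) / len(words)

def probe_identity_bias(score_fn, template: str, terms: list[str]) -> dict:
    """Score one template with each identity term substituted in.

    Because the sentences differ only in a neutral identity term, large
    score gaps between terms indicate encoded bias in the scorer.
    """
    return {term: score_fn(template.format(term)) for term in terms}

scores = probe_identity_bias(
    toy_sentiment, "i am {}", ["gay", "straight", "jewish", "christian"]
)
# The toy lexicon contains no identity terms, so every score is identical,
# which is the behavior an unbiased scorer should exhibit on this probe.
```

Swapping `toy_sentiment` for a real API call (for example, a cloud sentiment endpoint) turns this into the kind of differential test the journalists ran against Google's service.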
Google's code of conduct explicitly prohibits discrimination based on sexual orientation, race, religion, and a host of other protected categories. However, it seems that no one bothered to pass that information along to the company's artif…