Incident Status
CSETv1 Taxonomy Classifications
Taxonomy Details
Incident Number
103
Special Interest Intangible Harm
yes
Notes (AI special interest intangible harm)
The cropping neural network would crop the preview image in a way that focused more on individuals who were lighter-complexioned, younger, female, or without disabilities.
Date of Incident Year
2020
CSETv1_Annotator-2 Taxonomy Classifications
Taxonomy Details
Incident Number
103
AI Tangible Harm Level Notes
No tangible harm occurred.
Special Interest Intangible Harm
yes
Notes (AI special interest intangible harm)
The cropping neural network would crop the preview image in a way that focused on individuals with lighter complexions.
Date of Incident Year
2020
Date of Incident Month
09
GMF Taxonomy Classifications
Taxonomy Details
Known AI Goal Snippets
(Snippet Text: Twitter's algorithm for automatically cropping images attached to tweets often doesn't focus on the important content in them., Related Classifications: Image Cropping)
CSETv1_Annotator-3 Taxonomy Classifications
Taxonomy Details
Incident Number
103
AI Tangible Harm Level Notes
Intangible harm
Special Interest Intangible Harm
yes
Date of Incident Year
2021
Date of Incident Month
5
Estimated Date
Yes
Incident Reports
Report Timeline
Twitter's algorithm for automatically cropping images attached to tweets often doesn't focus on the important content in them. A bother, for sure, but it seems like a minor one on the surface. However, over the weekend, researchers found th…
A study of 10,000 images found bias in what the system chooses to highlight. Twitter has stopped using it on mobile, and will consider ditching it on the web.
LAST FALL, CANADIAN student Colin Madland noticed that Twitter’s automatic croppi…
In October 2020, we heard feedback from people on Twitter that our image cropping algorithm didn’t serve all people equitably. As part of our commitment to address this issue, we also shared that we'd analyze our model again for bias. Over …
Twitter has laid out plans for a bug bounty competition with a difference. This time around, instead of paying researchers who uncover security issues, Twitter will reward those who find as-yet undiscovered examples of bias in its image-cro…
Twitter's first bounty program for AI bias has wrapped up, and there are already some glaring issues the company wants to address. CNET reports that grad student Bogdan Kulynych has discovered that photo beauty filters skew the Twitter sali…
Variants
Similar Incidents