AI Incident Database

Report 2129

Related Incidents

Incident 362 · 1 Report
Facebook's Automated Moderation Flagged Gardening Group's Language Use by Mistake

Hoe no! Facebook snafu spells trouble for gardening group
apnews.com · 2021

Moderating a Facebook gardening group in western New York is not without challenges. There are complaints of wooly bugs, inclement weather and the novice members who insist on using dish detergent on their plants.

And then there’s the word “hoe.”

Facebook’s algorithms sometimes flag this particular word as “violating community standards,” apparently referring to a different word, one without an “e” at the end that is nonetheless often misspelled as the garden tool.

Normally, Facebook’s automated systems will flag posts with offending material and delete them. But if a group’s members — or worse, administrators — violate the rules too many times, the entire group can get shut down.

Elizabeth Licata, one of the group’s moderators, was worried about this. After all, the group, WNY Gardeners, has more than 7,500 members who use it to get gardening tips and advice. It’s been especially popular during the pandemic when many homebound people took up gardening for the first time.

A hoe by any other name could be a rake, a harrow or a rototill. But Licata was not about to ban the word from the group, or try to delete each instance. When a group member commented “Push pull hoe!” on a post asking for “your most loved & indispensable weeding tool,” Facebook sent a notification that said “We reviewed this comment and found it goes against our standards for harassment and bullying.”

Facebook uses both human moderators and artificial intelligence to root out material that goes against its rules. In this case, a human likely would have known that a hoe in a gardening group is likely not an instance of harassment or bullying. But AI is not always good at context and the nuances of language.

It also misses a lot — users often complain that they report violent or abusive language and Facebook rules that it’s not in violation of its community standards. Misinformation on vaccines and elections has been a long-running and well-documented problem for the social media company. On the flip side are groups like Licata’s that get caught up in overly zealous algorithms.

“And so I contacted Facebook, which was useless. How do you do that?” she said. “You know, I said this is a gardening group, a hoe is a gardening tool.”

Licata said she never heard from a person at Facebook, and found that navigating the social network’s system of surveys and other ways to try to set the record straight was futile.

Contacted by The Associated Press, a Facebook representative said in an email this week that the company found the group and corrected the mistaken enforcements. It also put an extra check in place, meaning that someone — an actual person — will check offending posts before the group is considered for deletion. The company would not say if other gardening groups had similar problems. (In January, Facebook mistakenly flagged the U.K. landmark of Plymouth Hoe as offensive, then apologized, according to The Guardian.)

“We have plans to build out better customer support for our products and to provide the public with even more information about our policies and how we enforce them,” Facebook said in a statement in response to Licata’s complaints.

Then, something else came up. Licata received a notification that Facebook automatically disabled commenting on a post because of “possible violence, incitement, or hate in multiple comments.”

The offending comments included “Kill them all. Drown them in soapy water,” and “Japanese beetles are jerks.”

Read Source
