Citation record for Incident 7

Description: Wikipedia bots meant to remove vandalism clash with each other and form feedback loops, repetitively undoing each other's edits.
Alleged: An AI system developed and deployed by Wikipedia harmed the Wikimedia Foundation, Wikipedia Editors, and Wikipedia Users.

Incident Status

Incident ID
7
Report Count
6
Incident Date
2017-02-24
Editors
Sean McGregor

CSETv0 Taxonomy Classifications

Taxonomy Details

Full Description

Wikipedia bots meant to help edit articles through artificial intelligence clash with each other, repetitively undoing each other's edits. The bots are meant to remove vandalism on the open-source, open-input site; however, they have begun to disagree with each other and form infinite feedback loops of correcting each other's edits. Two notable cases are the face-off between Xqbot and Darknessbot, which led to 3,629 edited articles between 2009 and 2010, and the one between Tachikoma and Russbot, which led to more than 3,000 edits. These edits have occurred across articles in 13 languages on Wikipedia, with the most occurring in Portuguese-language articles and the fewest in German-language articles. The whole situation has been described as a "bot-on-bot editing war."
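The feedback loop described above can be illustrated with a minimal sketch. The two rule sets here are hypothetical stand-ins; the real bots (e.g. Xqbot, Darknessbot) apply far more complex heuristics, but the failure mode is the same: each bot treats the other's edit as an error to be corrected.

```python
# Minimal sketch of a bot-on-bot revert feedback loop.
# The replacement rules are hypothetical, chosen only to show
# how two well-intentioned bots with conflicting rules never converge.

def bot_a(text):
    # Bot A normalizes the link to its preferred spelling.
    return text.replace("[[colour]]", "[[color]]")

def bot_b(text):
    # Bot B applies the opposite rule, undoing Bot A's edit.
    return text.replace("[[color]]", "[[colour]]")

def simulate(article, rounds):
    """Alternate the two bots and count how many edits are made."""
    edits = 0
    for i in range(rounds):
        bot = bot_a if i % 2 == 0 else bot_b
        revised = bot(article)
        if revised != article:
            edits += 1
            article = revised
    return edits

# Because neither bot recognizes the other's edit as legitimate,
# every round produces a fresh revert and the loop never settles.
print(simulate("See [[colour]] for details.", 100))  # prints 100
```

Scaled over years of unattended operation, this is how a single disagreement between two rule sets can accumulate thousands of reverts on the same set of articles.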

Short Description

Wikipedia bots meant to remove vandalism clash with each other and form feedback loops, repetitively undoing each other's edits.

Severity

Negligible

Harm Type

Other: Harm to publicly available information

AI System Description

Wikipedia editing bots meant to remove vandalism on the site

System Developer

Wikipedia

Sector of Deployment

Information and communication

Relevant AI functions

Perception, Cognition, Action

AI Techniques

Content editing bot

AI Applications

AI content creation, AI content editing

Location

Global

Named Entities

Wikipedia

Technology Purveyor

Wikipedia

Beginning Date

2001-01-01T00:00:00.000Z

Ending Date

2010-01-01T00:00:00.000Z

Near Miss

Unclear/unknown

Intent

Accident

Lives Lost

No

Data Inputs

Wikipedia articles, edits from other bots

CSETv1 Taxonomy Classifications

Taxonomy Details

Harm Distribution Basis

none

Sector of Deployment

information and communication

Study reveals bot-on-bot editing wars raging on Wikipedia's pages
theguardian.com · 2017

For many it is no more than the first port of call when a niggling question raises its head. Found on its pages are answers to mysteries from the fate of male anglerfish, the joys of dorodango, and the improbable death of Aeschylus.

But ben…

People built AI bots to improve Wikipedia. Then they started squabbling in petty edit wars, sigh
theregister.co.uk · 2017

Analysis An investigation into Wikipedia bots has confirmed the automated editing software can be just as pedantic and petty as humans are – often engaging in online spats that can continue for years.

What's interesting is that bots behave …

Automated Wikipedia Edit-Bots Have Been Fighting Each Other For A Decade
huffingtonpost.com.au · 2017

It turns out Wikipedia's automated edit 'bots' have been waging a cyber-war between each other for over a decade by changing each other's corrections -- and it's getting worse.

Researchers at the University of Oxford in the United Kingdom r…

Wiki Bots That Feud for Years Highlight the Troubled Future of AI
seeker.com · 2017

The behavior of bots is often unpredictable and sometimes leads them to produce errors over and over again in a potentially infinite feedback loop.

Internet Bots Fight Each Other Because They're All Too Human
wired.com · 2017

No one saw the crisis coming: a coordinated vandalistic effort to insert Squidward references into articles totally unrelated to Squidward. In 2006, Wikipedia was really starting to get going, and really couldn’t afford to have…

Danger, danger! 10 alarming examples of AI gone wild
infoworld.com · 2017

Science fiction is lousy with tales of artificial intelligence run amok. There's HAL 9000, of course, and the nefarious Skynet system from the "Terminator" films. Last year, the sinister AI Ultron came this close to defeating the Avengers, …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types in the incident database, variants are not required to have reporting as evidence external to the incident database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents