Incident 7: Wikipedia Vandalism Prevention Bot Loop

Description: Wikipedia bots meant to remove vandalism clash with each other, forming feedback loops in which each bot repeatedly undoes the other's edits.

Alleged: Wikipedia developed and deployed an AI system, which harmed Wikimedia Foundation, Wikipedia Editors, and Wikipedia Users.

Incident Stats

Incident ID
7
Report Count
6
Incident Date
2017-02-24
Editors
Sean McGregor

CSETv0 Taxonomy Classifications

Taxonomy Details

Full Description

Wikipedia bots meant to help edit articles through artificial intelligence clash with each other, repetitively undoing one another's edits. The bots are meant to remove vandalism on the open-source, open-input site; however, they have begun to disagree with each other and form infinite feedback loops of correcting each other's edits. Two notable cases are the face-off between Xqbot and Darknessbot, which spanned 3,629 edited articles between 2009 and 2010, and the battle between Tachikoma and Russbot, which led to more than 3,000 edits. These edits have occurred across articles in 13 languages on Wikipedia, with the most occurring in Portuguese-language articles and the fewest in German-language articles. The whole situation has been described as a "bot-on-bot editing war."
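
The failure mode is a simple control loop with no convergence condition: each bot treats the other's edit as an error and "corrects" it, which re-triggers the first bot. A minimal sketch in Python is below; the bot names and one-line revert rules are hypothetical illustrations, not the actual Xqbot or Darknessbot logic.

```python
# Minimal sketch of a bot-on-bot revert loop. Names and rules are
# hypothetical; real Wikipedia bots apply far richer edit policies.

def make_bot(name, preferred):
    """Build a bot that rewrites the article to its preferred form."""
    def bot(article):
        if article != preferred:
            print(f"{name} reverts {article!r} -> {preferred!r}")
            return preferred
        return article
    return bot

# Two bots whose notions of the "correct" text are mutually inconsistent.
bot_a = make_bot("BotA", "colour")  # prefers British spelling
bot_b = make_bot("BotB", "color")   # prefers American spelling

article = "color"
for _ in range(3):            # unbounded in production; capped here for demo
    article = bot_a(article)  # BotA undoes BotB's last edit
    article = bot_b(article)  # BotB undoes BotA's edit in turn
```

Because neither bot checks who made the last edit or caps its revert count, the cycle only ends when an operator intervenes, which is consistent with the reporting that some of these feuds ran for years.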

Short Description

Wikipedia bots meant to remove vandalism clash with each other, forming feedback loops in which each bot repeatedly undoes the other's edits.

Severity

Negligible

Harm Type

Other: Harm to publicly available information

AI System Description

Wikipedia editing bots meant to remove vandalism on the site

System Developer

Wikipedia

Sector of Deployment

Information and communication

Relevant AI functions

Perception, Cognition, Action

AI Techniques

Content editing bot

AI Applications

AI content creation, AI content editing

Location

Global

Named Entities

Wikipedia

Technology Purveyor

Wikipedia

Beginning Date

2001-01-01

Ending Date

2010-01-01

Near Miss

Unclear/unknown

Intent

Accident

Lives Lost

No

Data Inputs

Wikipedia articles, edits from other bots

CSETv1 Taxonomy Classifications

Taxonomy Details

Harm Distribution Basis

None

Sector of Deployment

Information and communication

Study reveals bot-on-bot editing wars raging on Wikipedia's pages
theguardian.com · 2017

For many it is no more than the first port of call when a niggling question raises its head. Found on its pages are answers to mysteries from the fate of male anglerfish, the joys of dorodango, and the improbable death of Aeschylus.

But ben…

People built AI bots to improve Wikipedia. Then they started squabbling in petty edit wars, sigh
theregister.co.uk · 2017

Analysis: An investigation into Wikipedia bots has confirmed the automated editing software can be just as pedantic and petty as humans are – often engaging in online spats that can continue for years.

What's interesting is that bots behave …

Automated Wikipedia Edit-Bots Have Been Fighting Each Other For A Decade
huffingtonpost.com.au · 2017

It turns out Wikipedia's automated edit 'bots' have been waging a cyber-war between each other for over a decade by changing each other's corrections -- and it's getting worse.

Researchers at the University of Oxford in the United Kingdom r…

Wiki Bots That Feud for Years Highlight the Troubled Future of AI
seeker.com · 2017

The behavior of bots is often unpredictable and sometimes leads them to produce errors over and over again in a potentially infinite feedback loop.

Internet Bots Fight Each Other Because They're All Too Human
wired.com · 2017

No one saw the crisis coming: a coordinated vandalistic effort to insert Squidward references into articles totally unrelated to Squidward. In 2006, Wikipedia was really starting to get going, and really couldn’t afford to have…

Danger, danger! 10 alarming examples of AI gone wild
infoworld.com · 2017

Science fiction is lousy with tales of artificial intelligence run amok. There's HAL 9000, of course, and the nefarious Skynet system from the "Terminator" films. Last year, the sinister AI Ultron came this close to defeating the Avengers, …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
