CSETv1 Taxonomy Classifications
- Incident Number: 13

CSETv1_Annotator-1 Taxonomy Classifications
- Incident Number: 13
- Special Interest Intangible Harm: yes
- Date of Incident Year: 2017
- Estimated Date: Yes
- Multiple AI Interaction: no
- Embedded: no
CSETv0 Taxonomy Classifications
- Problem Nature: Specification, Robustness
- Physical System: Software only
- Level of Autonomy: High
- Nature of End User: Amateur
- Public Sector Deployment: No
- Data Inputs: Online comments
GMF Taxonomy Classifications
- Known AI Goal Snippets: "However, computer scientists and others on the internet have found the system unable to identify a wide swath of hateful comments, while categorizing innocuous word combinations like 'hate is bad' and 'garbage truck' as overwhelmingly toxic." (Related Classification: Hate Speech Detection)
CSETv1_Annotator-3 Taxonomy Classifications
- Incident Number: 13
Incident Reports
Yesterday, Google and its sister Alphabet company Jigsaw announced Perspective, a tool that uses machine learning to police the internet against hate speech. The company heralded the tech as a nascent but powerful weapon in combatting onlin…
In the examples below on hot-button topics of climate change, Brexit and the recent US election -- which were taken directly from the Perspective API website -- the UW team simply misspelled or added extraneous punctuation or spaces to the …
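The character-level edits the UW team describes are simple enough to sketch. The function below is an illustrative reconstruction, not their code: it duplicates one letter inside each longer word and appends stray punctuation, edits that leave a comment readable to a human but can shift a character-sensitive toxicity score.

```python
import random

def perturb(text: str, seed: int = 0) -> str:
    """Misspell longer words (duplicate one inner letter) and add
    extraneous punctuation, in the spirit of the attack described
    above. Illustrative only."""
    rng = random.Random(seed)
    words = []
    for word in text.split():
        if len(word) > 4 and word.isalpha():
            i = rng.randrange(1, len(word) - 1)
            word = word[:i] + word[i] + word[i:]  # e.g. "toxic" -> "toxiic"
        words.append(word)
    return " ".join(words) + " ."

print(perturb("climate change is happening"))
```

Because the perturbed text is still obvious to readers, scoring it much lower than the original is exactly the robustness gap the researchers demonstrated.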
The Google AI tool used to flag “offensive comments” has a seemingly built-in bias against conservative and libertarian viewpoints.
Perspective API, a “machine learning model” developed by Google which scores “the perceived impact a comment…
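For context, Perspective is queried over a REST endpoint. Below is a minimal sketch of the request body, assuming the publicly documented `commentanalyzer` v1alpha1 shape; an API key is required to actually send it, so the sketch only constructs the payload.

```python
import json

# Endpoint shape assumed from Perspective's public documentation.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text: str) -> dict:
    """Build an analyze request asking for a TOXICITY score; other
    attributes can be added under requestedAttributes."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_request("hate is bad")
print(json.dumps(payload))
# A POST of this body to API_URL (with a key) returns the score at
# attributeScores.TOXICITY.summaryScore.value, a number in [0, 1].
```

It is this summary score, the "perceived impact" number, that the reports above found assigning high toxicity to innocuous phrases.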
Don’t you just hate how vile some people are on the Internet? How easy it’s become to say horrible and hurtful things about other groups and individuals? How this tool that was supposed to spread knowledge, amity, and good cheer is being us…
Last month, I wrote a blog post warning about how, if you follow popular trends in NLP, you can easily accidentally make a classifier that is pretty racist. To demonstrate this, I included the very simple code, as a “cautionary tutorial”.
T…
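The failure mode that post warns about can be shown in miniature: train a linear sentiment direction on word embeddings, and neutral identity terms inherit the sentiment of their embedding neighbors. The vectors and lexicon below are invented two-dimensional stand-ins (real demonstrations use pretrained embeddings such as GloVe), so only the mechanism, not the numbers, is meaningful.

```python
import numpy as np

# Toy stand-ins for pretrained word embeddings; invented for illustration.
EMBEDDINGS = {
    "good":  np.array([ 1.0,  0.2]),
    "great": np.array([ 0.9,  0.1]),
    "bad":   np.array([-1.0, -0.1]),
    "awful": np.array([-0.9, -0.2]),
    # A neutral word placed near negative words in embedding space --
    # the accidental association the post demonstrates.
    "mexican": np.array([-0.4, -0.1]),
}

# "Train" a sentiment direction from a tiny seed lexicon.
pos = np.mean([EMBEDDINGS["good"], EMBEDDINGS["great"]], axis=0)
neg = np.mean([EMBEDDINGS["bad"], EMBEDDINGS["awful"]], axis=0)
direction = pos - neg

def sentiment(sentence: str) -> float:
    """Average the known word vectors and project onto the direction."""
    vecs = [EMBEDDINGS[w] for w in sentence.lower().split() if w in EMBEDDINGS]
    return float(np.mean(vecs, axis=0) @ direction)

print(sentiment("great food"))    # positive
print(sentiment("mexican food"))  # negative, despite neutral meaning
```

Nothing in the pipeline is overtly biased; the bias rides in on the embeddings, which is why the post calls it an easy accident.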
As politics in the US and Europe have become increasingly divisive, there's been a push by op-ed writers and politicians alike for more "civility" in our debates, including online. Amidst this push comes a new tool by Google's Jigsaw that u…
A recent, sprawling Wired feature outlined the results of its analysis on toxicity in online commenters across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." Accordi…
Abstract
The ability to quantify incivility online, in news and in congressional debates, is of great interest to political scientists. Computational tools for detecting online incivility for English are now fairly accessible and potentiall…
According to a 2019 Pew Center survey, the majority of respondents believe the tone and nature of political debate in the U.S. have become more negative and less respectful. This observation has motivated scientists to study the civility or…
Similar Incidents
- Biased Sentiment Analysis
- Gender Biases in Google Translate
- TayBot