Incident 82: #LekkiMassacre: Why Facebook labelled content from October 20 incident ‘false’

Description: Facebook incorrectly labels content relating to a confrontation between #EndSARS protesters and the Nigerian army as misinformation.
Alleged: Facebook developed and deployed an AI system, which harmed Facebook users and Facebook users interested in the Lekki Massacre incident.

Suggested citation format

Giallella, Thomas. (2020-10-21) Incident Number 82. In McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID
82
Report Count
1
Incident Date
2020-10-21
Editors
Sean McGregor, Khoa Lam


CSET Taxonomy Classifications

Taxonomy Details

Full Description

In October 2020, Facebook incorrectly labelled content from the #LekkiMassacre2020 incident as false. The incident was a confrontation between #EndSARS protesters and the Nigerian army at the Lekki toll gate in Lagos, Nigeria. Images and videos posted on social media about the event were marked as false by Facebook's fact-checking system, a hybrid of human moderators, third-party fact-checking organisations, and AI.

Short Description

Facebook incorrectly labels content relating to a confrontation between #EndSARS protesters and the Nigerian army as misinformation.

Severity

Moderate

Harm Type

Harm to social or political systems, Harm to civil liberties

AI System Description

Facebook's content moderation system is a hybrid of AI and human moderators. The AI assists in detecting hate speech, prioritizing the review queue so that moderators can handle sensitive content more quickly, and detecting content similar to posts already marked as containing 'false information'.
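To illustrate the "similar content" step, the sketch below shows one way a similarity-based label propagator could work. It is a hypothetical illustration, not Facebook's implementation: the `Post` model, the embedding vectors, and the 0.9 similarity threshold are all assumptions.

```python
# Hypothetical sketch of similarity-based label propagation, NOT
# Facebook's actual system: the data model, encoder output, and
# threshold are illustrative assumptions.
from dataclasses import dataclass
from math import sqrt
from typing import List, Optional

@dataclass
class Post:
    post_id: str
    vector: List[float]          # embedding from some image/text encoder (assumed)
    label: Optional[str] = None  # e.g. "false", set by human fact-checkers

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def propagate_label(new_post: Post, checked: List[Post],
                    threshold: float = 0.9) -> Post:
    """Copy a fact-check label onto near-duplicate content.

    A false positive here mirrors this incident's failure mode: genuine
    imagery that merely resembles previously debunked content inherits
    the 'false' label automatically, without fresh human review.
    """
    for prior in checked:
        if prior.label and cosine(new_post.vector, prior.vector) >= threshold:
            new_post.label = prior.label
            break
    return new_post
```

The design risk the sketch makes visible is that propagation is fully automatic: once one fact-check rating exists, sufficiently similar but genuine imagery can inherit the 'false' label without further human review, which matches the failure pattern reported in this incident.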

System Developer

Facebook

Sector of Deployment

Information and communication

Relevant AI functions

Perception, Cognition, Action

AI Techniques

machine learning, supervised learning, open-source

AI Applications

content moderation, recommendation engine

Location

Lagos, Nigeria

Named Entities

Facebook, #EndSARS, Nigerian Army, Nigeria

Technology Purveyor

Facebook

Beginning Date

2020-10-21T07:00:00.000Z

Ending Date

2020-10-21T07:00:00.000Z

Near Miss

Unclear/unknown

Intent

Accident

Lives Lost

No

Data Inputs

user content (text posts, images, videos)

Incidents Reports

On Wednesday, October 21, 2020, several posts containing images related to the incident that occurred at the Lekki toll gate in Lagos, Nigeria on Tuesday were flagged as misinformation on Facebook and Instagram.

The shocking incident, in which an unconfirmed number of protesters were killed by the military, began on Tuesday evening. The victims were #EndSARS protesters who remained at the protest ground after the Lagos State governor, Babajide Sanwo-Olu, imposed a 24-hour curfew, slated to start at 9 pm, to quell the activities of protest hijackers in the state.

Eyewitness reports revealed that men clad in military uniforms — believed to be officers of the Nigerian army — approached and shot into the crowd while the #EndSARS protesters waved the Nigerian flag and sang the National Anthem.

This gruesome event, which many believe was premeditated given the preceding occurrences (uninstalling the CCTV cameras, switching off the billboard, and disconnecting the street lights leading to the location), has since been tagged 'fake news' by the Nigerian Army.

Despite this, the Army has yet to say what exactly the 'real news' about that event is.

Meanwhile, a popular disc jockey, DJ Switch, shared live scenes on Instagram of gunshot victims struggling to stay alive and bodies wrapped in bloodied flags as the events unfolded. Related images were subsequently circulated across social media platforms, but this did not seem to sit well with the global tech company, Facebook, across its platforms.

On Instagram and Facebook, images of Lekki Concession Company (LCC) staff who allegedly came to uninstall the CCTV cameras from the scene a few hours before the incident, protesters standing together holding the Nigerian flag, a bloodied Nigerian flag, reported survivors at the hospital, and corpses from the scene were all flagged as false information, implicitly suggesting that the October 20 military attack was fake.

While the images are not removed outright, they are obscured behind a caution that reads, "False Information. The same information was checked in another post by independent fact-checkers."

Facebook uses a hybrid system of human moderators and Artificial Intelligence (AI) to check misinformation. It partners with certified independent third-party fact-checking organisations (over 27 partners across 88 countries) to identify, review, and confirm potentially inaccurate content and curb viral misinformation. Such content is usually first flagged as inaccurate by user feedback, Facebook's signal technology, or fact-checkers.

Once content is confirmed inaccurate, its visibility across timelines is reduced, and where it still appears, it is labelled according to the fact-checkers' rating: false, partly false, altered, missing context, satire, or true.

However, publishers can dispute a rating and request that the content's validity be rechecked. Users cannot dispute the rating on content they did not create.
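To make the workflow above concrete, here is a minimal sketch of the rate, demote, label, and dispute steps. It is an assumption-laden illustration, not Facebook's documented behaviour: the six rating names come from the article, while the demotion factors, label wording, and `can_dispute` rule are hypothetical placeholders.

```python
# Hedged sketch of the rating workflow described above. Rating names are
# from the article; demotion factors, label text, and the dispute rule
# are illustrative assumptions, not Facebook's documented values.
from enum import Enum
from typing import Optional, Tuple

class Rating(Enum):
    FALSE = "false"
    PARTLY_FALSE = "partly false"
    ALTERED = "altered"
    MISSING_CONTEXT = "missing context"
    SATIRE = "satire"
    TRUE = "true"

# Hypothetical demotion factors: the fraction of timeline reach removed.
DEMOTION = {
    Rating.FALSE: 0.8,
    Rating.PARTLY_FALSE: 0.5,
    Rating.ALTERED: 0.8,
    Rating.MISSING_CONTEXT: 0.2,
    Rating.SATIRE: 0.0,
    Rating.TRUE: 0.0,
}

def apply_rating(base_reach: float, rating: Rating) -> Tuple[float, Optional[str]]:
    """Reduce a post's visibility and attach a warning label per its rating."""
    reach = base_reach * (1.0 - DEMOTION[rating])
    label = None if rating is Rating.TRUE else (
        f"Rated {rating.value} by independent fact-checkers")
    return reach, label

def can_dispute(rating: Rating, requester_is_publisher: bool) -> bool:
    """Per the article, only the content's publisher may request a re-check."""
    return requester_is_publisher and rating is not Rating.TRUE
```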

Of course, these fact-checking firms are not without accusations of bias. Besides, Facebook has yet to fully define what 'inaccurate information' means. The fact-checking procedure became necessary after the platform was used to push hate speech that fuelled the mass killings of the Rohingya minority in Myanmar, a role Facebook admitted in 2018.

According to Facebook, independent fact-checkers responsible for Nigeria include Africa Check Nigeria, AFP Nigeria, and Dubawa. When Africa Check joined the list of partners in 2018, it intended to focus on bogus health cures, false crime rumours, pyramid schemes, and other kinds of content that can lead to poor decisions and physical harm.

Ironically, one of Facebook’s partners, AFP, acknowledges Amnesty International Nigeria’s report about the killings in Lagos.

So it is not entirely clear how Facebook concluded that the #LekkiMassacre-related content was false. In fact, while the posts are flagged on Facebook and Instagram, YouTube and Twitter appear to accept them. Twitter in particular has been the main platform used to push the #EndSARS movement, with validation from Twitter CEO Jack Dorsey.

Misinformation is not new on social media, especially now that social platforms compete with traditional news media. It appears Facebook has yet to get its act together on what it considers false information. In the meantime, it is up to users to rely on trusted newsrooms.
