
Executive Summary
The ecosystem of support for Harakaat al-Shabaab al-Mujahideen (al-Shabaab) and the Islamic State in Africa runs across the open web, encrypted messaging applications, niche platforms, and straight through Facebook, largely unbothered by moderation in languages that have long proved problematic for the platform (Image 1). While much of the research on terrorism in Africa has focused on al-Shabaab's operational capability to strike in East Africa and on the Islamic State's rise across the continent, there remains a dearth of research into the digital propaganda machinery of al-Shabaab and the Islamic State and their Africa-focused narratives.
Researchers at the Institute for Strategic Dialogue (ISD) conducted a two-year investigation into the online media ecosystem of al-Shabaab and the Islamic State, analyzing the role of "independent news" outlets and their intersections with hundreds-strong networks of amplifier profiles on Facebook. These profiles were linked to a number of central pages identifying themselves as "media outlets" or "media personalities" operating in Somali, Kiswahili and Arabic. Researchers found that the network of support for al-Shabaab and the Islamic State extended across several platforms, including decentralized messaging applications such as Element and RocketChat, encrypted messaging platforms such as Telegram, as well as Twitter, YouTube and Facebook.
A qualitative cross-platform analysis showed that the most active, networked, and multilingual ecosystem of support for al-Shabaab and the Islamic State existed on Facebook, where profiles and pages classified as "media outlets" shared terrorist content openly, eschewing private groups and profiles. The content ISD researchers observed through these networks was often linked to "media" and "media personality" pages in Somali, Kiswahili and Arabic. It not only violates the platform's community guidelines, but also points to language moderation blind spots that have previously been documented by journalists and whistleblowers alike.
These language gaps continue to hamper Facebook moderation, despite the company's increased investment in it. In October of last year, internal Facebook documents released to the public for the first time indicated that the platform lagged behind in its ability to effectively moderate languages in "at-risk" countries such as Iraq, Ethiopia, India and Pakistan. In Afghanistan, for instance, Facebook researchers claimed that difficulty finding accurate translations of Pashto and Dari undercut effective moderation. Arabic, with its regional variations and dialects, was of similar concern to Facebook. ISD research has previously shown how Arabic-language conspiracies and terror content flummoxed moderators and moderation efforts. Following the revelations in the internal documents, Facebook has attempted to step up moderation of Arabic and to improve its efforts in a number of other languages.
Yet, even with the increased scrutiny of the platform's moderation efforts in languages other than English, ISD research indicates that language moderation gaps not only play into the hands of governments conducting human rights abuses or spreading hate speech, but are also resulting in brazenly open displays of support for terror groups such as al-Shabaab and the Islamic State (Image 3). Emblematic of this issue, researchers found a Somali-language "media outlet" that shared four official al-Shabaab videos through its public page during a three-week stretch of October 2021, collectively garnering 53,300 views and 17,800 shares. These videos carried al-Shabaab's official media outlet branding and were in no way disguised to get past moderators, yet they managed to stay on the platform for months. This report is an attempt to understand the dynamic between moderation gaps and the tactics used to evade moderation, as well as the networks of terror-supporting profiles and pages that sit at the core of the issue.
Recommendations
The networks of Somali, Kiswahili and Arabic pages and profiles on Facebook supportive of al-Shabaab and the Islamic State are a case study in the platform's moderation blind spots in languages other than English, and they illustrate just how terrorist supporters are exploiting these gaps (Image 8). The issue of language parity in moderation efforts is not new; it has been a topic of discussion for five years, and the challenges have become more acute during periods of conflict and civil strife in locales all over the world. Even when using algorithms to root out terror content in other languages, Facebook and others have mistakenly classified non-terrorist content, pointing to a further problem with linguistic moderation. Facebook is not the only platform facing this challenge; all platforms are currently grappling with hate, polarization and terrorism. Facebook simply happens to be among the largest, with high reach in certain geographic contexts such as Kenya, Somalia and neighboring East African countries.
Because of moderation gaps, terrorist content affiliated with al-Shabaab's primary media outlet brand stays on the platform untouched, sometimes for years (Image 9). Enhancing the ability of the artificial intelligence used to find this content would undoubtedly result in more of it being taken down. However, this issue will not be solved by technology alone. Improving the training of moderators, including the entities to which Facebook outsources moderation, will also increase the likelihood that this content is flagged and ultimately stripped from the platform.