Incident 197: Facebook Internally Reported Failure of Ranking Algorithm, Exposing Harmful Content to Viewers over Months

Description: Facebook's internal report showed an alleged software bug, lasting at least six months, that caused moderator-flagged posts and other harmful content to evade down-ranking filters, leading to surges of misinformation in users' News Feeds.
Alleged: Facebook developed and deployed an AI system, which harmed Facebook users.

Suggested citation format

AIAAIC. (2021-10-01) Incident Number 197. In McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID
197
Report Count
4
Incident Date
2021-10-01
Editors
Sean McGregor, Khoa Lam

Incident Reports

A group of Facebook engineers identified a “massive ranking failure” that exposed as much as half of all News Feed views to potential “integrity risks” over the past six months, according to an internal report on the incident obtained by The Verge.

The engineers first noticed the issue last October, when a sudden surge of misinformation began flowing through the News Feed, notes the report, which was shared inside the company last week. Instead of suppressing posts from repeat misinformation offenders that were reviewed by the company’s network of outside fact-checkers, the News Feed was giving the posts distribution, spiking views by as much as 30 percent globally. Unable to find the root cause, the engineers watched the surge subside a few weeks later and then flare up repeatedly until the ranking issue was fixed on March 11th.

In addition to posts flagged by fact-checkers, the internal investigation found that, during the bug period, Facebook’s systems failed to properly demote probable nudity, violence, and even Russian state media the social network recently pledged to stop recommending in response to the country’s invasion of Ukraine. The issue was internally designated a level-one SEV, or site event — a label reserved for high-priority technical crises, like Russia’s ongoing block of Facebook and Instagram.

Meta spokesperson Joe Osborne confirmed the incident in a statement to The Verge, saying the company “detected inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases to internal metrics.” The internal documents said the technical issue was first introduced in 2019 but didn’t create a noticeable impact until October 2021. “We traced the root cause to a software bug and applied needed fixes,” said Osborne, adding that the bug “has not had any meaningful, long-term impact on our metrics” and didn’t apply to content that met its system’s threshold for deletion.

For years, Facebook has touted downranking as a way to improve the quality of the News Feed and has steadily expanded the kinds of content that its automated system acts on. Downranking has been used in response to wars and controversial political stories, sparking concerns of shadow banning and calls for legislation. Despite its increasing importance, Facebook has yet to open up about its impact on what people see and, as this incident shows, what happens when the system goes awry.

In 2018, CEO Mark Zuckerberg explained that downranking fights the impulse people have to inherently engage with “more sensationalist and provocative” content. “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content,” he wrote in a Facebook post at the time.

Downranking not only suppresses what Facebook calls “borderline” content that comes close to violating its rules but also content its AI systems suspect of violating them but that still needs human review. The company published a high-level list of what it demotes last September but hasn’t detailed exactly how demotion affects the distribution of affected content. Officials have told me they hope to shed more light on how demotions work but are concerned that doing so would help adversaries game the system.
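
To make the mechanism concrete, here is a minimal, purely illustrative sketch of score-based demotion; it is not Facebook's actual ranking code, and the flag names and demotion multipliers are invented assumptions. The point is simply that a flagged post's ranking score gets multiplied down, so a bug that skips that step leaves flagged posts at full distribution.

```python
# Illustrative toy model of feed demotion, not Facebook's ranking system.
# Flag names and multiplier values are assumptions made for this sketch.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    base_score: float                         # engagement-driven ranking score
    flags: set = field(default_factory=set)   # e.g. {"fact_checked", "borderline"}


# Hypothetical demotion multipliers for flagged content classes.
DEMOTIONS = {
    "fact_checked": 0.2,     # posts from repeat misinformation offenders
    "borderline": 0.5,       # near-violating ("borderline") content
    "pending_review": 0.5,   # AI-suspected violations awaiting human review
}


def ranked_score(post: Post, apply_demotions: bool = True) -> float:
    """Return the feed-ranking score, demoting flagged posts.

    Passing apply_demotions=False simulates the reported failure mode:
    flagged posts keep their full score and gain distribution instead
    of losing it.
    """
    score = post.base_score
    if apply_demotions:
        for flag in post.flags:
            score *= DEMOTIONS.get(flag, 1.0)
    return score


if __name__ == "__main__":
    post = Post("p1", base_score=100.0, flags={"fact_checked"})
    print(ranked_score(post))                          # demoted: 20.0
    print(ranked_score(post, apply_demotions=False))   # bug path: 100.0
```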

In the meantime, Facebook’s leaders regularly brag about how their AI systems are getting better each year at proactively detecting content like hate speech, placing greater importance on the technology as a way to moderate at scale. Last year, Facebook said it would start downranking all political content in the News Feed — part of CEO Mark Zuckerberg’s push to return the Facebook app to its more lighthearted roots.

I’ve seen no indication that there was malicious intent behind this recent ranking bug that impacted up to half of News Feed views over a period of months, and thankfully, it didn’t break Facebook’s other moderation tools. But the incident shows why more transparency is needed in internet platforms and the algorithms they use, according to Sahar Massachi, a former member of Facebook’s Civic Integrity team.

“In a large complex system like this, bugs are inevitable and understandable,” Massachi, who is now co-founder of the nonprofit Integrity Institute, told The Verge. “But what happens when a powerful social platform has one of these accidental faults? How would we even know? We need real transparency to build a sustainable system of accountability, so we can help them catch these problems quickly.”

Clarification at 6:56 PM ET: Specified with confirmation from Facebook that accounts designated as repeat misinformation offenders saw their views spike by as much as 30%, and that the bug didn’t impact the company’s ability to delete content that explicitly violated its rules.

Correction at 7:25 PM ET: Story updated to note that “SEV” stands for “site event” and not “severe engineering vulnerability,” and that level-one is not the worst crisis level. There is a level-zero SEV used for the most dramatic emergencies, such as a global outage. We regret the error.

A Facebook bug led to increased views of harmful content over six months

For the last six months, Facebook engineers have been seeing intermittent spikes in misinformation and other harmful content on News Feed, with posts that would usually be demoted by the company's algorithms being boosted by as much as 30% instead. The cause, according to reporting by The Verge, was a bug that one internal report described as a “massive ranking failure.”

The bug originated in 2019, but its impact was first noticed in October 2021. The company said it was resolved March 11. “We traced the root cause to a software bug and applied needed fixes,” Meta spokesperson Joe Osborne told The Verge.

The bug caused posts that had been flagged by fact-checkers, as well as nudity, violence and Russian state media, to slip through the company's usual down-ranking filters, according to an internal report obtained by The Verge.

Meta and other tech giants have leaned on down-ranking as a more palatable approach to content moderation than removing content altogether. Scholars like Stanford's Renée DiResta have also called on tech giants to embrace this approach and realize that "free speech is not the same as free reach."

In this case, those ranking systems appear to have failed. But Osborne told The Verge the bug “has not had any meaningful, long-term impact on our metrics.”

It will be difficult for those outside of Meta to vet those metrics. Meta has blocked new users from accessing CrowdTangle, one of the core tools researchers and journalists have used to track trends in what's popular on Facebook, and has dismantled the team leading it. And while the company does release reports on the prevalence of certain kinds of policy violations in any given quarter, those reports offer little indication of what's behind those numbers. Even if the report did show an uptick in, say, violence on Facebook, it'd be impossible to know if that's due to this bug or to Russia's invasion of Ukraine or some other global atrocity.

In a statement to Protocol, the company said:

"The Verge vastly overstated what this bug was because ultimately it had no meaningful, long-term impact on problematic content. Only a very small number of views of content in Feed were ever impacted because the overwhelming majority of posts in Feed are not eligible to be down-ranked in the first place. After detecting inconsistencies we found the root cause and quickly applied fixes. Even without the fixes, the multitude of other mechanisms we have to keep people from seeing harmful content — including other demotions, fact-checking labels and violating content removals — remained in place.”

But it's still unclear which posts were boosted due to the bug or how many views they received.

Facebook boosted harmful posts due to 'massive ranking failure' bug

Meta has admitted that a Facebook bug led to a 'surge of misinformation' and other harmful content appearing in users' News Feeds between October and March.

According to an internal document, engineers at Mark Zuckerberg's firm failed to suppress posts from 'repeat misinformation offenders' for almost six months.

During the period, Facebook's systems also likely failed to properly demote nudity, violence and Russian state media amid the war in Ukraine, the document said.

Meta reportedly designated the issue a 'level 1 site event' – a label reserved for high-priority technical crises, like Russia's block of Facebook and Instagram.

MailOnline has contacted Meta for comment, although the firm reportedly confirmed the six-month-long bug to the Verge.

This was only after the Verge obtained the internal Meta document, which was shared inside the company last week.

'[Meta] detected inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases to internal metrics,' said Meta spokesperson Joe Osborne.

'We traced the root cause to a software bug and applied needed fixes. [The bug] has not had any meaningful, long-term impact on our metrics.'

Meta engineers first noticed the issue last October, when a sudden surge of misinformation began flowing through News Feeds.

This misinformation came from 'repeat offenders' – users who repeatedly share posts that have been deemed misinformation by a team of human fact-checkers.

'Instead of suppressing posts from repeat misinformation offenders that were reviewed by the company’s network of outside fact-checkers, the News Feed was instead giving the posts distribution,' the Verge reports.

Facebook accounts that had been designated as repeat 'misinformation offenders' saw their views spike by as much as 30 percent.

Unable to find the cause, Meta engineers could only watch the surge subside a few weeks later and then flare up repeatedly over the next six months.

The issue was finally fixed three weeks ago, on March 11, according to the internal document.

Meta said the bug didn’t impact the company's ability to delete content that violated its rules.

According to Sahar Massachi, a former member of Facebook’s Civic Integrity team, Meta's issue highlights why more transparency is needed in internet platforms and the algorithms they use.

'In a large complex system like this, bugs are inevitable and understandable,' he said.

'But what happens when a powerful social platform has one of these accidental faults? How would we even know?

'We need real transparency to build a sustainable system of accountability, so we can help them catch these problems quickly.'

Last May, Meta (known then as Facebook) said it would take stronger action against repeat misinformation offenders, in the form of penalties such as account restrictions.

'Whether it's false or misleading content about Covid-19 and vaccines, climate change, elections, or other topics, we're making sure fewer people see misinformation on our apps,' the firm said in a blog post.

'We will reduce the distribution of all posts in News Feed from an individual's Facebook account if they repeatedly share content that has been rated by one of our fact-checking partners.'

Last year, the firm said it would start downranking all political content on Facebook – a decision taken based on feedback from users who 'don’t want political content to take over their News Feed'.

Meta renamed itself in October, as part of its long-term project to turn its social media platform into a metaverse – a collective virtual shared space featuring avatars of real people.

In the future, the social media platform will be accessible within the metaverse using virtual reality (VR) and augmented reality (AR) headsets and smart glasses.

Meta admits Facebook bug led to a 'surge of misinformation'

Facebook engineers have belatedly uncovered a significant flaw in the company's downranking system for filtering out harmful content, one that exposed up to half of all News Feed views to potential ‘integrity risks’ for six months.

Reports in The Verge suggest the ‘massive ranking failure’ was first identified last October when engineers battled against a wave of misinformation that threatened to inundate the News Feed. Closer investigations revealed that a ranking system designed to suppress misinformation from flagged accounts, as identified by a team of external fact-checkers, was instead surfacing these posts to audiences.

Leaked correspondence suggests the bug boosted views of malign posts by as much as 30% intermittently until the issue was finally resolved on March 11.

Throughout this six-month period, Facebook’s much-vaunted policing algorithms failed to properly downrank nudity, violence and Russian state propaganda – a stretch that overlapped with Russia’s invasion of Ukraine.

Fielding inquiries from The Verge, Meta spokesperson Joe Osborne described five separate instances of “inconsistencies in downranking” attributed to a “software bug” during which inappropriate material was given raised visibility.

Osborne insists, however, that the episode “has not had any meaningful, long-term impact on our metrics,” stressing that content that met the threshold for deletion was not affected.

The system of downranking has been touted by Facebook as evidence that self-regulation is effective, heading off calls for new legislation to curb the spread of ‘sensationalist and provocative’ content that typically attracts the most attention.

Until now, Facebook has boasted of the success its algorithms have had identifying ‘borderline’ content that skirts the boundaries of acceptability in areas such as hate speech, flagging suspected infractions for manual review.

A recent report found that hate speech was present in six out of every 10,000 Facebook views.

Facebook system designed to smother harmful misinformation actually spread it