Facebook’s engagement algorithms are built to amplify and profit from content that generates strong responses from users. And in the days just after the 2020 US election, then-president Donald Trump was spreading the toxic lies of the “Stop the Steal” campaign to millions over social media.
In this instance, Facebook caved to external and internal pressure to do more to stop the proliferation of election disinformation on its many platforms. For five days after the election, Facebook’s News Feed and other features looked very different. They prioritized, or “upranked,” more-credible news sources in an effort to drown out hateful and untrue content. As we know from the testimony of whistleblower Frances Haugen, Facebook’s leadership implemented “break-glass” measures designed to slow the spread of the worst disinformation during important elections and times of crisis.
These safeguards were too little, too late. And they were only temporary. Just days after these measures took effect, Facebook (now Meta) shut down its civic-integrity team and its amplification engine returned to business as usual. This happened just as Trump’s Big Lie about the election outcome was gaining traction online among conspiracy theorists and far-right extremists planning a violent attack on Congress.
The online lies that helped incite the Jan. 6 insurrection share unsettling similarities with the information war now being waged over Russia’s unjustified invasion of Ukraine. The current situation is even more dangerous, given both the extent of Russian state-sponsored disinformation and the horrors and casualties of the war. Meta is again grappling with a disinformation deluge. And its response to Russian propaganda has been as inconsistent and patchy as ever.
Pleas from civil-, digital-, and human-rights groups to protect democratic values around the world have been met with indifference at Meta and other major online platforms. In the conflict between the public good and corporate profits, profits win every time, regardless of the resulting harm. Profiting from hate and misinformation appears to be hardwired into Meta’s business model. This must change.
While Moscow has blocked Meta-owned Facebook and Instagram within Russia, its government accounts are still actively spreading pro-Putin, pro-war disinformation to the world. Moscow has activated its global army of bots and trolls to echo and amplify false claims across social-media networks. Former State Department official Ben Scott — who is a board member of Free Press, the media and tech justice advocacy group where one of us is senior counsel — reports that the “Z” hashtag and imagery supporting Russian aggression are still proliferating across social media, despite these platforms’ claims that they’re taking action to curb them. And The Guardian’s Kari Paul reports that on Facebook 80 percent of the false claims around the “US bioweapons conspiracy theory” have gone unflagged.
Organizations including Free Press and the Real Facebook Oversight Board — a group of activists holding the company accountable in ways that Facebook’s official oversight board does not — have routinely pressured online platforms to do more to combat hate speech and disinformation.
Today, we’re calling on Meta to take three decisive steps to stop amplifying the worst Russian propaganda. These are measures that should also apply to other disinformation campaigns worldwide, including targeted efforts to misinform voters participating in the dozens of national elections that occur in any given year.
First, Meta needs to fix algorithms that promote the most incendiary and hateful content, including disinformation that makes a false case for going to war, covers up or distorts evidence of war crimes, and dehumanizes the innocent victims of combatants. Meta’s business model is built on its ability to increase users’ engagement with its platforms. But keeping eyeballs glued to Russian disinformation comes at a great cost to free societies. Meta needs to stop amplifying hate and lies for profit.
Second, Meta must protect its users equally. Haugen’s testimony made clear that the company prioritizes content moderation in English but is woefully understaffed when it comes to vetting disinformation in the world’s other languages. These include Russian and 25 other languages that are collectively spoken by more than half of the world’s population. Nobel Prize laureate (and Real Facebook Oversight Board member) Maria Ressa, who is a frequent target of attacks on Facebook in the Philippines, has spoken often about this gaping double standard. Meta must devote more resources to content moderation outside the United States and Western Europe.
Third, Meta must be transparent about its amplification and moderation practices. That means making it possible for researchers and journalists to investigate whether the company is adhering to its stated commitments to combat misinformation, protect elections, and “keep people safe.” In the past, when researchers began to touch on any of these sensitive topics, Meta summarily cut off their access or attempted to bury the results of their research.
These break-glass measures should continue even after Meta decides that a crisis has passed. We are in a global disinformation crisis. With Russian aggression escalating and 36 determinative national elections taking place in 2022, the need to act has never been more urgent. The spread of disinformation never ends. Meta’s efforts to safeguard its users shouldn’t either.
Nora Benavidez is the senior counsel and director of digital justice and civil rights at Free Press. Kate Coyer is a fellow with the Democracy Institute’s Center for Media, Data and Society at Central European University and a member of the Real Facebook Oversight Board.