The Russian government’s formidable information apparatus has opened the spigot on what has been called a firehose of falsehoods to maintain Russian citizens’ support for Vladimir Putin’s invasion of Ukraine. But self-appointed misinformation warriors around the world are taking to the Internet to fight back. Collectively, they are countering propaganda, debunking deepfakes, and exposing the brutal realities of Russia’s invasion. The latest to enter the fray is itarmy.com.
Created by Ukraine-supporting hacktivists, the website provides instructions and tools to help citizens anywhere in the world take down misinformation-peddling Russian websites, from government pages to TV streaming services.
There are many projects like this. Since the start of the Russian invasion, the IT Army of Ukraine, other groups including Anonymous, and individual hacktivists have been launching wave after wave of cyberattacks against Russia. Hackers have broken into the live feeds of several Russian television stations, interrupting Russian state propaganda coverage to televise independent broadcasters’ actual war footage. In March, a Norwegian computer expert created a website that enables users to send millions of emails with truthful information about the war to Russian email accounts. And in April, hackers leaked massive amounts of data stolen from Russian companies and government agencies, including, it is alleged, personal information on Russian intelligence officers.
Such efforts are breaking through Russia’s information firewall, proving that in a globally interconnected age, no propaganda campaign is unassailable, no information barrier is impregnable, and a balkanized Internet is not possible.
These realizations are sure to cause apprehension within China’s government, which is attempting to completely control its domestic information environment.
Indeed, the so-called Great Firewall of China may not be as impenetrable as it was once believed to be. In the early stages of the COVID-19 pandemic, Chinese officials were surprised by the online outrage triggered by the death of Dr. Li Wenliang, who had taken to the Internet to sound the alarm about the virus in spite of state-backed censorship efforts. And during Shanghai’s extended and draconian lockdown, residents found ways to bypass Chinese censors in order to expose their living conditions online.
If no firewall is unassailable and if state censors or state-sponsored media can be undermined by citizen hackers, are we destined to have endless battles over the spread of dangerous falsehoods? What if there were a global appetite to sustain a healthy information environment — rather than to compete to degrade it?
We are not so naive as to think that we can eliminate the scourge of misinformation. But we have ideas about how to stem its global circulation so that we can restore truth and transparency around critical threats to our health and well-being.
That is why, at the Nobel Prize Summit in 2021, we proposed the creation of a neutral, independent scientific body that would foster global cooperation to safeguard the online information environment. We call this group the International Panel for the Information Environment (IPIE).
Modeled after the Intergovernmental Panel on Climate Change (IPCC), the IPIE would convene leading scientists and researchers from around the world — including Russia and China. After all, as with negotiations about climate, how can we effect real change unless the worst offenders are at the table?
The IPIE would assess the scope of the misinformation crisis, analyze its effects on our societies and the planet itself, and propose solutions. Drawing on experts from a broad range of disciplines, from data science and engineering to neuroscience and sociology, this body would be the beginning of a global effort to save our common information environment. Some of our research colleagues are already calling for such an evidence-based approach to misinformation.
We propose starting with areas of global concern, such as climate change, public health, humanitarian assistance, and economic development.
Our vision: The IPIE would analyze systems of information manipulation and bias, assess the condition of the global information environment, and evaluate the best policy solutions for addressing threats to that environment. Algorithmic bias, manipulation, and misinformation have become existential threats that exacerbate social problems, degrade public life, cripple humanitarian initiatives, and prevent progress on other threats.
A global scientific initiative to empirically monitor, verify, and inspect the actions of technology firms and government agencies would benefit public life around the world.
We know the IPIE will be challenging to establish. How can the world cooperate against misinformation? Well, consider the IPCC. Scientists from all over the world, including the United States, China, and Russia, led various working groups and committees to evaluate research and set global climate goals. The panel surfaced and summarized clear points of consensus about climate change. It took decades to do that work, and the IPCC certainly had its growing pains. But its credible outputs, including its powerful Fifth Assessment Report, led to the Paris Climate Agreement. This milestone could not have been achieved without involving some of the worst government offenders and all types of regimes.
Again, even researchers from closed regimes must be included in a global effort to manage the effects of misinformation. By definition, existential risks to the planet threaten us all, and their costs will be borne by democracies and authoritarian regimes alike. Indeed, we are still living through a global health crisis exacerbated by health misinformation.
Today’s shared, but poisoned, information environment is unprecedented in human history. It is a virtual gathering place where terrorists, racists, human traffickers, and other malignant forces can thrive while vital truths languish. Misinformation undermines international peace, fuels new conflicts, blocks progress on almost every serious issue, inflicts economic damage, and costs lives. Only international cooperation can combat a threat this large.
Sheldon Himelfarb is the founding CEO of the PeaceTech Lab, which grew out of the US Institute of Peace. Phil Howard is director of Oxford University’s Programme on Democracy and Technology.