By Annie Lentz,
Max Kampelman Fellow
Popularly and ambiguously dubbed “fake news,” malign efforts to spread falsehoods are often wrongly lumped together with politicians’ diatribes against negative media coverage. Yet well-orchestrated disinformation campaigns do exist around the world, using algorithms, social platforms, and advertisements as a means of deceiving the public and undermining democracy.
Due to its proliferation and the widespread attention it has received, the definition of so-called “fake news” has been lost. Even the terms used to define it are ambiguous. In fact, misinformation and disinformation are not synonymous. Misinformation refers to the inadvertent spread of false information, while disinformation refers to the purposeful circulation of deceptive news stories by both state and nonstate actors.
Disinformation plagues the modern world in increasingly sophisticated and pervasive ways largely due to widespread use of social media. Whether it’s shared through Twitter, Facebook, Instagram, or WhatsApp, fake news is easy to share, difficult to identify, and almost impossible to stop.
Easy to Share
The trickle-down effect of counterfeit news campaigns is massive. A single fake story has the potential to reach millions, propagated by bots, trolls, and the manipulation of social media content algorithms. For example, a heavily edited interview from conservative CRTV portrayed a fictional conversation between one of its hosts and Rep. Alexandria Ocasio-Cortez in which the Congresswoman appeared to admit she knew nothing of the legislative process. Although CRTV eventually said the video was satire, it was viewed almost 1 million times in the 24 hours before that clarification.
This was not an isolated incident. Thanks to the universality of social media, with Facebook and Twitter having a global presence economically and socially, cultures around the world are all susceptible to manipulation through such platforms.
Following the 2019 European Union elections—second only to India as the largest democratic elections in the world—the European Commission documented “ongoing disinformation campaigns” by Russian sources. Officials went on to demand Facebook, Google, and Twitter “step up their efforts” in combatting fake news; they classified the fight as enduring, saying, “Malign actors constantly change their strategies. We must strive to be ahead of them.”
The influence and impact of Russian disinformation efforts remain unknown, and future elections in both the EU and elsewhere therefore remain at risk.
Difficult to Identify
Several aspects of the communication space make disinformation hard to identify. When reading content from a seemingly trustworthy source, most people naturally accept the information as accurate, even absent any evidence of journalistic rigor. However, that trust is often misplaced: those creating and spreading propaganda are well-versed in mimicking reputable sources in structure and design.
Moreover, the more specific the topic and the narrower the scope, the easier it is for disinformation to spread, as consumers lack the background and context to identify red flags, which are becoming ever harder to detect. According to Politifact, earlier this year a Facebook post about Senate Majority Leader Mitch McConnell, claiming he was trying to take away health care from millions of Americans, went viral. The claim mischaracterized his stance on federal funding for health care and falsified his personal history with the program. Regardless, the false narrative spread to thousands of people who lacked the in-depth background knowledge to recognize the inaccuracy.
Disinformation is not limited to false news stories or phony websites; it also extends to doctored photos and videos, like the CRTV interview previously mentioned. The Washington Post’s guide to fact-checking video makes the point, “Seeing isn’t believing.”
Even high-profile politicians can be fooled by such disinformation. One doctored video appearing to show Speaker Nancy Pelosi drunk was retweeted by President Trump, who shared the false narrative with his more than 62.8 million followers.
Even content originating from seemingly trustworthy sources can be deceptive. For example, pro-Brexit campaigns from the UK Independence Party (UKIP) during the 2016 EU referendum used misleading photos (actually taken at the Slovenian border) to tell a false story of thousands of immigrants pouring into the UK. Though the poster and campaign were widely condemned, it is impossible to measure the number of voters who may have been influenced. However, the very existence of such misleading material threatened the democratic integrity of the referendum.
The Russia Problem
While there are many guilty parties—like those who spread doctored stories and videos leading up to India’s elections in April and May of 2019 and incited hatred between Buddhists and Muslims in Sri Lanka and Malaysia on Facebook—the biggest culprit behind the growth of widespread disinformation is the Russian Government. The Kremlin has used sophisticated disinformation campaigns to justify its actions in Crimea and eastern Ukraine, interfere in the 2019 European Union elections, and attempt to influence the 2016 U.S. presidential elections.
However, Kremlin interference isn’t isolated to politics. RT America, cited as a principal meddler in the 2016 presidential elections, aired a campaign of stories about health risks associated with 5G signals, none of which were supported by scientific evidence. Such efforts from “the Kremlin’s principal international propaganda outlet” match what experts cite as the Kremlin’s ultimate goal: to amplify voices of dissent, sow public discord, and exacerbate social divides.
Impossible to Stop … or Not?
There is no global police force to defend against disinformation. There are platform-specific efforts, such as Facebook’s regulations for political advertising; grassroots efforts, like Factitious, an online game designed to teach students to identify fake news stories; and coalitions like the one formed by Facebook, Google and Twitter after the March 15 massacre in Christchurch, New Zealand, when the tech giants signed an agreement with world leaders to fight hate speech online.
However, with the amount of disinformation growing every day and no unified, cohesive approach from either the public or the private sector to aggressively and actively combat online propaganda, these efforts are akin to putting a Band-Aid on a broken leg.
Any attempt to regulate disinformation is constrained by the right to free speech. If the response is too broad, whether from a corporation like Facebook or a government entity, it quickly challenges the fundamental freedoms afforded to citizens. On the one hand, stopping falsehoods from spreading and inundating social media benefits democracy and freedom around the world. On the other hand, the people’s right to free speech must be respected. Any meaningful effort to battle disinformation must carefully balance the protection of the community against the protection of the individual.
In addition, those best positioned to fight disinformation, private companies like Facebook and Twitter, have no true legal obligation to do so and may have competing profit interests. Until Congress shone a light on this problem, social media platforms made no serious effort to counter foreign influence. Because social platforms and their users retain the right to freedom of expression, Congress has little ability to require platforms to undertake any specific measures. That has not stopped legislators from trying, however.
There are other solutions. One is promoting better media literacy among citizens, so they can more easily identify false or misleading information. Another is “sourcing” news stories, so readers know the true origin of a story—a story about a local issue in Kansas may in fact emanate from Russia, for example. The content would still be available, but readers would have a better awareness of potential manipulation by outside actors. To combat the ripple effect of disinformation, media self-regulation to verify sources and stories before publishing them is another effective tool.
The most important and most effective way to confront disinformation is by understanding it. Through events like the 2017 Helsinki Commission hearing on Russian Disinformation, and OSCE Representative on Freedom of the Media Harlem Désir’s efforts to lead the OSCE in combatting disinformation, additional progress can be made.
Disinformation is a disease to which no one is immune; the longer the virus goes untreated, the worse it becomes.