The evolving, uncertain circumstances of the COVID-19 pandemic created the conditions for what the World Health Organization (WHO) has termed an ‘infodemic’: ‘an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it’. The imposition of lockdowns in many countries, together with physical distancing measures, pushed a growing number of citizens online, with more than 40 per cent of respondents to a Global Web Index survey conducted in May 2020 stating that they were spending more time on social media because of the pandemic. But the world found itself scrambling for answers in an information environment where professional gatekeepers were replaced by algorithmically driven and opaque infrastructures, where the trust deficit between citizens and political leaders had widened, and where the increasingly polarized public discourse was less conducive to nuanced, complex debates on matters such as what effective pandemic management looks like.
Digitally scaled misinformation and disinformation were already a challenge of global proportions when the pandemic hit, and contested knowledge – such as that relating to an unknown virus – was more susceptible to abuse by malicious actors who could easily exploit the pervading ambiguity. The pandemic exacerbated certain pre-existing trends, but also brought into sharp focus changing geopolitical dynamics.
COVID-19 disinformation is not just a public health issue; it is also a security issue. The increasing weaponization of disinformation and the control of the information space by influential political figures have demonstrated how democracies’ media environments have grown more susceptible to cyber influence operations (CIOs) than the closed ecosystems of non-democracies or weak democracies. Tech companies have also evolved into geopolitical actors in their own right, advancing their interests by lobbying governments or using their market power to shape public opinion. Meanwhile, the diminishing power of a press affected – like so many sectors – by the pandemic is alarming, precisely because it is one of the few actors holding not just governments but also big tech to account.
COVID-19 trends and security concerns
Cracking down on press freedom
In the aftermath of CIOs across the world, various governments moved to introduce anti-disinformation legislation, raising concerns over its potential use for clamping down on press freedom and opposition voices, both key components of functional democracies. Unfortunately, the COVID-19 pandemic and the state of exception it introduced lent credence to these concerns, with emergency laws and new regulations being used to suppress freedom of expression and criticism of governments’ handling of the crisis.
The UN High Commissioner for Human Rights, Michelle Bachelet, has criticized Bangladesh, Cambodia, China, India, Indonesia, Malaysia, Myanmar, Nepal, the Philippines, Sri Lanka, Thailand and Vietnam for using emergency and anti-disinformation legislation to clamp down on freedom of expression or stifle criticism of their COVID-19 response. The Council of Europe has also criticized this worrying trend in Asia. Iran, Turkey and Hungary have displayed similar trends, while in Russia police arrested protesting journalists, effectively using public health restrictions to impinge on freedom of assembly.
States should refrain from clamping down on opposition forces, but it is worth bearing in mind that it was tech companies’ inability to contain disinformation that led to the increasing securitization of the digital media space, used in turn as a pretext for the political suppression we are witnessing.
Wielding geopolitical power via CIOs
As a truly global common denominator, the COVID-19 pandemic led to the globalization of CIOs, whose apparent aim has predominantly been to undermine adversaries by misrepresenting their handling of the crisis, promoting authoritarian solutionism and deflecting responsibility. Both China and Russia have deployed COVID-19-related CIOs that appear intended to undermine trust in the effectiveness of EU institutions, improve their own image and sow confusion about the virus’s origin.
CIOs do not even have to be based on false claims. An investigation by the Australian Strategic Policy Institute (ASPI) uncovered coordinated inauthentic activity by Chinese-speaking but unidentified actors that sought to skew the public debate by the automated amplification of authentic content that criticized the US response to the pandemic.
Equally worrying are authoritarian-driven CIOs exploiting the COVID-19 crisis with the apparent aim of undermining support for democracies and geopolitical competitors. According to the Oxford Internet Institute (OII), social media distribution networks operated by state-backed outlets in China, Russia, Iran and Turkey, and generating millions of engagements, sought to portray democracies as incompetent in their response to the pandemic and, as a counterpoint, to show authoritarian regimes as successful. Some states focused on regional geopolitical rivals, with Saudi Arabia, for instance, leveraging social media posts by its state media to criticize Qatar, Iran and Turkey.
There was partial alignment in terms of narratives between Iran, Russia and China. The Russian state television network RT’s English-language social media accounts portrayed a positive image of both Russia’s and China’s reactions to the pandemic, and the Iranian influence group International Union of Virtual Media (IUVM) took a pro-China line, attacking Western media for their coverage of the crisis. The head of Iran’s Islamic Revolutionary Guard Corps also engaged in conspiracy theorizing, claiming in March 2020 that COVID-19 might be a result of a US biological attack – a narrative widely circulated in China.
In the absence of meaningful deterrence, and with an information space that is not just polluted but also open to abuse, state and non-state actors are likely to feel compelled to deploy their own CIOs to counteract adversaries. CIOs enable countries to counterbalance hard power and economic asymmetries, while the plausible deniability of the opaque information space diminishes the risks of escalation and sanctions.
The value of obscuring attribution was also evident in Russia’s deployment of COVID-19 propaganda, often spreading its messages through websites and ‘inauthentic personas’ – i.e. multi- or single-use fake accounts impersonating journalists and contributing op-eds and articles. In the current context, when the economic implications of COVID-19 are threatening the survival of established press outlets, the rise of websites purporting to be news outlets but that in reality act as vectors for disinformation is quite alarming. Technological developments such as OpenAI’s GPT-3 are also likely to raise the threat level.
Conspiracy theories go mainstream
A trend of particular concern that has implications for public health and the containment of COVID-19 is the proliferation of conspiracy theories and their move from the fringes to the mainstream of popular discourse. Several factors have created fertile ground for conspiracy theories to take hold: digital platforms’ combination of relativization – the flattening of communicative hierarchies, so that all sources appear equivalent – and the algorithmic boosting of sensational and emotive content that raises online engagement metrics; dwindling trust in institutional authority, precipitated by a series of mismanaged crises; and the fragmentation of the news media landscape. Many people are now more likely to trust what they find through their own ‘research’ and unverified sources on social media than to trust professional journalists.
A variety of conspiracy theories and clusters have come to the fore to provide simplistic – albeit totally unfounded – narratives that, in the context of an often incoherent or complex pandemic response, have proven compelling to growing numbers of people. Online communities active before the pandemic, such as ‘anti-vaxxers’ and QAnon – a group identified as a terrorist threat by the FBI yet embraced by Donald Trump in the latter part of his presidency – have coalesced to disseminate content that threatens public safety. QAnon, the conspiracy theory network propagating the notion of a ‘deep state’ plotting to torpedo Trump’s political career, has been expanding both geographically and politically, and has been cited as contributing to the violent storming of the US Capitol in January 2021.
Conspiracy theories linking 5G technology to the coronavirus have spread all the way from Europe to Latin America, and have led to various acts of violence. Another strand of theories attacks George Soros and Bill Gates as symbols of the most co-opted term in populist rhetoric, the ‘elite’, claiming they seek population control via the spread of COVID-19 or are hiding a cure. Conspiracy theories portraying ‘elites’ as the sole culprits of the crisis or, even worse, targeting ‘othered’ scapegoated populations and minorities have appeared in the past, but in the era of social media the speed of their dissemination far exceeds the capacity to counteract or contain them. Right-wing groups have extensively used COVID-19 conspiracy theories and disinformation to influence public opinion on policy issues or to target minorities.
Conspiracy theories can be heavily politicized, providing ammunition for rising and empowered populist leaders and for radical movements that thrive on division. Conspiratorial narratives have been disseminated by the Italian and the French right wings to foment racism and anti-immigration sentiment, and in India COVID-19 disinformation has been weaponized to target Muslim groups.
By injecting into popular debate the fantasy that obscure forces are working against the public interest, conspiracy theorists have been able to make the wearing of face masks and lockdown measures a ‘wedge issue’ – a position cutting across party lines and framed in zero-sum terms whereby one side is wholly right and the other wholly wrong, with no space for compromise. This approach is obviously dangerous for a complex social, economic and health issue such as a pandemic. In attempting to shore up their position against rational evidence, COVID-19 conspiracy theorists seek to undermine the credibility of authorities and officials. In the US, the extreme-right ‘Boogaloo’ network has used anti-establishment false narratives to animate and recruit disenfranchised Americans, leading in some instances to real harm.
Last but not least, conspiracy theories and the environment of pervasive ambiguity they create also provide an enabling environment for CIOs by state actors. In March 2020, for instance, China’s foreign ministry spokesman Zhao Lijian lent credence to a conspiracy theory suggesting that COVID-19 was brought to Wuhan by the US army.
Domestic information control
The exertion of influence by means of communication strategies has as much to do with the information hierarchies that are presented as with the facts that are deliberately omitted. State actors have attempted to report information selectively, in what could be an effort to avoid public anger. China and Saudi Arabia, for example, deployed social media to boost reporting of COVID-19 recovery rates rather than transmissions.
UNESCO has highlighted that public access to information is a fundamental right that becomes even more important during a health emergency. Even though neither China nor Saudi Arabia is a signatory to the Aarhus Convention, it is worth considering whether the selective representation of facts would contravene a ratifying party’s obligation to provide the public with the information necessary to prevent or mitigate harm during a health crisis.
In Brazil – also a non-signatory – official government channels have been used to disseminate messages that contravene WHO recommendations, with President Jair Bolsonaro promoting false information about COVID-19 cures and effects while staunchly opposing lockdowns that could aggravate the economic recession that hit the country. Bolsonaro was not the only political leader who attempted to avoid difficult measures that could dent economic recovery and, consequently, voter support. In the US, Donald Trump also repeatedly downplayed the effects of the virus, even when he fell victim to it and was himself hospitalized.
The pandemic struck in an already datafied world, so statistics on recoveries, deaths and new cases became tools for states to broadcast their public health management credentials and, conspicuously, their legitimacy. In some instances, when those numbers were not favourable or for reasons of political expediency, political figures identified a variety of scapegoats, from immigrants to China. In the case of the US, for example, the Trump administration’s constant blaming of China for the COVID-19 pandemic seems to have had some effect on public sentiment: Pew research published in mid-2020 showed that 78 per cent of Americans blamed China’s initial handling of the novel virus for the global outbreak.
Steps taken
Tech companies respond
As the infodemic spread online, tech companies adjusted their policies and launched new features, taking a markedly more decisive approach than was evident in their previous stance towards political disinformation. For example, following widespread outcry at QAnon’s activities, in July 2020 Twitter banned the conspiracy theory group, with Facebook following suit. In the aftermath of the US Capitol riots, online platforms have doubled down on efforts to block QAnon groups.
There now exists a patchwork of policies across platforms for downranking, flagging or removing health disinformation content, developed as the exasperation of governments, the public and civil society grew. Tech companies have predominantly prioritized elevating authoritative guidance from official public health authorities and WHO, creating information hubs or providing free advertising credits to governments and health bodies. However, an Avaaz investigation into the efficacy of Facebook’s approach was damning: it found that content from the top 10 websites spreading health misinformation had almost four times as many estimated views as content from organizations such as WHO and, in the US, the Centers for Disease Control and Prevention (CDC). Earlier research by the Institute for Strategic Dialogue (ISD) think-tank also showed that engagement with disinformation websites far surpassed interactions with health bodies. Facebook’s moderation policies may be no match for the damage done by its own algorithm.
Researchers at OII also discovered that junk news websites publishing harmful content in relation to COVID-19 deployed targeted search engine optimization (SEO) strategies to achieve high ranking in search results.
Equally uneven is the actual implementation of existing policies. In the Facebook sample investigated by Avaaz, only 16 per cent of misinformation carried a warning label. Earlier research showed that a substantial proportion of misinformation on Twitter, YouTube and Facebook lacked any flags even after it had been debunked by fact-checkers.
Even where flags identifying content as false do exist, their effectiveness remains open to question. In the context of information on COVID-19, a Cornell University study found that enhanced corrections – providing contextual information, for example – are more effective than simple flags in countering misperceptions. Contextual information also decreased interviewees’ propensity to share false information, but a substantial proportion of respondents (40 per cent) continued to believe false stories even when contextual information was provided.
Despite social media companies’ efforts to date, it is clear that problems such as disinformation going viral persist, and are unlikely to go away unless the platforms radically change their business model – a move that would hurt their bottom line and that they therefore have every incentive to avoid. CIOs tend to take advantage of platforms’ business models, as well as of the opacity they allow in terms of actors, propagation patterns and differentiated messages. Others have suggested that social media’s reinforcement of individual mental models, via user profiling and algorithmic personalized recommendations, may impair the public’s situational awareness at a time when a common ground truth would assist the group decision-making necessary to overcome the crisis.
The World Health Organization mobilizes stakeholders
At the international level, WHO has launched the Information Network for Epidemics (EPI-WIN) initiative to provide the public with timely, accurate information on COVID-19, to convene interdisciplinary meetings with key stakeholders, and to put forward a framework for managing infodemics. WHO’s approach to tackling disinformation and the infodemic is to understand the dynamics and propagation patterns, who is targeted, and what the impact is, rather than just focusing on individual pieces of false information. It is deploying social listening and investing resources in developing high-quality, easily accessible health information and an intervention toolkit, countering disinformation, monitoring impact, and promoting greater digital literacy with the aim of reducing public susceptibility to misinformation. It is also working with UN Global Pulse to deploy speech-to-text technology in order to analyse the feedback of offline communities, and with UNESCO to help community radio stations promote reliable health information across the world. WHO has also launched the Africa Infodemic Response Alliance (AIRA) to help coordinate actions against COVID-19 disinformation in Africa.
In addition, WHO is working with UNICEF and the International Federation of Red Cross and Red Crescent Societies (IFRC) on ground-level community engagement in regions with weak digital media penetration. Vulnerable groups such as refugees and internally displaced persons can be exposed to digital disinformation and may already be in extremely compromised positions because of increased prices for food or personal hygiene items, or because of difficulty in physical distancing.
International cooperation and coordination are central to WHO’s approach, and it is uniquely placed to bring different stakeholders together, learn from best practices and take an iterative approach to policymaking. The stakes could not be higher as the effectiveness of multilateralism in supporting cooperation, coordination and synergies is put to the test by the convergence of coronavirus, its devastating economic implications and the rise of authoritarianism and protectionism.
The EU integrates COVID-19 into its long-term fight against disinformation
The EU is taking a multi-pronged approach to the issue of the infodemic, and the European Regulators Group for Audiovisual Media Services (ERGA) is tasked with assessing the effectiveness of platforms’ response to COVID-19 disinformation. The signing of the EU’s Code of Practice by a group of tech companies in October 2018 was a useful first step in providing private actors with the opportunity to contain disinformation, but it has subsequently been criticized by EU member states, ERGA and the Commission itself, for its voluntary nature and the lack of sanctions, redress mechanisms and independent compliance verification. Still, an enhanced group of signatories has begun submitting monthly reports on their COVID-19-specific policy changes.
The office of the High Representative of the Union for Foreign Affairs and Security Policy also conceded in June 2020 that EU public policy could benefit from a faster and more coordinated response, calling for the security dimension of disinformation in general to be reflected in the forthcoming Security Union Strategy (for the period 2020–25). COVID-19 disinformation has also affected the Commission’s thinking in terms of the European Democracy Action Plan and the proposed Digital Services Act (DSA).
National-level responses
At national level, some countries have addressed COVID-19 disinformation through dedicated crisis units and enhanced digital health communication; others, such as Portugal, have taken steps to boost the communication capacities of health ministries. Other measures national governments have put in place include public information and digital literacy campaigns, as well as dedicated instant messaging channels.
Next steps
The infection patterns of coronavirus are in some ways illustrative of why the study of propagation patterns and dynamics – rather than of individual cases of false information – is more important in tackling disinformation. Fact-checking is certainly a crucial part of the puzzle, but on its own it lacks the system-level view needed to achieve network effects.
Strategic thinking in policymaking is key to tackling disinformation, so obstacles and constraints to policy implementation should be considered in advance, and methods to circumvent or counteract them should be planned accordingly. Establishing benchmarks for successful policy monitoring, implementation and impact assessment is also paramount. Multidisciplinary cooperation is necessary so that important trade-offs – such as that between freedom of expression and public health – can be negotiated meaningfully. The issue of an appropriate division of labour in content creation, moderation and dissemination that sustains a democratic public sphere also needs to be addressed.
Conflict of interest considerations should limit the role of tech companies in dictating the solutions to the problems they themselves helped create, in the same way that the tobacco industry should not be asked to draft health regulations, or oil companies to devise environmental standards. Even though ensuring the buy-in of tech companies is necessary, strategic thinking about the scope and form of their engagement is needed so that other actors, such as civil society, are not sidelined in framing, analysing and addressing the issue at hand. Political leadership, by parliamentarians, international organizations and governments, is absolutely key to tackling COVID-19 disinformation and the infodemic.
A whole-of-society approach should remain central to the efforts, as COVID-19 disinformation flows not only in a top-down direction (for example, from politicians or celebrities), but also from the bottom up. Politicians and leading figures must take responsibility for the messages they disseminate. The question of whether the amplification of COVID-19 disinformation by state officials and political leaders effectively ‘violates the right to health’ merits urgent attention.
UN agencies and regional organizations such as the EU, committed to a rules-based order and democratic values, should enhance their collaboration to set a clear path forward. There are already efforts under way to foster closer collaboration between UN agencies including WHO, UNICEF and the International Telecommunication Union (ITU), and their work on how health disinformation spreads and how individuals interact with it should inform the work of the EU on the European Democracy Action Plan and the forthcoming DSA. In December 2020, anticipating the incoming US administration under the presidency of Joe Biden, the high representative for EU external action, Josep Borrell, notably highlighted the need for transatlantic rapprochement and cooperation on issues of COVID-19 and technology, among other areas of joint concern. National authorities should also lead domestic counter-disinformation efforts by drawing on the expertise and ongoing cooperation of WHO.
Four critical considerations
The four steps of emergency management remain crucial in tackling COVID-19 disinformation:
- Mitigation: Legislation, regulation, re-establishing competition in the digital media environment, the introduction of digital and media literacy programmes, lobbying reform, and enhancing technical and tech policy expertise within ministries.
- Preparedness: Monitoring new media market entrants and changing dynamics, establishing protocols of cooperation between tech, media actors and governments, investing in strategic foresight, and alliance building.
- Response: Ensuring organizational structures enable effective communication within government and between authorities and the public, monitoring and evaluating policy implementation, strategic communication, and infodemic management.
- Recovery: Impact assessments of measures taken, consideration of sanctions for culpable agents, and notification systems for targets of disinformation.