2. Disinformation in Context
Definition and scope
After gaining notoriety on both sides of the Atlantic, the term ‘fake news’ has gradually been succeeded by the now prevailing ‘disinformation’, but a degree of confusion around related terminology persists, and ambiguous definitions make it harder to identify possible remedies. ‘Fake news’ insinuates that news producers and journalists are accountable for the pollution of the information space, and therefore implicitly responsible for tackling the problem. While ‘fake news’ scapegoats journalists, ‘information warfare’ alludes to offensive strategies but is often less nuanced and specific. The term ‘foreign influence’, although at times accurate, risks excluding domestic propaganda purveyors such as political actors or foreign proxies operating at home. The scope of foreign influence is also broader than disinformation: according to the US Department of Justice (DoJ), it may include hacking, malicious cyber activity, identity theft and fraud. Although it is often used interchangeably with these other terms, ‘disinformation’ enables a more nuanced and holistic analysis of what has become a global problem, by focusing on communication vectors and processes.
According to the European Commission’s Action Plan against Disinformation, disinformation is defined as ‘verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm’. Harm can entail threats to democratic political and policymaking processes by undermining ‘the trust of citizens in democracy and democratic institutions’. The inclusion of intentionality in the description also differentiates the term from misinformation.
Disinformation can be overt, displaying factually false content, but it can also take subtler forms, such as the cherry-picking of statistics to mislead audiences and prime them in certain ways, or re-contextualized or even tampered-with visual material. Narratives can be adjusted to take advantage of the existing information space by tapping into divisive issues.
Disinformation’s shape-shifting nature and agility make it a useful vehicle for hybrid threats, or what the European Centre of Excellence for Countering Hybrid Threats (Hybrid CoE) defines as a ‘coordinated and synchronised action, that deliberately targets democratic states’ and institutions’ systemic vulnerabilities, through a wide range of means [political, economic, military, civil, and information]’. Coordinated and amplified disinformation can crowd out rational debate and sow confusion and discord, numbing decision-making capacities. Indeed, hybrid threats aim to exploit the target’s vulnerabilities and generate ambiguity to ‘hinder decision-making processes’.
Big data and its Faustian deal
Technological developments such as the ‘datafication’ of different aspects of life, the rise of smart homes and smart cities, the Internet of Things (IoT), accelerating artificial intelligence (AI) development, and internet and mobile phone penetration have vastly exacerbated the combined ripple effects of disinformation’s complexity and scale. Prevailing data governance ambiguities, a tech sector far removed from public scrutiny and a utopian vision of how humanity and the market would interact with the internet – encapsulated in John Perry Barlow’s famous Declaration of the Independence of Cyberspace – created cracks in the system and enabled privacy-encroaching surveillance systems to be developed and refined. As Zuboff has highlighted, there is a need to attend to the anti-democratic implications of allowing the concentration of privacy rights ‘among private and public surveillance actors’, at the very moment those same rights are summarily and habitually removed from citizens resigned to the ‘Faustian deal’ of exchanging the right of privacy for a simulacrum of an effective digital life.
Governments need to act to reverse this trend, which will only exacerbate the problem of disinformation. That is why the answer is not more surveillance of the online space or more debunking initiatives, but the re-assignment of gatekeeper roles to responsible actors that have been, or can be, regulated sufficiently to fulfil them.
Suspicions of Russia’s influence operations in relation to the Syrian war, the downing of flight MH17, the 2016 US presidential election, the US midterm elections and the Novichok attacks in the UK have largely been confirmed, and many of the associated false narratives have since been debunked. Nevertheless, the country remains the main source of disinformation in Europe. Other state actors, such as Iran, China and North Korea, have also employed disinformation, as both the US and the European Parliament have established.
Meanwhile, state-level domestic propaganda has also grown in recent years. Alarmingly, research indicates that over 28 state actors around the world have manipulated social media to target domestic as well as foreign audiences. On both sides of the Atlantic, domestic actors such as politicians, commentators and the far right have also proved to be purveyors of disinformation, sometimes outperforming foreign actors in terms of reach. A case in point is research indicating that just two misleading claims by UK politicians during the EU referendum campaign were cited in 10.2 times more tweets than Brexit-related posts by Russian trolls.
The objectives and vectors of disinformation vary just as much as the differing agents of influence operations. Armed and civilian non-state actors have both deployed disinformation to serve their ideological or financial goals, with Islamic State of Iraq and Syria (ISIS) and a community of young Macedonians in Veles, respectively, being well-documented examples. These two instances showcase how multifaceted the problem of disinformation is, in terms of different objectives pursued and the dissemination vectors used. While ISIS broadcast its radicalization messages on YouTube, the Macedonian actors took advantage of Google’s AdSense interface. The latter is part of a worldwide ad tech infrastructure that has only recently come under scrutiny as it uses online tracking, data-driven targeting and real-time bidding via ad exchanges to reward attention-grabbing clickbait, which has supported the monetization of ‘fake news’ content. Despite actions taken thus far, disinformation continues to be a profitable business.
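The ad tech dynamic sketched above can be made concrete with a toy model. The sketch below is illustrative only: it assumes a simplified second-price auction of the kind commonly used in real-time bidding, with hypothetical advertiser names, bid values and click-through rates; it is not a description of AdSense internals. The point it captures is that because advertisers scale their bids by predicted engagement, pages engineered for attention-grabbing content clear at higher prices, which is what makes ‘fake news’ monetizable.

```python
# Toy sketch of a real-time-bidding (RTB) ad auction. All names and
# numbers are hypothetical; the mechanism shown (a second-price auction
# with engagement-scaled bids) is a simplification for illustration.

def bid(base_cpm, predicted_ctr):
    """Advertisers bid more for impressions likelier to be clicked."""
    return base_cpm * predicted_ctr

def run_auction(bids):
    """Second-price auction: highest bidder wins, pays the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

# Same advertiser budget, but the clickbait page's higher predicted
# engagement inflates the bid -- and hence the publisher's revenue.
bids = {
    "advertiser_a": bid(2.0, 0.05),   # impression on a sober news page
    "advertiser_b": bid(2.0, 0.15),   # impression on a clickbait page
}
winner, price = run_auction(bids)
```

Under these assumed numbers the engagement-optimized impression wins the auction, which is the structural incentive the paragraph above describes: attention, not accuracy, is what the infrastructure prices.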
The issue has become more complex due to the divergence in the motivations of individuals who receive, share and amplify disinformation. Internet and social media users may willfully or unwittingly share false news in an attempt to signal their identity or values, rather than influence their peers per se.
Disinformation has manifested itself as first and foremost a systemic issue, not solely an agent problem. Agents exploit in-built vulnerabilities of the current digital ecosystem and the regulatory gaps in political environments that are already dislocated or prone to influence. Context is paramount in any response: as Benkler et al. highlighted in their study of US media propaganda, ‘each country’s institutions, media ecosystems, and political culture will interact to influence the relative significance of the internet’s democratizing affordances relative to its authoritarian and nihilistic affordances.’ Any attempt to solve or contain the problem should be grounded in a common set of principles among the cooperating actors, and in a deep awareness of, and respect for, each system’s distinctive circumstances.