Persuasion or manipulation? Limiting campaigning online

To tackle online disinformation and manipulation effectively, regulators must clarify the dividing line between legitimate and illegitimate campaign practices.

Expert comment
Published 15 February 2021. Updated 21 April 2021.

Kate Jones

Associate Fellow, International Law Programme

Democracy is at risk, not only from disinformation but from systemic manipulation of public debate online. Evidence shows social media is being used to control narratives and to drive polarization and division on issues of politics and identity. Regulators are now turning their attention to protecting democracy from disinformation and manipulation. But how should they distinguish between legitimate and illegitimate online information practices, between persuasive and manipulative campaigning?

Unregulated, the tactics of disinformation and manipulation have spread far and wide. They are no longer the preserve merely of disaffected individuals, hostile international actors, and authoritarian regimes. Facebook’s periodic reporting on coordinated inauthentic behaviour and Twitter’s on foreign information operations reveal that militaries, governments, and political campaigners in a wide range of countries, including parts of Europe and America, have engaged in manipulative or deceptive information campaigns.

For example, in September 2019, Twitter removed 259 accounts that it found to be operated by Spain’s conservative and Christian-democratic political party Partido Popular and that it says were ‘falsely boosting’ public sentiment online. In October 2020, Facebook removed accounts with around 400,000 followers linked to Rally Forge, a US marketing firm which Facebook claims was working on behalf of right-wing organisations Turning Point USA and Inclusive Conservation Group. And in December 2020, Facebook took down a network of accounts with more than 6,000 followers, targeting audiences in Francophone Africa and focusing on France’s policies there, finding it linked with individuals associated with the French military.

Public influence on a global scale

Even more revealingly, in its 2020 Global Inventory of Organized Social Media Manipulation, the Oxford Internet Institute (OII) found that in 81 countries, government agencies and/or political parties are using ‘computational propaganda’ in social media to shape public attitudes.

These 81 countries span the world and include not only authoritarian and less democratic regimes but also developed democracies, including many EU member states. OII found that countries with the largest capacity for computational propaganda – which include the UK, US, and Australia – have permanent teams devoted to shaping the online space overseas and at home.

OII categorizes computational propaganda into four types of communication strategy: (1) the creation of disinformation or manipulated content such as doctored images and videos; (2) the use of personal data to target specific segments of the population with disinformation or other false narratives; (3) trolling, doxing or online harassment of political opponents, activists or journalists; and (4) mass-reporting of content or accounts posted or run by opponents, to game the platforms’ automated flagging, demotion, and take-down systems.

Doubtless some of the governments included within OII’s statistics argue their behaviour is legitimate and appropriate, either to disseminate information important to the public interest or to wrest control of the narrative away from hostile actors. Similarly, no doubt some political campaigners removed by the platforms for alleged engagement in ‘inauthentic behaviour’ or ‘manipulation’ would defend the legitimacy of their conduct.

The fact is that clear limits of acceptable propaganda and information influence operations online do not exist. Platforms still share little information overall about the information operations they see being conducted online. Applicable legal principles such as international human rights law have not yet crystallised into clear rules. And because information operations are rarely exposed to public view – with notable exceptions such as the Cambridge Analytica scandal – media and public scrutiny provide relatively little constraint or censure.

OII’s annual reports and the platforms’ periodic reports demonstrate a continual expansion of deceptive and manipulative practices since 2016, and increasing involvement of private commercial companies in their deployment. Given the power of political influence as a driver, this absence of clear limits may result in ever more sophisticated techniques being deployed in the search for maximal influence.

Ambiguity over reasonable limits on manipulation plays into the hands of governments which regulate ostensibly in the name of combating disinformation, but actually in the interests of maintaining their own control of the narrative and in disregard of the human right to freedom of expression. Following Singapore’s 2019 prohibition of online untruths, 17 governments ranging from Bolivia to Vietnam to Hungary passed regulations during 2020 criminalising ‘fake news’ on COVID-19 while many other governments are alleged to censor opposition arguments or criticisms of official state narratives.

Clear limits are needed. Facebook itself has been calling for societal discussion about the limits of acceptable online behaviour for some time and has issued recommendations of its own.

The European Democracy Action Plan: Aiming to protect pluralism and vigour in democracy

The European Democracy Action Plan (EDAP), which complements the European Commission’s Digital Services Act and Digital Markets Act proposals, is a welcome step. It is ground-breaking in its efforts to protect the pluralism and vigour of European democracies by tackling all forms of online manipulation, while respecting human rights.

While the EDAP tackles disinformation, it also condemns two categories of online manipulation: information influence operations, which EDAP describes as ‘coordinated efforts by either domestic or foreign actors to influence a target audience using a range of deceptive means’, and foreign interference, described as ‘coercive and deceptive efforts to disrupt the free formation and expression of individuals’ political will by a foreign state actor or its agents’. These categories include influence operations such as harnessing fake accounts or gaming algorithms, and the suppression of independent information sources through censorship or mass reporting.

But the categories are so broad they risk capturing disinformation practices not only of rogue actors, but also of governments and political campaigners both outside and within the EU. The European Commission plans to work towards refined definitions. Its discussions with member states and other stakeholders should start to determine which practices ought to be tackled as manipulative, and which ought to be tolerated as legitimate campaigning or public information practices.

The extent of the EDAP proposals on disinformation demonstrates the EU’s determination to tackle online manipulation. The EDAP calls for improved practical measures building on the Commission’s 2020 acceleration of effort in the face of COVID-19 disinformation. The Commission is considering how best to impose costs on perpetrators of disinformation, such as by disrupting financial incentives or even imposing sanctions for repeated offences.

Beyond the regulatory and risk management framework proposed by the Digital Services Act (DSA), the Commission says it will issue guidance for platforms and other stakeholders to strengthen their measures against disinformation, building on the existing EU Code of Practice on Disinformation and eventually leading to a strengthened Code with more robust monitoring requirements. These are elements of a broader package of measures in the EDAP to preserve democracy in Europe.

Until there are clear limits, manipulative practices will continue to develop and to spread, and more actors will resort to them so as not to be outgunned by opponents. It is hoped that forthcoming European discussions – involving EU member state governments, the European Parliament, civil society, academia and the online platforms – will begin to shape at least a European, and perhaps a global, consensus on the limits of information influence, publicly condemning unacceptable practices while safeguarding freedom of expression.

Most importantly, following the example of the EDAP, the preservation of democracy and human rights – rather than the promotion of political or commercial interest – should be the lodestar for those discussions.