Disinformation is a high-stakes game threatening freedom

In the second of a series of interviews with the Queen Elizabeth II Academy Faculty, Jessica Cecil examines solutions to disinformation eroding trust in democratic leadership.

Interview published 12 July 2022

You have had a long and distinguished career with the BBC. Your most recent leadership role at the BBC was setting up and directing its Trusted News Initiative. We’ve seen the pernicious effects of disinformation. Getting credible information into the public sphere is essential for citizens and also for elected officials if they are to make good decisions. As a leader in this field, what is your perspective on the debate about disinformation? What is at stake?

Quite simply, if citizens are making decisions based on disinformation – that is false information deliberately spread to mislead them – there can be harmful real-world consequences. Democracy can be undermined, and disinformation can cost lives. Across the world, people have been making decisions about their health based on false information – decisions on whether to have a vaccine, decisions to seek out fake cures. We do not yet know how many hundreds of thousands of deaths have been caused in this pandemic because of disinformation.

Hate speech and false claims against the Rohingya people in Myanmar were spread on Facebook in 2017 and were the backdrop to the communal violence in which thousands of Rohingya people were killed. And we have an increasing issue around climate change disinformation. Put simply, it is impossible to seek agreement in societies in which there are diverging views if you cannot even agree on the facts.

But many politicians have concluded at times that dabbling in disinformation is in their short-term political interests, including President Macron of France, who described the AstraZeneca jab as ‘quasi-ineffective’ for older people just as the European Union (EU) approved it for all adults.

It was a remark made when the French and British leaders were at odds over Brexit, but the result was real-world harm. Two months later the AstraZeneca vaccine was approved for all adults in France, yet it was shunned by many people there – slowing down France’s vaccine rollout. Dabbling with disinformation may seem to have short-term benefits, but in the long term it is polluting the water table for fair information, and that is dangerous for lives and for society.

Does disinformation present a different kind of problem when it is leaders who are trumpeting fake news or fake facts, especially leaders of democratic states?

The source of the information matters. In Europe and North America there is already widespread mistrust of elites so, if people believe a claim from a democratic leader and then find it to be false, that cynicism is likely to be reinforced. That means people are less likely to trust those leaders next time on important issues such as following public health advice. And, as happened in France, disinformation meant accurate science was ignored and lives were almost certainly lost.

And disinformation is amplified through social media. Never before in history has an individual been able to make their views known across the world instantly. And there is virality – disinformation is powerful and spreads when it hits home on an emotional level. Fighting emotionally satisfying disinformation with rationality is extremely difficult. Disinformation gets amplified by people who have an interest in making sure that it is spread further.

What is your view on content moderation and why is the world still so far away from a solution?

When you talk about content moderation, you come up against the live rail, which is freedom of speech. Even in the US there are accepted limitations on some speech, for instance through libel laws. However, in the new world of online information, we have not yet figured out what the rules of content moderation should look like. In the US you also have the so-called Section 230, which absolves online platforms of direct responsibility for the content they carry – a responsibility which newspapers and traditional publishers do have. But it is increasingly clear there are serious real-world harms caused by the most dangerous disinformation, and that demands some form of content moderation.

To be clear, this is about addressing the disinformation that has the most real-world harm – it is not about policing the internet. But the question is, how? By its very nature, regulation affects specific countries whereas disinformation is a transnational issue. And often governments have ‘skin in the game’ and, under the cover of fighting disinformation, many countries pass restrictive legislation which undermines freedom of speech.

Content moderation is important, but we should not expect regulation to do all the heavy lifting. I believe there is more of a role for self-moderation through alliances alongside regulation.

One issue you touch on is self-regulation. You have just outlined one of the challenges governments come up against when they try to regulate the information space. They immediately become susceptible to the critique that their efforts to control the information space are politically motivated. Does this mean that platforms should design their own regulations?

No, you can’t leave the platforms alone to define content moderation. I am a believer in self-regulation through a much bigger conversation between the platforms, civil society, and journalistic organizations. There are two proofs of concept. One is the Trusted News Initiative – which I led at the BBC and which brings together a handful of news organizations and platforms. They work together to produce solutions to some of these issues, such as defining what the most dangerous forms of disinformation are and putting in place a fast-alert system to tackle it together when it occurs.

Another is the Meta Oversight Board, which attempts to work out where free speech ends and where real-world harm begins, and looks at that in different countries.

There seems to be a coordination issue. Is it a problem if you have very different standards across platforms when it comes to moderation?

Yes, because I think it is necessary to have clear and transparent standards, shared as much as possible across all platforms. It is much easier to argue from first principles if you share information and have a transparent process for saying ‘this is the way we have decided things’. And I think you do that by bringing in many organizations in civil society and journalism across the world alongside the platforms. These organizations would share commitments: to being independent of governments, to fact-checking, and to fighting disinformation.

This cooperation is one leg of a structured approach to disinformation. You are only ever dealing with what is most dangerous and has a direct relationship with real-life harm. The counsel of despair that says ‘because there is such a problem and so much mistrust, you cannot do anything’ means we cannot progress beyond where we are now, where there is nothing like enough collective action.

Another way to tackle this is through a bottom-up approach, for example to educate citizens. What is your view on this?

That is very important. If you look at disinformation, there are four main bricks in this particular wall. We have talked about regulation and we have talked about self-regulation; a third one, which I think is very interesting, is a technological solution. Microsoft is currently working with The New York Times, the BBC, and others to put together a technology which can identify the provenance of any piece of content, enabling us to identify which content is genuine and which has been manipulated or altered and is in reality a ‘deep fake’, even if it looks like it comes from a trusted source.
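To make the provenance idea concrete: one simplified way such a system can work is for a publisher to digitally sign each piece of content, so that anyone holding the publisher’s public key can check whether the content has been altered since publication. The Python sketch below is a hypothetical illustration of that general signing-and-verification pattern, using the widely available cryptography library; it is not the actual standard used by the Microsoft-led initiative, and the names (publisher_key, is_genuine) are invented for illustration.

```python
# Minimal sketch of content provenance via digital signatures.
# NOT the real Project Origin / provenance standard alluded to in the
# interview; it only illustrates the underlying idea: a publisher signs
# its content, and anyone with the publisher's public key can detect
# later tampering.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a keypair once; the public key is distributed openly.
publisher_key = ed25519.Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

article = b"Original article text as published."
signature = publisher_key.sign(article)  # published alongside the content

# Consumer side: verify a piece of content against the publisher's signature.
def is_genuine(content: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_genuine(article, signature))                       # True: untouched
print(is_genuine(b"Manipulated article text.", signature))  # False: altered
```

Real provenance schemes embed signed metadata (origin, edit history) in the content itself and chain signatures across edits, but the trust model is the same: tampering invalidates the signature.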

But I think the fourth aspect, which is absolutely crucial, is media education – so that individuals have a critical understanding of what is and what is not likely to be true, and are taught this at school. I am not arguing that any of these is a magic bullet, but you must look at all four of them to say ‘these are the building blocks for trying to create a situation where we have a more structured approach to the most dangerous disinformation’.

The war in Ukraine has changed many things: it has brought the West closer together and has been a reminder that democracy and freedom are at risk, not only in Ukraine. But as you think about the issue of disinformation and content moderation, does the war in Ukraine change anything?

Absolutely. Firstly, as if we needed reminding, the Russians are absolutely set on seeing the information war as one of the ways they can project the power they have – an extension of their military power. In Ukraine the Russians have attempted to claim that the bombing of the maternity hospital in Mariupol was staged by ‘crisis actors’, or that Ukrainian troops with blood on them were using stage paint.

This approach by the Russians is something we have seen before. But we have seen the beginning of questioning by people within Russia of some state narratives. Look at the initial claim, after the sinking of the Moskva, that no sailors died: we’ve seen relatives – mothers – within Russia saying ‘we’re not buying this’.

Secondly, you have a very interesting situation in the West – particularly in the US and the UK – where governments, knowing how the Russians have used information as an extension of their ability to wage war, have begun so-called ‘pre-bunking’: you know the Russians are going to try to define a particular narrative, so you shape the narrative first.

There is this extraordinary situation where every day the UK Ministry of Defence releases intelligence on Twitter outlining what it thinks is going on. Even a matter of weeks ago, this would have been classified. This is a genuinely radical attempt to get in front of disinformation.

It is also important to see the way civil society in Ukraine has stepped up to help the population understand how to distinguish disinformation from fair information. Outside Ukraine, you also have a fantastic coalition of journalistic organizations, which have traditionally been competitive, working together to try to establish facts in the face of Russian disinformation. Some positive things are happening, as well as some worrying ones.

As you look ahead, what are the two or three things that will matter most in shaping the success of the many initiatives in this space? Or do you imagine an inevitability to this – that we are so far down the path of disinformation and fake news that it is going to be hard to reverse?

I am an optimist and believe good things could come out of the crucible of disinformation that is the war in Ukraine, and out of the effects of disinformation on the US democratic process that we have seen culminating in the 6 January hearings. I think we will think much more strategically about what more we can do collectively, because we are aware of how dangerous unchecked disinformation is.

I would like to see the tech platforms in particular come together with civil society and journalistic organizations, because they cannot solve this on their own. I hope we will also see democratic politicians understand the dangers of flirting with disinformation and pull back from it.

I am hoping those are the two big things that will happen over the next 12-24 months, because the stakes are so high, and institutions and individuals really need to step up and work out what their role is in solving this.