ChatGPT has opened a new front in the fake news wars

Search engines with the latest ‘generative AI’ obscure the sources for their responses. The result is a breeding ground for disinformation, writes Jessica Cecil.

The World Today – Updated 22 November 2023

On leaving the BBC in 2021, I didn’t think we had won the war against ‘fake news’, but at least I believed we were capable of winning key battles. At last, fighting disinformation was being looked at systematically by all sides – the tech platforms, the news providers and governments.

Each was developing its own weapons that could be combined to form an armoury to combat the lies and distortions that have been doing so much damage. Critically, we were all starting to work together, pooling resources in a coalition against the common enemy.

One place that brought this coalition together was the Trusted News Initiative (TNI), which I ran for two years at the BBC. It allied the world’s most trusted news providers – such as the BBC, Washington Post, Agence France-Presse, Canadian Broadcasting Corporation and Reuters – with the main tech platforms, Facebook, Google, Microsoft and Twitter.

Key to this informal alliance was a common approach to classifying and then identifying the most harmful forms of disinformation. These fell into two categories.

The first is when fake news directly and immediately threatens democracy and the electoral process. The online disinformation around the January 6, 2021, protests at the US Capitol fell into this category. So did the fake poll results that appeared on Indian social media sites during the 2019 Indian election, falsely purporting to come from the BBC.

The second, and most harmful, category is when fake news presents an immediate danger to life. As the Covid pandemic infected millions around the world, claims spread online that drinking bleach was a cure. Likewise, as conspiracy theories linked Covid to the roll-out of the 5G network, people were urged to attack the telecoms infrastructure vital to emergency services. As a result of the steps we took, misleading posts were identified, members of the TNI alerted, and the posts exposed, corrected and removed before they could do damage.

A game changer

But just when we were beginning to feel mildly optimistic, a massive new intervention arrived. In November 2022, OpenAI launched ChatGPT and, soon after, Microsoft announced plans to link its search engine Bing to this so-called generative AI model. This is a total game changer. For all its huge opportunities – and there are many – generative artificial intelligence threatens to overwhelm many of our existing defences against fake news.

Generative AI enables computers to use large language models, known as LLMs, to mine the data they have to hand and produce new content from it. The consequences will be extensive. Previously, if you asked a search engine a question, it would provide a variety of links, some of which might be full of disinformation while others might be accurate. You had a choice of what to believe.

But with ChatGPT, generative AI tries to provide not a choice but a definitive, fully formed answer. That content can be images, sound, video or text. The user doesn't evaluate the links; the computer does. It then provides a short answer to the question, which includes key facts that the computer says support its answer. The sources of that information are presented at best – on current evidence – as footnotes.
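
To make the mechanism concrete, the sketch below shows how an answer pipeline of this kind can collapse many retrieved sources into a single synthesized response, with the sources surviving only as footnotes. It is an illustration only, not any vendor's actual system: the function names, URLs and snippet data are assumptions made for the example.

```python
# Minimal sketch of an LLM-backed answer pipeline (illustrative only;
# not OpenAI's or Microsoft's actual system). All names and data here
# are assumptions made for the example.

from dataclasses import dataclass

@dataclass
class Document:
    url: str   # where the snippet came from
    text: str  # snippet returned by the search index

def retrieve(query: str) -> list[Document]:
    """Stand-in for a search index: returns ranked snippets.
    Trustworthy and untrustworthy sources arrive in the same
    undifferentiated list."""
    return [
        Document("https://news.example.org/report", "Verified reporting..."),
        Document("https://forum.example.net/post", "Unverified claim..."),
    ]

def generate_answer(query: str, docs: list[Document]) -> str:
    """Stand-in for the LLM call: synthesizes one fluent answer.
    The model weighs the snippets internally; the user sees only
    the finished text, never the weighing."""
    answer = f"A single, definitive-sounding answer to {query!r}."
    footnotes = "\n".join(f"[{i}] {d.url}" for i, d in enumerate(docs, 1))
    # The sources survive only as footnotes appended after the fact.
    return f"{answer}\n\n{footnotes}"

query = "Are Covid vaccines safe?"
print(generate_answer(query, retrieve(query)))
```

The point of the sketch is structural: whether the snippet marked 'unverified' shapes the final answer is decided inside the generation step, invisibly to the reader – which is exactly the loss of choice described above.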

The danger of ‘hallucinations’

At first glance, one has a self-contained answer that appears entirely trustworthy. But in truth the reader has no idea if the source is the BBC, QAnon or a Russian bot, and no alternative views are provided. The danger is that the answer might be what is termed a 'hallucination': a response that is simply wrong.

Nicholas Diakopoulos, an associate professor in communications studies and computer science at Northwestern University in Illinois, found factual inaccuracies in seven of the 15 news-related queries he put to ChatGPT. This raises the question: if generative AI learns from the data it has to hand, could disinformation claiming Covid vaccines are more dangerous than the virus be offered as an 'answer'? If so, those fighting disinformation now face a formidable new opponent.

Defenders of this new technology argue – rightly – that the veracity of these answers will improve over time as the generative AI learns more. The problem of 'hallucination' should therefore diminish, but can tech platforms ever be sure the error rate will reach zero? And can they responsibly argue that there is any acceptable error rate for the most immediately harmful disinformation?

For news publishers, there is a different challenge. ChatGPT's answers might draw on news sources but give scant credit to the news organizations themselves. This is not only a financial challenge but also a challenge to the news ecosystem. How will users know to trust certain news sources if they are not made aware of them?

Regulation is slow

Regulatory interventions can help but they are no panacea. In fact, regulation can be used to suppress the truth. As we have seen in Russia, the label of disinformation is frequently used to smother free speech.

For mature democracies, the problem is different. Regulation is slow. LLMs illustrate a phenomenon of the digital age: technology advances at breakneck speed. Problems that were not even conceived of a year ago can suddenly loom large. Our regulations were not designed with LLMs in mind. And after ChatGPT, there will be something else. Regulators will always be playing catch-up.

What is needed is a flexible, alert and vigilant way forward, building on those fledgling links between the tech platforms and trusted news providers. It is for organizations themselves to assume responsibility for the information they create, share and recommend, rather than be pulled along in the currents of digital change.

I am working at the University of Oxford’s Reuters Institute for the Study of Journalism to see how we can build on the work of the Trusted News Initiative to do just this. A new coalition – call it the Trusted News Network – could come up with a framework to help identify what moment-by-moment alertness to these emerging harms would look like.

It could also determine where accountability for this information ecosystem, and for how it evolves, should sit: with governments, citizens, tech platforms and news publishers, rather than being ceded to digital determinism.

Global accountability

This new coalition needs to include many more organizations than the TNI did. It needs to include news providers and civil society organizations across the Global South as well as the Global North, alongside a wide group of tech platforms. Only then will insights about the harms of online disinformation be gathered and shared more easily.

For the tech platforms, this would offer a way of seeing problems their own systems haven't picked up, and a way of sharing insights with their peers. A similar system already works in the fields of terrorism and child exploitation.

For news organizations, the coalition would be a place to share insights and talk directly to each other and the tech companies about concerns. It would be a new weapon to fight disinformation in its latest form.

We have a choice. Democracy relies on debate around undisputed facts. Organizations upholding democracy need to bring those undisputed facts to the people who matter most – the voters.

There are increasingly powerful value systems out there, notably in China and Russia, where facts are twisted to serve the authorities. We need co-operation if we are to make the coming technology work for the values we back.