With elections scheduled next year in some of the world’s largest democracies, including Indonesia, the United States and India, there are concerns that voters will never be certain that what they see and hear in the campaign is real.
Truth has long been a casualty of war and political campaigns, but now there is a new weapon in the political disinformation arsenal – generative AI that can in an instant clone a candidate’s voice, create a fake film, or churn out bogus narratives to undermine the opposition’s messaging. This is already happening in the US.
A political ad published by the Republican National Committee shows a dystopian scenario should President Joe Biden be re-elected: explosions in Taipei as China invades, waves of migrants causing panic in the US, and martial law imposed in San Francisco.
‘I actually like Ron DeSantis a lot. He’s just the sort of guy this country needs,’ Hillary Clinton can be seen confessing in a surprising video apparently endorsing a Republican.
Both are fakes generated by artificial intelligence.
The first high-stakes election will take place in Taiwan in January 2024 to choose the successor to President Tsai Ing-wen, who is ineligible to run for a third term. The outcome will set the tone for Taipei’s relationship with Beijing. It is likely to be a contest between the ruling Democratic Progressive Party, which sees itself as a bastion against authoritarianism, and the Kuomintang, which favours closer ties with Beijing.
Taiwan’s voters are expected to be the target of China’s formidable army of about 100,000 hackers. With 75 per cent of Taiwanese receiving news and information through social media, the online sphere is a key battleground. In this election AI can act as a force multiplier, meaning the same number of trolls can wreak more havoc than in the past.
Democratizing ‘disinfo’
Until now, people intent on mischief have been constrained by a lack of expertise or access to sophisticated tools, explains Audrey Tang, a former hacker who is now Taiwan’s minister of digital affairs. Generative AI, which creates text, images and video by copying patterns of existing media, is democratizing ‘disinfo’ by making it simple, cheap and more convincing. If the capabilities of large numbers of people intent on doing harm can be amplified through partial automation, ‘then that creates a new threat model’, Tang says.
An endless bombardment of divisive and defeatist propaganda has taught Taiwanese society to be vigilant. Media literacy and critical thinking are taught in schools, while citizens and ‘civic hackers’ can report suspicious material.
Organizations such as the Taipei-based cyber monitoring group Doublethink Lab use AI to analyse disinformation campaigns so they can be neutralized quickly. But Chihhao Yu of the Taiwan Information Environment Research Centre, a civil society organization, says AI can make dubious content harder for researchers to detect, and it can be used to create misleading, sensational material that appeals to a wide range of readers, ‘making the information environment even less functional for reasonable public discourse’.
Doublethink Lab has already seen AI in use. When 700 fake Chinese Communist Party accounts were blocked on Facebook, the hackers used AI voice generators to read out the biased texts over an AI-generated background and posted the videos on YouTube. Yu says the usual disinformation themes are that Taiwan’s government is illegitimate, that America is untrustworthy and simply after Taiwan’s chip industry, and that China is good and powerful.
The key issue is whether the material can stir people’s emotions so that they spread the message or even change their voting pattern, says Wei-Ping Li, a research fellow at Taiwan FactCheck Centre. She worries about AI’s capacity to overcome cultural and language barriers that usually make manipulation easier to spot.
Fact-checkers will find it harder to detect AI-generated messages whose language is closer to the way people actually talk in Taiwan. Beijing is also paying Taiwanese influencers to reinforce its propaganda.
‘Rumour bombing’
In the US, with a year to go until a momentous presidential election, what impact AI will have on the vote is still an open question. But there are unsettling possibilities. Carah Ong Whaley, academic programme officer for the University of Virginia Centre for Politics, fears AI could multiply the scope of past efforts to suppress voter turnout.
In 2020, far-right activists arranged thousands of robocalls to discourage residents of minority neighbourhoods in the Midwest from voting. Whaley worries that AI technology could not only reach more people but add fake audio of trusted politicians or public figures on robocalls to make misleading messages more credible.
Carl Miller, research director at the Centre for the Analysis of Social Media at the London think tank Demos, cites cases of ‘rumour bombing’ in American battleground states to deter voters. They were bombarded with bogus social media messages, such as ‘there’s a shooting at the voting booth so the roads are closed’ or ‘the lines are over six hours long so don’t come’. Or even, Miller added: ‘The Democrats have stolen all the pens.’
It is hard to measure how effective these tactics are, but adding AI would allow the creation of a virtually unlimited number of distinct messages.
Swing voters, who usually make up their minds in the final days of the campaign, are also a target. Darrell West, a senior fellow at the Brookings Institution, wrote in a report in May that AI enables far more precise audience targeting.
Campaigns can now access detailed personal data about swing voters, from what they read to which TV shows they watch, to determine the issues they care about, and then send them finely calibrated messages that might sway their choice: ‘AI will enable campaigners to go after specific voting blocs with appeals that nudge them around particular policies and partisan opinions,’ West concluded.
Miller is worried about a more insidious form of manipulation – texts that appear to come from friends and exploit a sense of kinship and belonging. If bad actors wanted to target a narrow audience of 10,000 people in the US, Miller explains, they would create an AI-generated fake American persona with a convincing online backstory and then reach out to their prey. It is possible to hold 10,000 parallel conversations in a partially AI-mediated discussion, with chatbots doing the heavy lifting.
Gradually the bad actors would begin to insert the type of information they want people to see via seemingly harmless links. Once they control the content their targets are seeing, they might craft something manually to make a sharper political point, perhaps encouraging them to join a protest march on a divisive topic such as gun violence or race. Different tactics can be tried until something works.
Health warnings
It is straight out of the behavioural science textbook, but the victims have no idea they are being covertly manipulated. And since this takes place in the quieter corners of the internet, neither do the reporters or fact-checkers who could sound the alarm. On Google and YouTube, political ads using AI will soon need to carry a prominent health warning.