The revelations of two reports released by the US Senate Select Committee on Intelligence (SSCI) this week lay bare the extent to which Russia’s Internet Research Agency (IRA) manipulated social media during the 2016 US presidential election. They should serve as a wake-up call to Silicon Valley and a clear indication to policymakers that self-regulation for online platforms is no longer viable.
Trust in companies such as Facebook is already wearing thin, with former partners alleging that the social networking giant is more preoccupied with PR damage control than with solving the problems posed by influence operations.
But the SSCI reports clarify how part of the business model of digital platforms – online audience segmentation – leaves citizens exposed to stealth targeting by foreign actors.
They show how digital marketing strategies that rely on the mass collection of data and private details – such as an individual’s personal habits or geolocation records – can be co-opted by malign actors to manipulate narratives, frame politically inconvenient individuals as threats and mould the information ecosystem to their needs.
Russia didn’t invent these strategies; it took advantage of an already expansive and unsupervised network. Targeting social media users with personalized propaganda is the outcome of allowing digital platforms to develop business models that prioritize people’s desires as consumers over their democratic rights as citizens.
It’s these very business models, and the data collection practices that make them possible, that policymakers need to subject to renewed scrutiny.
This requires a broad-based rethink of how to regulate these practices, and of how the implications of online influence operations – as well as their remedies – touch upon and create tensions between human rights, political security, freedom of expression, privacy, social cohesion and technological development.
In addition, policymakers need to consider how influence campaigns, in the process of evading oversight, mutate and adapt to different platforms, each an online space with its own distinct dynamics.
For example, according to the first SSCI-commissioned report (led by New Knowledge, with contributions from Jonathan Albright and Canfield Research), Instagram gradually became central to the IRA’s influence operations – a role largely downplayed by its parent company, Facebook. As we head towards the era of deepfakes (video, audio and images created by artificial intelligence) and recontextualized images or memes, Instagram’s susceptibility to disinformation campaigns should not be taken lightly.
Just as important, the dominant players in the digital space need to provide researchers – after evaluating and resolving privacy issues with regulators – with the data necessary for a thorough and systematic analysis, not just of the extent of influence operations but also of their actual impact.
It is alarming that, even after these revelations, Silicon Valley firms remain so selective about the data they provide. Google’s decision to give the researchers from the Computational Propaganda Research Project and Graphika, who compiled the second report, a data set in PDF format rather than an easily machine-readable file would be amusing if it weren’t so serious. Both Facebook and Google have launched initiatives with journalists and fact-checkers, with varying degrees of success, but these efforts pale in comparison to the damage done by the platforms’ own business models.
Steps such as Facebook’s Ad Archive, Google’s Election Ads Transparency Report, or the Honest Ads Act are not without merit. But the SSCI-commissioned reports also highlight that political ad regulation is not enough, as the online engagement figures (such as shares, likes and comments) from the IRA’s organic posts (messages created by an IRA-managed page or user account rather than an ad) were much higher than those of advertisements.
Influence campaigns also operate on a long-term horizon; in the era of the ‘permanent campaign’, regulation needs to step outside the frame of electoral cycles.
Failure to marshal scientific, academic, civic and policymaking resources to address these problems will continue to leave countries exposed. And malign actors will not hesitate to exploit these vulnerabilities, as the reports make clear. They should signal the end of complacency.