5. EU and US Cooperation: Opportunities and Challenges
Rising political polarization, allegations of subverted democratic processes and an increasingly aggressive far right that uses online platforms – both legacy and digital – to amplify its message have demonstrated the impact of technology and social media companies’ structures. Policymakers on both sides of the Atlantic are still grappling with their technical complexity, network dynamics, social nuances and political implications. This lack of deep understanding of technology’s affordances and the market dynamics of the new information space, combined with the absence of a robust international framework to address this global issue, renders democratic states vulnerable to the insidious influence of disinformation campaigns that threaten to upend norms, the rule of law, institutions and social trust.
Extrapolating from legal scholar Lawrence Lessig’s assertion about the internet’s architecture, digital intermediaries could fill the role of ‘a kind of sovereign governing the community that lives in that space’. Leaving such an online community would then entail high exit costs, owing to the social capital users develop within it. Digital platforms are political actors in their own right, moulding ‘the global infrastructure of free expression’.
Democratic governments – not corporate tech actors – have a positive duty to protect the rights of the citizens who elect them. To overcome the disinformation problem and its manifold externalities, it is necessary to interrogate meaningfully the underlying infrastructure and the business models that enable the rise of disinformation. The current information and power asymmetry has to be addressed decisively. Despite drip-feeding data sets to researchers and funding disinformation projects, tech companies are still criticized for being conveniently selective in what they provide.
Despite the merits of self-regulation as an agile, short-term remedy – one the EU’s CoP embraced – pre-emptive self-regulatory pledges should not be seen as an alternative to statutory regulation. In the current information market, digital intermediaries lack the incentives to self-regulate effectively, as doing so would place them at an economic disadvantage. If the tide is turning, it is partly due to a series of scandals involving infringements of users’ privacy or vulnerabilities to manipulation. The scale of digital intermediaries’ reach and the opacity of their operations merit immediate public scrutiny.
The delicate balance between regulating content and distribution demands the active engagement of a broad range of stakeholders, including journalists, civil society, academia, policymakers, the legal profession and the technology sector. Simply privatizing content moderation without meaningful transparency and redress mechanisms can lead to censorship. AI-powered moderation tools touted as a solution to disinformation should also be open to external audit, as they have proved either unreliable or subject to bias.
The EU and the US diverge in terms of constitutional and human rights priorities – e.g. freedom of expression vis-à-vis privacy, or surveillance and security – and the trade-offs they have settled on feed into their non-aligned approaches to disinformation. Aggravating the complexity of coordinating regulatory efforts is the fact that the debate in the US revolves around freedom of expression, with efforts to constrain the power of Big Tech framed as anti-free market, whereas in the EU freedom of expression is a qualified right that has to be balanced against other rights such as privacy. The EU treats regulation as mainly a systemic issue and seeks to address disinformation across different domains, from tech regulation to privacy frameworks and information markets. The US institutional approach, by contrast, treats disinformation as an agent problem, allocating responsibility to federal agencies to focus on the manipulation campaigns of state actors such as Russia, Iran or China. Nevertheless, investigations launched this year by US congressional committees and the FTC indicate that the tide may be changing and that a window of opportunity for alignment may open.
This paper suggests that EU and US policymakers should not become fixated on specific agents, as such an approach would be counterproductive to building resilience. Adversaries are learning from each other, and so should long-term allies like the EU and the US. US agencies tend to work with EU counterparts on a bilateral basis, so there is clearly room for improvement by moving to more multi-stakeholder forums. The GEC, as the main coordinator of US efforts, could bring other US agencies into the fold, and the FTC would need to be involved too. Coordination is paramount; reactively addressing the problem is not enough.
EU and US cooperation in tackling disinformation should be grounded in common principles and an awareness of shared values. Beyond the disruptive and divisive impact of disinformation campaigns on political debate, influence operations impinge on the autonomy of citizens themselves. Article 19 of the Universal Declaration of Human Rights (UDHR), which seeks to protect citizens’ right to hold opinions without interference, can serve as an ethical compass in the debate. Under the UN Guiding Principles on Business and Human Rights, business enterprises – including digital intermediaries – have a responsibility to respect human rights throughout their operations.
It is only through effective US and EU collaboration on regulatory reform and disinformation countermeasures that meaningful baseline norms can be set, a regulatory patchwork and duplication of effort avoided, and jurisdictional conflicts overcome. The US and the EU should also lead the effort towards convergence on data protection. Security and privacy are intertwined.
While data-driven disinformation campaigns have ignited debates about data governance in the EU, the US is slowly but surely joining in. Data is crucial in campaigning. According to Brendan Fischer, director of federal reform at the Campaign Legal Center in the US, ‘you have organizations that are supposed to be operating independently of candidates, sharing data with candidates which in many ways is more valuable than giving them money’.
The two sides of the Atlantic do not share perfectly aligned positions on notions of privacy, but the EU’s GDPR displays normative power, with legal experts and policymakers in the US considering its merits. Nevertheless, striking a balance between meaningful and comprehensive data subject rights on the one hand, and a public and private surveillance infrastructure seen as vital to national security on the other, will be extremely challenging.
US scholars have proposed another legal framework that could serve as common ground for holding digital intermediaries to account: their ‘duty of care’ with regard to their customers and a ‘duty to deal’ with regard to sustaining market competition. The unwarranted data mining operations that social media companies have engaged in, which eventually supercharged tailored disinformation campaigns, and the lock-in effects of their architecture are symptoms of market failure – a side effect of oligopolistic competition between leading technology companies. Also, as noted earlier, the concept of duty of care, if not appropriately redefined for the digital space, is bound to prove ineffective.
In the search for common ground, the US and the EU should use the existing human rights framework enshrined in international treaties as a basis for their efforts to tackle disinformation. Closer examination is required to establish which rights are undermined by disinformation, and how they are affected. The current ambiguity over the effectiveness of psychographically tailored, microtargeted disinformation is not conducive to creating appropriate regulatory benchmarks.
Policymakers need to coordinate their actions with the expansive research community already working on disinformation, in order to signpost its current and prospective vectors. Companies will need to provide all the data deemed necessary for a robust evaluation of the scale and various manifestations of disinformation. While safeguarding the privacy of their users, companies need to become more accommodating of the resource needs of researchers and oversight bodies. Any other industry with such a record of irregularities would already have been closely scrutinized.
As for the actions of digital intermediaries, despite their cooperative efforts with independent journalists, legacy media and fact-checkers, one should not lose sight of the fact that, at a time of dwindling digital ad revenue for news publishers, Google and Facebook in 2017 accounted for more than 80 per cent of global digital ad spend, excluding China. After years of struggling to adjust to the digital transition, legacy media are still not out of the woods, and with Big Tech dominating such a large share of ad revenue, many outlets have been forced to seek alternatives such as renewed subscription models and donations.