Decisions made in the significant digital centres of power – Brussels, Beijing, London and Washington, DC – may be influential in shaping global approaches to platform governance.
Around the world, laissez-faire approaches to platform growth are increasingly giving way to government intervention. However, this expansion in national-level scrutiny has not been matched by international cooperation on the substance of platform regulation.
Internet pioneers’ hopes of a single, unifying, global digital foundation have in large part been realized. Never before have countries, economies, citizens and communities been so closely connected. This development has brought substantial benefits: in the spread of information, in access to economic opportunity, and in the connections forged between individuals and communities around the world. From business to activism, the internet has allowed global coordination to take place in novel and powerful ways.
But this success story should not obscure the costs. New digital jurisdictions have interacted poorly with existing national political and legal institutions, challenging sovereign nations’ capacity to protect their citizens, enforce their laws and set the fundamental norms of the societies they govern. Quite understandably, national platform regulation is now trying to address this capacity gap.
If the global internet has a future, it will be found in compromise and coordination between polities and economies able to reach a settlement that balances national sovereignty with international interdependence and interoperability. Techno-libertarian hopes of cyberspace sitting outside the realms of the ‘weary giants of flesh and steel’ are unrealistic.
Shared concepts may provide some like-minded states with a common language. Building on David Kaye’s report to the UN as Special Rapporteur for Freedom of Expression, chapter 4 explores one of those concepts, asking whether a human rights-based approach may provide a route towards alignment between nations. As a long-standing framework with significant (if incomplete) global support, human rights may provide a valuable foundation for regulatory coalition-building.
The EU’s approach
- The EU’s power in setting the agenda for regulation is undisputed. The size of Europe’s market and its considerable soft power extend the global applicability and influence of its approach to digital platform regulation.
- Europe’s collective approach and its core language of human rights make it compelling to other constituencies keen to leverage its legitimacy.
- However, a paucity of enforcement and a growing emphasis in the tech industry on technical standards-setting threaten to undermine this advantage.
From data protection standards to standardized chargers for smartphones and other devices, observers point to the existence of a ‘Brussels effect’ in the area of regulation – i.e. the spread of European norms beyond Europe, as states and businesses elsewhere react to policy decisions made in Brussels. To an extent, this soft power is simply a function of the size of the EU market. But the inclusive, consensus-based and deliberative approach underpinning European policymaking adds further weight to legislative acts internationally.
European regulation is both values-driven – reflecting the EU’s democratic values, human rights and the plurality of opinions among EU member states – and strategic. Under the presidency of Ursula von der Leyen, the European Commission has sought to strengthen Europe’s independence in many areas of policy under the banner of ‘open’ strategic autonomy. The European approach to platform regulation has accordingly been characterized as a ‘third way’ – sitting between unfettered platform power and Beijing’s regime of close ties between government and large tech companies.
There is little doubt that European regulatory action has shaped digital platforms beyond Europe’s borders. Since Germany’s NetzDG law was passed in 2017, European national and EU rules around content moderation, data protection and digital advertising have led major digital platforms to choose compliance, often amending their standard global offering to meet the requirements of their large European markets. In a 2018 hearing of the US House Committee on Energy and Commerce, Mark Zuckerberg confirmed that changes to Facebook made in response to the EU’s General Data Protection Regulation (GDPR) would be rolled out worldwide. However, the extent to which European regulations have led to genuine change is debatable, as is the credibility of the threat of enforcement.
Within Europe, an innovative mixture of regulatory packages has emerged, designed to update and rebalance the protections from intermediary liability provided by the EU’s e-Commerce Directive (2000). These initiatives include the 2018 voluntary Code of Practice on Disinformation; the 2022 Regulation on Terrorist Content Online; the wide-reaching DSA (which, along with its counterpart Digital Markets Act, begins to apply throughout 2023 and 2024); and, more recently, new proposals for addressing child sexual abuse material (CSAM) online. The DSA in particular establishes new obligations for digital platforms to be transparent with regulators and users about their content moderation practices, to have appropriate systems and policies in place to deal with illegal content once notified, and to follow strict rules regarding the use of user data for advertising purposes. For very large online platforms and very large online search engines with over 45 million users in the EU, additional obligations apply around mandatory risk assessment and mitigation and independent audits.
Member states will enforce these rules for smaller platforms through national digital services coordinators, whereas the largest platforms will be accountable to the European Commission for compliance, potentially limiting the extent of the ‘Brussels effect’. If a regional body is required to supervise the compliance of the largest (and most used) platforms, copycat legislation in individual states would not be enough to recreate the DSA’s system of accountability without extensive regional cooperation. However, the DSA undeniably sets a strong precedent for proportionate regulation of digital platforms that seeks to respect individual rights and freedoms. As such, the guidance for platforms and the audit and transparency frameworks that the DSA produces are likely to serve as templates that many others will follow.
However, some caution is necessary when forecasting the future strength of the ‘Brussels effect’. Governance models for technology are in flux, and the growing importance of international technology standards requires a different set of approaches to the more traditional rule-making that the EU is used to. Continuing negotiations on digital platform regulation – particularly transatlantic ones – are inevitable, as although US platforms depend on European markets for growth, European citizens depend on US technology provision. Insofar as values-based lawmaking around digital platforms remains the primary way in which global regulatory efforts are made, the EU will continue to lead. But translating policy priorities and laws into technical standards is its own unique exercise and the EU is not currently able to compete with China in offering a ‘full stack’ of digital technologies, complete with standards and infrastructure, to developing countries seeking to digitize at pace.
China’s approach
- China’s approach to domestic digital platform regulation is primarily driven by the political agenda of the ruling Communist Party of China (CPC), with political stability its main aim.
- Despite significant regulation in recent years mandating improved user capabilities, platform transparency, data protection and changes to business practices, state surveillance and control of online space remain undented and, as such, the Chinese approach is unsurprisingly non-compliant with global human rights frameworks.
- The ‘Beijing effect’ is an example of how greater state control of a country’s domestic internet can be implemented, but not a blueprint for others to follow. Replicating China’s approach in countries where US platforms have a strong presence is likely to prove difficult, as most countries lack the resources necessary.
Beijing oversees a significantly greater centralization of control over technology platforms inside its borders than other governments. However, reports of total subjugation are overstated, as evidenced by recent tensions between business practice and popular opinion, and by the inclusion of limited user protections in Chinese platform regulation regimes.
On the one hand, the Chinese government relies on the cooperation of platforms to enforce effective control over digital content. On the other, it keeps a close eye on the expanding influence of large platforms, rolling out a series of regulations to keep big tech’s power in check.
The Chinese platform ecosystem is dominated by a few large domestic businesses – most notably including Alibaba, Baidu, ByteDance and Tencent – and largely excludes major Western competitors. The government has close ties with the leadership of platform companies; the preservation of ‘mainstream’ values is a core tenet of Chinese platform oversight. Over the past 10 years, China’s regulatory focus has moved from filtering sensitive keywords and punishing individual content uploaders to holding operators of online platforms liable for the content they host.
As such, domestic platform companies are not only required to comply with prescriptive regulatory requirements, but also to devise their own rules that systematically ensure their platforms do not risk attracting unwanted government attention. Erring on the side of caution means that content deemed ‘politically harmful’ is strictly censored in China, while the sanctioned categories remain vaguely defined and cover a wide range of content, from insulting national heroes to subverting state power. This caution also drives the deployment of proactive content moderation technologies, using both artificial intelligence tools and human labour. Chinese platforms often require users to register their real identity and to provide extensive personal information, such as mobile phone number, address and profession, to access services.
Large tech platforms in China cede extensive surveillance and control capabilities to the Chinese state. There remains, however, friction between the state and platform operators. Reporting on privacy abuses and on the use of technology to exploit Chinese workers has caused significant public outcry. The CPC has publicly stressed the need for technology platforms to serve the public and has regulated to that end, though Chinese regulations have focused on business rather than on the state’s own surveillance capacities. The Cybersecurity Law (2017), the Data Security and Personal Information Protection Laws (2021) and, most recently, the Internet Information Service Algorithmic Recommendation Management Provisions (2022) have all led to significant changes in platform design and business practices, as the state looks to curb platform power and emphasize its position as steward of the Chinese people.
No government has had greater success in carving out a national internet than China. Chinese state power over its domestic internet is likely the envy of authoritarian regimes around the world. The Beijing effect may therefore be to provide an ideal for authorities looking to secure or justify greater control over their citizens’ experience of the web. But it is less likely to become a model to replicate. This is partly due to the strength of US companies’ global presence, and partly to the immense domestic resource required to manage the internet in the way China does. However, a global shift away from the traditional rule-making for digital technologies associated with European approaches towards standardization as a model for internet governance would likely strengthen the Beijing effect, given China’s head-start in engaging with and influencing global telecommunications and digital standards bodies.
The UK’s approach
- The UK’s approach to domestic digital platform regulation is largely driven by a public conversation about online harms, with decision-makers keen to be seen to tackle high-profile instances of harm to individual users on the major platforms. This emphasis is in part tempered by concerns among some politicians, academics, public figures and citizens about over-regulation of speech.
- Global human rights frameworks are not key forces in shaping the UK’s approach to platform regulation. However, a focus on scrutinizing platform systems and on transparency aligns UK regulation methodologically with other global approaches.
- Despite this approach having broad international appeal, London’s influence on global regulatory norms may be limited by political barriers to international cooperation.
In March 2022, almost three years after the initial Online Harms white paper launched the debate about digital regulation in the UK, the government published its Online Safety Bill. In the intervening period the bill underwent significant revisions, and, even since this analysis was completed in autumn 2022, it has been substantially amended in both houses of parliament – for example, by removing some provisions relating to legal but harmful content for adult users and by strengthening the requirements for platforms to verify the age of all users. The bill became law in October 2023.
Approaches to British digital platform regulation have largely been driven by a vocal and high-profile public conversation about online harm, and heavily informed by criminal legal norms.
Beginning in earnest around 2014 and prompted in part by the proliferation of content associated with Islamic State, media coverage of online platforms in the UK has for a decade now been relentless in highlighting harms and demanding action from the UK government against the largest and most influential platforms.
Civil society in the UK, however, remains split on the issue. Proponents of far-reaching platform regulation are led by children’s charities, high-profile whistleblowers and well-known voices in the media calling for issue-specific regulations. For example, the broadcaster and consumer rights campaigner Martin Lewis successfully called for the inclusion of scam advertising in the bill, while the model and television personality Katie Price led a campaign demanding ID verification as part of creating a social media account. On the other side of the debate, internet freedom and civil liberty organizations – including, among others, Article19, Demos, Liberty and the Open Rights Group – have raised significant concerns about the compatibility of proposed regulations with legal obligations, democratic norms and protections for freedom of expression, privacy and non-discrimination. Approaches to platform regulation in the UK coalesce around these two poles: a majority wanting to be seen to be tough on platforms, protecting children and tackling harm online; and a minority concerned about the implications for existing rights and freedoms in the UK.
Criminal law frameworks have had a significant influence in shaping the UK’s approach to platform regulation. More imaginative approaches, centred on the establishment of a statutory duty of care for adults, have largely been replaced by the criminalization of particular types of content or user behaviour: for instance, disinformation is now covered under a new criminal offence of foreign interference, established in the National Security Act of July 2023. Tackling cyberflashing also required a new criminal offence. Provisions on legal but harmful content were dropped from the Online Safety Bill before its approval.
Human rights frameworks have not featured prominently in the UK’s approach to regulation. Where rights are mentioned, they mirror the US approach in prioritizing freedom of expression. This emphasis is exacerbated by a political desire to diverge from EU approaches following Brexit.
Although the UK is an important market for major platforms, regional attention will be firmly on the EU and its approach to regulation. Moves by the UK to share its own view of best practice through a network of global digital platform regulators have been welcomed in Australia, Fiji and Ireland, but cooperation with regulators elsewhere is stymied by misalignment on what content to regulate and how. Given the significant resourcing behind the Office of Communications (Ofcom) and Ofcom’s commitment to publishing guidance for regulated platforms around the Online Safety Act’s passage, the UK may have gained some traction internationally by being a first mover in defining aspects of digital platform regulation.
US approaches
- US approaches to domestic digital platform regulation are rooted in the prioritization of market economics and promotion of a business agenda that provides space for tech companies to flourish and flexibility for states to define their own priorities.
- Individual states approach platform regulation in different ways. For example, California and Florida take widely divergent positions on the purpose, extent and deployment of appropriate platform regulation.
- The language of civil rights underpins US conversations surrounding a rights-based approach to platform regulation. The perspective and tone of existing laws and proposals focus on the US Constitution and Bill of Rights, rather than the Universal Declaration of Human Rights (UDHR) and other international legal mechanisms. This includes a heavy emphasis on the First Amendment of the Constitution and the US culture of litigation.
The US is home to dominant social media firms such as Google, LinkedIn, Meta (owner of Facebook, Instagram and WhatsApp), Pinterest, Snapchat and X (formerly Twitter). This capital – cultural, economic and social – provides the US with the capacity, connections and resources to dominate the platform governance landscape. But up to now, US legislation has sought to defend platform autonomy, putting the US at odds with other jurisdictions pushing for greater intervention. A historic reliance on industry standards rather than regulation has not translated well to online platforms.
Language used at both ends of the US political spectrum has changed in recent years, with both Democrats and Republicans criticizing the autonomy afforded to platforms in making decisions on content moderation. Growing political polarization, however, limits the scope for bipartisan agreement on platform regulation. State positions are further apart still. In September 2022, California passed bill AB 587, which requires social media companies to submit reports to the state by January 2024 on content moderation and policy decisions. Proponents claim this legislation is aimed at tackling ‘hate and disinformation’. Meanwhile, officials in Florida are seeking to limit the extent to which platforms can moderate content at all. While there is no comprehensive national consensus on regulation, broad agreement among legislators on the problems caused by the lack of intermediary liability for platforms (often referred to as Section 230, after the relevant section of the 1996 Communications Decency Act) is quickening the development of proposals attracting more bipartisan support, such as the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, the Kids Online Safety Act and the Platform Accountability and Transparency Act. Regulatory change that challenges platform businesses is likely to be further slowed by industry lobbying: technology companies spent a reported $55 million on lobbying the US federal government in 2021.
The US is also unlikely to promote an approach based on international human rights law (IHRL) and international standards, as the civil rights movement – rather than human rights frameworks and language – has historically provided the basis for the defence of minority and constitutional rights. Recent court cases and calls for legislative change use language specific to domestic US protections for civil rights and freedoms, such as the First Amendment of the US Constitution, rather than fundamental and universal rights such as those set out in Article 2 of the UDHR. For example, the Anti-Defamation League’s report and subsequent policy on preventing anti-Semitic hate and harassment on social media focused solely on US civil rights.
Whether California will capitalize on its internal power and sway the US debate in favour of closer regulation is yet to be determined. However, given the lack of federal-level alignment, a singular US approach to digital platform regulation is extremely unlikely to emerge in the near future. Although the EU–US Trade and Technology Council acts as a forum for debate and exchange on digital transformation and cooperation, the EU is therefore likely to remain the leading voice worldwide in calling for greater regulation.
Consensus and cooperation
As the previous sections show, wide gaps remain between the major centres of political power driving digital regulation. The US’s constitutional commitments to freedom of expression and its hesitancy to intervene in markets will be the determining forces in shaping the web, as US tech companies continue to dictate the rules and norms for the digital tools used by the global majority. Nevertheless, US dominance has not deterred authorities and jurisdictions with conflicting values. European regulations on data, platforms and digital advertising have put significant pressure on the dominant tech companies, with many of those companies adapting their products globally to meet European standards. Meanwhile, post-Brexit, the UK wants to be seen as providing a ‘third way’ on technology, balancing the twin aims of enabling growth and ensuring safety. It remains uncertain whether the Online Safety Act passed in October 2023 will add to the UK’s credibility on platform governance. China’s decision to foster its own digital ecosystem and strictly maintain its barriers is the clearest obstacle to any attempt to establish a global governance framework for online platforms. The Chinese vision is not an exceptional one, even if costly and difficult to implement. Many states worldwide would, given the choice, pursue greater digital sovereignty at the expense of global connectivity.
Despite this divergence, powerful forces are pulling in the other direction, towards greater alignment. Many would argue that a global internet is a good worth pursuing in and of itself – indeed, universal global connectivity by 2030 is one of the UN’s Sustainable Development Goals. Demand from citizens and business for digital services hosted or operated by international companies is strong and growing, and participation in the global economy has for decades now been predicated on digital infrastructure provided by online platforms. For instance, in 2014 restrictions on access to the open-source software development platform GitHub and a series of other platforms in India were quickly reversed after an outcry from the country’s tech industry. Current internet infrastructure is by design better suited to openness and connectivity than to the imposition of national borders.
In the near term, only those countries or geographies with both sufficient will and sufficient resources will be able to pursue a strategy of disconnecting from the US–EU version of the web, described in depth in the Four Internets paper by Wendy Hall and Kieron O’Hara. It is probable that only China has both the will and the ability to build and maintain the full stack of digital infrastructure required to break away entirely, with the rest of the world becoming in effect a vast ‘Venn diagram’ of porous internets built around national languages, cultures and platforms, but accessible to all and subject only to crude controls. Such control is more likely to be exercised through blocking access to individual websites or to the internet itself than by implementing new standards or protocols. Even China must allow some internet traffic through the ‘Great Firewall’ in support of national and international businesses operating in the country. In the medium to long term, though, Chinese leadership – as demonstrated through trade agreements and influence in international standards bodies – and the export of Chinese digital standards and infrastructure could bring other countries into the Chinese internet.
Strong reasons for maintaining the status quo remain. The internet familiar to most users is shaped by an uneasy digital hegemony negotiated between the US and the EU. Access to digital services, markets and platforms is enormously significant to businesses and citizens around the world. However, the process of agreeing joint roadmaps, principles and regulation for digital goods and services between the EU, the US and their partners is fiercely contested. Recent regulatory initiatives like the EU’s DSA and the UK’s Online Safety Bill have prompted significant criticism from prominent voices in the US tech sector, such as Signal’s Meredith Whittaker on encrypted communications and Wikimedia’s Rebecca MacKinnon on age verification. Meanwhile, US inaction exasperates regulators on the other side of the Atlantic. Countries outside of traditional multilateral forums feel frustrated and unable to influence the technological landscape that their citizens increasingly depend on. While global regulatory alignment is unlikely, better cooperation and dialogue between countries reliant on shared digital infrastructure are essential. Threats made by both companies and governments to withdraw services or raise barriers should not be taken lightly.