Tackle the ‘Splinternet’

Competing governance visions are impairing efforts to regulate the digital space. To limit the spread of repressive models, policymakers in the West and elsewhere need to ensure the benefits of an open and well-run system are more widely communicated.

Expert comment | Published 12 June 2019 | Updated 14 October 2020

The development of governance in a wide range of digital spheres – from cyberspace to internet infrastructure to emerging technologies such as artificial intelligence (AI) – is failing to match rapid advances in technical capabilities or the rise in security threats. This is leaving serious regulatory gaps, which means that instruments and mechanisms essential for protecting privacy and data, tackling cybercrime or establishing common ethical standards for AI, among many other imperatives, remain largely inadequate.

A starting point for effective policy formation is to recognize the essential complexity of the digital landscape, and the consequent importance of creating a ‘common language’ for multiple stakeholders (including under-represented actors such as smaller and/or developing countries, civil society and not-for-profit organizations).

The world’s evolving technological infrastructure is not a monolithic creation. In practice, it encompasses a highly diverse mix of elements – so-called ‘high-tech domains’,[1] hardware, systems, algorithms, protocols and standards – designed by a plethora of private companies, public bodies and non-profit organizations.[2] Varying cultural, economic and political assumptions have shaped where and which technologies have been deployed so far, and how they have been implemented.

Perhaps the most notable trend is the proliferation of techno-national regimes and private-sector policy initiatives, reflecting often-incompatible doctrines in respect of privacy, openness, inclusion and state control. Beyond governments, the interests and ambitions of prominent multinationals (notably the so-called ‘GAFAM’ tech giants in the West, and their ‘BATX’ counterparts in China)[3] are significant factors feeding into this debate.

Cyberspace and AI – two case studies

Two particular case studies highlight the essential challenges that this evolving – and, in some respects, still largely unformed – policy landscape presents. The first relates to cyberspace. Since 1998, Russia has established itself as a strong voice in the cyberspace governance debate – calling for a better understanding, at the UN level, of ICT developments and their impact on international security.

The country’s efforts were a precursor to the establishment in 2004 of a series of UN Groups of Governmental Experts (GGEs), aimed at strengthening the security of global information and telecommunications systems. These groups initially succeeded in developing common rules, norms and principles around some key issues. For example, the 2013 GGE meeting recognized that international law applies to the digital space and that its enforcement is essential for a secure, peaceful and accessible ICT environment.

However, the GGE process stalled in 2017, primarily due to fundamental disagreements between countries on the right to self-defence and on the applicability of international humanitarian law to cyber conflicts. The breakdown in talks reflected, in particular, the divide between two principal techno-ideological blocs: one, led by the US, the EU and like-minded states, advocating a global and open approach to the digital space; the other, led mainly by Russia and China, emphasizing a sovereignty-and-control model.

The divide was arguably entrenched in December 2018, with the passage of two resolutions at the UN General Assembly. A resolution sponsored by Russia created a working group to identify new norms and look into establishing regular institutional dialogue.

At the same time, a US-sponsored resolution established a GGE tasked, in part, with identifying ways to promote compliance with existing cyber norms. Each resolution was in line with its respective promoter’s stance on cyberspace. While some observers considered these resolutions potentially complementary, others saw in them competing campaigns to cement a preferred model as the global norm. Outside the UN, there have also been dozens of multilateral and bilateral accords with similar objectives, led by diverse stakeholders.[4]

The second case study concerns AI. Emerging policy in this sector suffers from an absence of global standards and a proliferation of proposed regulatory models. The potential ability of AI to deliver unprecedented capabilities in so many areas of human activity – from automation and language applications to warfare – means that it has become an area of intense rivalry between governments seeking technical and ideological leadership of this field.

China has by far the most ambitious programme. In 2017, its government released a three-step strategy for achieving global dominance in AI by 2030. Beijing aims to create an AI industry worth about RMB 1 trillion ($150 billion)[5] and is pushing for greater use of AI in areas ranging from military applications to the development of smart cities. Elsewhere, the US administration has issued an executive order on ‘maintaining American leadership in AI’.

On the other side of the Atlantic, at least 15 European countries (including France, Germany and the UK) have set up national AI plans. Although these strategies are essential for the development of policy infrastructure, they are country-specific and offer little in terms of global coordination. Ominously, greater inclusion and cooperation are scarcely mentioned, and remain the least prioritized policy areas.[6]

Competing multilateral frameworks on AI have also emerged. In April 2019, the European Commission published its ethics guidelines for trustworthy AI. Ministers from Nordic countries[7] recently issued their own declaration on collaboration in ‘AI in the Nordic-Baltic region’. And leaders of the G7 have committed to the ‘Charlevoix Common Vision for the Future of Artificial Intelligence’, which includes 12 guiding principles to ensure ‘human-centric AI’.

More recently, OECD member countries adopted a set of joint recommendations on AI. While nations outside the OECD were welcomed into the coalition – with Argentina, Brazil and Colombia adhering to the OECD’s newly established principles – China, India and Russia have yet to join the discussion. Despite their global aspirations, these emerging groups remain largely G7-led or EU-centric, and again highlight the divide between parallel models.

The importance of ‘swing states’

No clear winner has emerged from among the competing visions for cyberspace and AI governance, nor indeed from the similar contests for doctrinal control in other digital domains. Concerns are rising that a so-called ‘splinternet’ may be inevitable – in which the internet fragments into separate open and closed spheres and cyber governance is similarly divided.

Each ideological camp is trying to build a critical mass of support by recruiting undecided states to its cause. Often referred to as ‘swing states’, the targets of these overtures are still in the process of developing their digital infrastructure and determining which regulatory and ethical frameworks they will apply. Yet the policy choices made by these countries could have a major influence on the direction of international digital governance in the future.

India offers a case in point. For now, the country seems to have chosen a versatile approach, engaging with actors on various sides of the policy debate, depending on the technology governance domain. On the one hand, its draft Personal Data Protection Bill mirrors principles in the EU’s General Data Protection Regulation (GDPR), suggesting a potential preference for the Western approach to data security.

However, in 2018, India was the leading country in terms of internet shutdowns, with over 100 reported incidents.[8] India has also chosen to collaborate outside the principal ideological blocs, as evidenced by an AI partnership it has entered into with the UAE. At the UN level, India has taken positions that support both blocs, although more often favouring the sovereignty-and-control approach.

Principles for rule-making

Sovereign nations have asserted aspirations for technological dominance with little heed to the cross-border implications of their policies. This drift towards a digital infrastructure fragmented by national regulation has potentially far-reaching societal and political consequences – and implies an urgent need for coordinated rule-making at the international level.

The lack of standards and enforcement mechanisms has created instability and increased vulnerabilities in democratic systems. In recent years, liberal democracies have been targeted by malevolent intrusions in their election systems and media sectors, and their critical infrastructure has come under increased threat. If Western nations cannot align around, and enforce, a normative framework that seeks to preserve individual privacy, openness and accountability through regulation, a growing number of governments may be drawn towards repressive forms of governance.

To mitigate those risks, efforts to negotiate a rules-based international order for the digital space should keep several guiding principles in mind. One is the importance of developing joint standards, as well as the need for consistent messaging towards the emerging cohort of engaged ‘swing states’. Another is the need for persistence in ensuring that the political, civic and economic benefits associated with a more open and well-regulated digital sphere are made clear to governments and citizens everywhere.

Countries advocating an open, free and secure model should take the lead in embracing and promoting a common affirmative model – one that draws on human rights principles (such as the rights to freedom of opinion, freedom of expression and privacy) and extends their application to the digital space.

Specific rules on cyberspace and technology use need to include pragmatic policy ideas and models of implementation. As this regulatory corpus develops, rules should be adapted to reflect informed consideration of economic and social priorities and attitudes, and to keep pace with what is possible technologically.[9]

What needs to happen

  • Demystifying the salient issues, ensuring consistent messaging and creating a common discourse are key to advancing a well-informed debate on global digital governance.
  • The benefits associated with open and well-regulated digital governance should be clearly presented to all stakeholders. For example, the link between sustainable development, respect for human rights and a secure, free and open internet should take priority in the debate with developing countries.
  • International norms need to be updated and reinterpreted to assert the primacy of non-harmful applications of technologies and digital interactions.
  • This process should follow a multi-stakeholder approach to include under-represented actors, such as developing countries and civil society, and should adopt a gender-balanced approach.
  • The design of rules, standards and norms needs to take into account the essentially transnational nature of digital technologies. Rules, standards and norms need to be applicable consistently across jurisdictions.
  • Developing countries should be supported in building their digital infrastructure, and in increasing the capacity of governments and citizens to make informed policy decisions on technology.


[1] Including but not limited to AI and an associated group of digital technologies, such as the Internet of Things, big data, blockchain, quantum computing, advanced robotics, self-driving cars and other autonomous systems, additive manufacturing (i.e. 3D printing), social networks, the new generation of biotechnology, and genetic engineering.

[2] O’Hara, K. and Hall, W. (2018), Four Internets: The Geopolitics of Digital Governance, Centre for International Governance Innovation, CIGI Paper No. 206, https://www.cigionline.org/publications/four-internets-geopolitics-digi….

[3] GAFAM = Google, Amazon, Facebook, Apple and Microsoft; BATX = Baidu, Alibaba, Tencent and Xiaomi.

[4] Carnegie Endowment for International Peace (undated), ‘Cyber Norms Index’, https://carnegieendowment.org/publications/interactive/cybernorms (accessed 30 May 2019).

[5] Future of Life Institute (undated), ‘AI Policy – China’, https://futureoflife.org/ai-policy-china?cn-reloaded=1.

[6] Dutton, T. (2018), ‘Building an AI World: Report on National and Regional AI Strategies’, 6 December 2018, CIFAR, https://www.cifar.ca/cifarnews/2018/12/06/building-an-ai-world-report-o….

[7] Including Denmark, Estonia, Finland, the Faroe Islands, Iceland, Latvia, Lithuania, Norway, Sweden and the Åland Islands.

[8] Shahbaz, A. (2018), Freedom on the Net 2018: The Rise of Digital Authoritarianism, Freedom House, October 2018, https://freedomhouse.org/report/freedom-net/freedom-net-2018/rise-digit….

[9] Google White Paper (2018), Perspectives on Issues in AI Governance, https://www.blog.google/outreach-initiatives/public-policy/engaging-pol….

This essay was produced for the 2019 edition of Chatham House Expert Perspectives – our annual survey of risks and opportunities in global affairs – in which our researchers identify areas where the current sets of rules, institutions and mechanisms for peaceful international cooperation are falling short, and present ideas for reform and modernization.