Artificial Intelligence in International Affairs: Six Things the World Can Do to Prepare

For all of human history, politics has been driven by human activities and the interactions of humans within, and between, networks.

Explainer | Updated 29 September 2020 | 3 minute read

Now, advances in artificial intelligence (AI) hold out the prospect of a fundamental change in this arrangement. As non-human entities are increasingly used in international affairs, our understanding of politics at the highest levels could change radically. Here are six things the world can do to prepare.

1. Governments worldwide should invest in developing — and keeping — home-grown talent and expertise in AI.

AI expertise must not reside in only a small number of countries — or solely within narrow segments of the population — as there is a danger that countries could become dependent on the expertise currently concentrated in the US and China.

Robots are presented at the Beijing International Consumer Electronics Expo in China. Image: Zhang Peng/LightRocket/Getty Images.

2. Corporations, foundations and governments should allocate funding to develop and deploy AI systems with humanitarian goals.

The humanitarian sector could benefit from such systems, which might, for example, improve response times in emergencies. Since such systems are unlikely to be immediately profitable for the private sector, a concerted effort needs to be made to develop them on a not-for-profit basis.

A Red Cross employee works at the collection centre in the city of Toluca, Mexico on 8 June 2018. The Mexican Red Cross sent more than 130 tons of humanitarian aid to people affected by the recent eruption of the Fuego Volcano in Guatemala. Image: Mario Vazquez/AFP/Getty Images.

3. It should not be left to technical experts to understand the benefits — and limitations — of AI.

Better education and training on what AI is — and what it is not — should be made as broadly available as possible. Those developing the technologies would benefit from a greater understanding of the underlying ethical goals.

A student attends a lesson in robotics at the IT Lyceum at the Kazan Federal University. Image: Yegor Aleyev/TASS/Getty Images.

4. Developing strong working relationships between public and private AI developers, particularly in the defence sector, is critical.

Since much of the innovation is taking place in the commercial sector, ensuring that intelligent systems charged with critical tasks can carry them out safely — and ethically — will require openness between different types of institutions.

The U1208 Lab at Inserm studies cognitive sciences in robot-human communication. Image: BSIP/UIG/Getty Images.

5. Clear codes of practice are necessary to ensure that the benefits of AI can be shared widely while at the same time the risks are well-managed.

In developing these codes of practice, policymakers and technology experts should understand the ways in which regulating artificially intelligent systems may be different from regulating arms or trade flows while also drawing relevant lessons from those models.

Saudi Arabian citizen humanoid, Sophia, is seen during the Discovery exhibition on 30 April 2018 in Toronto, Canada. Image: Yu Ruidong/China News Service/VCG/Getty Images.

6. Developers and regulators should pay particular attention to the question of human–machine interfaces.

Artificial and human intelligence are fundamentally different, so interfaces between the two must be designed carefully and reviewed constantly to avoid misunderstandings that, in many applications, could have serious consequences.

The U1208 Lab at Inserm in France studies robot-human communication. Image: BSIP/UIG/Getty Images.

Link to report

The findings of this article are based on a Chatham House report ‘Artificial Intelligence and International Affairs: Disruption Anticipated’ by M. L. Cummings, Heather M. Roff, Kenneth Cukier, Jacob Parakilas and Hannah Bryce.