Responsible AI and Society

This project aims to foster a lively public debate on what responsible AI means in practice.

Server room network cables in New York City, November 2014.

Measuring and understanding the impacts of AI systems is a critical democratic requirement.

Around the world, governments and international bodies are recognizing that new norms and approaches are needed to mitigate harm and improve on existing solutions.

At the same time, those outside industry are often ill-equipped to assess whether an AI developer’s actions are consistent with their stated principles. This project seeks to bridge that gap.

Building on its existing research on AI governance, and in cooperation with Google, Chatham House is hosting a series of inclusive, diverse and policy-relevant activities on the responsible development of AI and its impact on online information systems.

The goal of this project is to foster a lively public debate on what responsible AI means in practice and on its risks and benefits for our common information space, as well as to provide a multi-stakeholder consultative platform to inform its development.

This project is supported by Google and the Omidyar Network.