Responsible development of AI cannot occur in silos. It needs to be jointly and cooperatively guided, through global processes for reconciling competing interests and agreeing priorities. Now is a critical time for action, while innovations such as generative AI are still in their infancy.
In late December 2023, a group of MIT researchers published their discovery of a new class of drug compounds that could kill MRSA, a deadly form of drug-resistant staph bacteria. The researchers used artificial intelligence (AI) to aid in the discovery and to predict the compounds’ potency, an approach that also opens the door to designing more useful drugs in the future. Drug discovery is just one of the many frontiers where experts expect AI to change current paradigms: not only in science but also in work, communication, media and the knowledge economy. A new wave of powerful technologies is showcasing just how far AI has come, both in interpreting almost unimaginably complex data and – in some applications – emulating human-like thought processes.
AI-fuelled change evokes a spectrum of emotions. Leaps forward in medicine and science bring enormous excitement; threats of disruption and questions of safety bring apprehension and concern. These mixed emotions are decades old: fears of technological disruption have run in parallel to the growing centrality of AI to our daily lives.
Empowerment and disruption
Just like any revolutionary general-purpose technology, AI will have diverse impacts. In part, it will empower; in part, it will disrupt and present dilemmas.
In terms of empowerment, AI can make resources and skills available far more widely. For example, AI in translation services has bridged communication gaps on a global scale, fostering collaboration across diverse cultures. ‘Generative AI’ – a type of artificial intelligence that can generate original content, ranging from text, images and music to code and synthetic data, after learning from a set of data inputs – presents an opportunity for more people to draw on legal, educational or medical expertise that would previously have been unaffordable or inaccessible. Generative AI and new learning tools are also revolutionizing teaching. These advances offer personalized and adaptive study experiences that cater to individuals’ learning styles and pace, redefining the educational landscape and challenging conventional structures and norms of knowledge acquisition and dissemination. For example, AI can tailor educational content to meet the individual needs of students, adjusting the difficulty level, suggesting resources based on learning styles, and providing personalized feedback to help students (and teachers) improve. AI’s roles in medical research – such as AlphaFold’s contribution to predicting the structures of proteins with remarkable accuracy – and in the democratization of coding skills through intelligent coding assistants demonstrate AI’s capacity to lower the barriers to entry in many spheres.
If AI is poised to drive a revolution in what is possible in science and technology, it is equally poised to disrupt. Economies, organizational structures, social contracts, and individual beliefs and opinions are all set to change as the next generation of AI becomes widespread. These changes will bring a responsibility to manage the risks and challenges posed by AI: ranging from the tendency of AI-generated content to deviate from factual accuracy – producing what are termed ‘hallucinations’, or misrepresentations of reality – to the redefinition of jobs and of conventional instructional roles and approaches. The impact of AI on labour markets – where, for example, its efficiency and automation capabilities can lead to significant shifts in employment patterns – necessitates a re-evaluation of job roles and skill requirements.
Now is a critical time, while this next stage in the technology is still in its relative infancy, for governments, regulators, businesses, academia and the public to educate themselves and one another about this technology and its impact, and together to prepare for and negotiate the changes – positive and negative – AI will bring. Critical to this will be managing the pressures and competing goals that could impede a coordinated and coherent response, whether across industry or at national or international level.
Rising tensions
The heightened focus garnered by the very public commercialization of generative AI tools means businesses face a more competitive landscape. There is an increased emphasis on speed to market, as companies strive to gain a commercial advantage by adopting more powerful AI tools. Embracing AI can strengthen a business’s technological infrastructure: it can facilitate advances in products or processes, help to attract top talent, expand user or customer bases, and yield valuable insights and possibilities. However, the growth of AI also threatens to affect trust between businesses, potentially weakening prospects for the cooperation critical to effective multi-stakeholder processes.
AI is also poised to disrupt relations between governments. Pressure to ensure that the economic, market and national security benefits of these technologies are reaped locally potentially places governments in a race against one another. Developing regulations that reward national or regional AI development – while placing constraints on the import or export of AI technology – will heighten competition between nations. It could also hinder trust between business and government, for example encouraging more fractured and protectionist policies if governments – suspicious of the reach and intentions of transnational tech firms – seek to restrain such firms’ borderless operations.
AI will also raise new questions about the relationship between citizens and states. Throughout history, shifts in technology have resulted in disruptions and economic hardships for individuals. Governments have often been forced to adapt accordingly, to ensure continued provision of citizens’ basic needs in relation to safety, prosperity and economic opportunity. As AI alters the social contract in new ways, in fields from employment to politics to security, governments will again have to be responsive. And they will have to manage this disruption while also confronting the new risks posed by states that do not share a common purpose – Russia, for example – for which AI potentially offers a tool to strengthen their power, sow division or enable new forms of international aggression.
Where the rise of AI differs most from previous technological shifts, however, lies in the nature of AI itself. AI is inherently complex, is evolving rapidly, and for the first time seeks to mechanize the human ‘thought’ process. This can make it difficult to fully understand or explain. It also makes the trajectory of AI development hard to predict, complicating policy decision-making concerning its risks and impacts, and creating new, existential fears among individuals. The future landscape is unknown, and the role that cognitive labour may have in an AI-driven world is uncertain. With forecasts ranging from the extremes of machine dominance to an AI-powered utopia (with the likely reality being an unknown state somewhere between the two), we are in some ways navigating uncharted waters.
As multiple pressures and competing interests build around the development of AI, it will be critical for humanity to find a common path and pursue the collective interest. In many ways, AI is only as good as the training it is given, the rules and regulatory frameworks that govern its operations, and the specific applications in which it is utilized. Reflecting this breadth in its collective governance will be crucial.
Towards cooperation
There are several immediate steps we can take in navigating the competing pressures around AI development, and in directing that development constructively and towards a common goal.
First, governments and industry should make use of existing capacities. Existing laws on privacy, intellectual property, discrimination, competition and transparency all touch on questions of AI development and deployment. Skills and expertise in international treaty organizations, multilateral institutions, standards bodies, research consortia and open-source communities can support global cooperation. Existing principles on responsible innovation, and frameworks used by businesses and non-governmental organizations (NGOs), could serve as cross-industry models for businesses that build AI, use it or incorporate it in their operations.
Second, where waters are truly uncharted, it is imperative that stakeholders cooperate to identify and address genuine gaps or deficiencies within existing regulatory frameworks, standards and self-governance models. Partnerships between technical standards bodies and regulators could provide greater understanding of whether desired regulations can be put into realistic and executable practices. Such partnerships could support innovation while providing much-needed clarity to enable businesses of all sizes to comply with expectations and best practices for safe and responsible AI development.
Third, cooperation turns on equal access and transparency. This means ensuring wider availability of adequate physical ‘compute’ resources, shared public datasets and AI expertise (access to information and training that enable the development of AI), so that a global community of academic researchers, open-source communities and NGOs can contribute to AI development in an environment in which research and policy formation occur as transparently as possible. Governments that seek to encourage development of AI locally can partner with each other to provide repositories of public data, ‘sandbox environments’ in which to test models safely, and forums in which to discuss responsible AI principles. Companies that develop and use these technologies will require (a) alignment on quality, safety, reliability and fairness benchmarks; (b) the ability to publicly share details around training data without putting their intellectual property or users’ privacy at risk; and (c) neutral forums for developing ‘watermarks’ and other mechanisms for indicating the types of content users may interact with. These steps will support the development of AI that better protects societies from AI misuse or AI-driven misinformation, while driving greater understanding of the benefits of AI-generated content and outputs.
The message is clear. The work to develop AI cannot be done in silos, which means we need to overcome the competing pressures that drive ‘silo-ization’. Leaving critical decisions in this area to be made solely by those who develop the technology – in effect, trusting that complex problems will be solved later – is not an option. Nor can we govern AI without understanding it. The debate needs to expand beyond those small sections of society that traditionally develop digital technology or regulations. It needs to include voices that offer a more diverse representation of society – not only across age, gender and race, but also in terms of geography, profession, culture and economic status.
Crucial to this is recognizing that AI is no longer just a tool, but a general-purpose technology requiring collective governance. This means finding common ground through research, regulation and international cooperation, and agreeing on global priorities while the latest generative AI technologies are still relatively nascent.
All this may sound overwhelming, even insurmountable. But it is not. Humanity has worked through the impacts of complex technologies before: the introduction of the printing press, electricity, the railways and the internet. For over a decade, we have lived with AI being integrated in our lives in ever-increasing ways. We have already started building part of the social contract needed to govern AI. We are not starting from scratch. By committing to a common purpose and investing in the infrastructure of cooperation, we have the potential to shape a more positive and flourishing future society in which AI is used to the benefit of all.