The core assumptions of AI policy are in dire need of recalibration. Drawing from a wider range of evidence and perspectives would result in safer and more equitable outcomes for AI policy.
With startling consistency, artificial intelligence (AI) policy is underpinned by common assumptions about how AI will contribute to economic, societal and military advantage, how such ‘AI power’ can be harnessed, and how the technology’s known risks can be averted. As AI policies across the globe pass from theory to practice in the coming years, these common assumptions must keep pace with the facts. They must also be clear-eyed enough to account for all possible risks. Most importantly, these assumptions must be representative of the interests of all stakeholders who will be impacted by the technology and the rules that govern it.
So far, this has not been the case. Common AI assumptions are so often repeated that one would think they reflect a preponderance of evidence. But many of these assumptions are more like opinion than truth. They are dominant simply because they are unyielding to facts that challenge their status, hostile to caveats and inimical to nuance. And these views are not universal. Rather, they tend to reflect and serve a narrow but powerful set of interests, while minimizing perspectives from beyond the Global North and the male-majority tech sector, as well as perspectives centred around equality, sustainability and humanistic – in other words, non-technical – approaches to societal problems.
This does not bode well for the coming years of AI policy. The greater the gap between a policy assumption and the facts and people it supposedly represents, the greater the risk that measures built upon that assumption could result in harm. The more tightly that a policy adheres to assumptions that serve the narrow interests of one set of stakeholders, the less likely it is that the benefits of that policy will be distributed widely or fairly. The less amenable the policy sphere makes itself to voices that do not buy its received truths, the harder it will be to draw from the full spectrum of solutions that are needed to build robust and equitable outcomes.
This paper makes a bid to recalibrate the AI policy discourse. It highlights, analyses and offers counterpoints to four core assumptions of AI policy: 1) that AI is ‘intelligent’; 2) that ‘more data’ is a requisite for better AI; 3) that AI development is ‘a race’ among states; and, 4) that AI itself can be ‘ethical’. It focuses on these four assumptions because they have gone particularly unchallenged in policy documentation, and because they demonstrate how real harms can result from policy that is built upon assumptions that negate counterpoint perspectives. In challenging these assumptions, the paper offers a rubric for addressing other problematic AI assumptions. By illustrating how a more evidence-based, inclusive discourse yields better policy, it advocates for an ecosystem of policy innovation that is more structurally diverse and intellectually accommodating.
Some disclaimers
Though this paper critiques some of the fundamental assumptions underpinning government efforts to get ahead in AI, it does not advocate against governments taking seriously the disruptive potential of these technologies. States and their citizens have a sovereign right – within the boundaries of national and international law – to reach for the opportunities of novel algorithmic systems. But while optimism and a competitive spirit may be key drivers of technical progress, they are a poor basis for safety-critical regulations. This paper therefore advocates for responsible policy that does not shy away from deeming certain applications of AI undesirable, or certain institutions not yet ‘AI ready’. It calls for parties to recognize that, given the power of AI (and the power of the organizations wishing to use it), the cost of acting with a critical eye may often be far lower than the cost of acting unquestioningly on an overly optimistic assumption that later turns out to be wrong.
This is not to say that this paper proposes its own anti-risk dogma. It is possible that as a society we can accept a degree of risk in exchange for the possibility that someday AI might make good on its promise. But that is only an acceptable conclusion to reach if it has the buy-in of all relevant stakeholders – especially those who are most likely to suffer from the risks. If some parties to the debate object to the mere suggestion that a particular application of AI might not be worthwhile, or that attaining truly ‘ethical AI’ is not a foregone conclusion, it will be impossible to achieve such consensus.
Nor does this paper argue that all policy assumptions are, in and of themselves, a bad thing. Any policymaking for a nascent technology will rely on some degree of supposition about what that technology will and will not do in the future. Certain widely held AI assumptions have already proven to be useful. For example, it is often noted in AI policy documents that all AI systems are liable to exhibit biases against certain groups. While there are, of course, exceptions to this assumption, policymakers can use it to ensure that the possibility of bias is never neglected in proposed measures and instruments. Yet even in this case, if those holding the assumption refuse to engage with counterpoints and emerging contrary evidence, it could become problematic for policy to continue to adhere strictly to that assumption.
So, let this much be clear: any AI policy assumption is liable to become harmful dogma if not held open to honest, good-faith challenges. The most transformative AI policies will be those that engage with uncomfortable counterpoints to all predominant assumptions and with under-represented perspectives – not just with the loudest voices in the room.
This paper is intended for a cross-cutting audience of parties to the discourse on AI strategy and policy, including policymakers, private sector stakeholders, commentators and advocacy groups. It is based on a review of national AI strategies, policy documents, AI bills, technical literature and critical commentary. Input was also collected through a virtual expert roundtable that was hosted by Chatham House on 2 March 2022.
Four AI assumptions and their counterpoints
Assumption: Artificial intelligence has unlimited potential to execute any task that ordinarily requires human intelligence, input, oversight and judgment.
Counterpoint: The technologies currently referred to as ‘artificial intelligence’ are inherently limited in their capacity to replicate human intelligence. Rather, they have only demonstrated themselves capable of imitating facets of human intelligence in certain narrow tasks, and they could continue to be ineffectual for certain applications for many years to come.
Assumption: A principal enabler of AI development and deployment is data. Therefore, states wishing to increase their AI capacity should endeavour to collect, consolidate and distribute the greatest volume of relevant data.
Counterpoint: Not all applications of AI will necessarily benefit from the collection, centralization and distribution of data. Furthermore, any data collection and distribution activity carries serious risks. In some cases, those risks may outweigh the anticipated gains the data might yield for AI development.
Assumption: In order to succeed in international power competition, states must develop and deploy AI more widely and more quickly than their adversaries and peers.
Counterpoint: A race-like approach to technology development could stand at odds with a state’s capacity to adopt AI in a way that truly serves the common good.
Assumption: Ethical principles can be encoded into AI.
Counterpoint: Achieving ‘ethical AI’ requires expansive measures that extend far beyond strictly technical fixes, including – potentially – uncomfortable organizational and societal reform.