To recalibrate the AI discourse, we need to shift our perspective, seek new policy assumptions, plan for the worst, and think pre-emptively about whether certain AI applications are actually worth pursuing.
In light of the issues and opportunities described in the preceding chapters, this paper makes the following recommendations.
Recognize assumptions as assumptions. All parties should actively flag when an assumption – as opposed to a ground truth – is being used as the basis for a policy, and provide a framework for considering that assumption’s consequences and counterpoint(s). For example, policymakers who recognize that the success of a particular government action hinges on the assumption that growth in the performance of AI systems will continue to be linear in the years ahead could consider the harms or losses the policy might incur if AI performance flattens. Such an assessment should integrate non-technical perspectives, which could highlight potential externalities that would not be immediately obvious in a strictly technical analysis.
Recognize who these assumptions serve and consider whether they are representative of all stakeholders. As noted throughout this paper, policy assumptions are rarely neutral. They tend to serve a particular set of stakeholders’ interests. Identifying the interests embedded in assumptions will make it easier to highlight the political and ideological drivers of the AI discourse, and raise key questions as to whether any resulting policy would be truly representative of the full span of groups that are likely to be impacted by it.
Explore alternative or additional policymaking assumptions. Anticipatory governance necessarily relies on assumptions, so stakeholders should not shy away from making them altogether. Rather, they should seek out grounded, inclusive assumptions that can guide AI policy alongside those that are already widespread.
Hope for the best but plan for the worst. A genuinely anticipatory style of AI governance anticipates failure and success in equal measure. That is, in addition to seeking the best-case scenario for AI development and implementation, policy measures should emphasize preventing the worst potential outcomes. When considering a potential role for AI or a potential set of policies, stakeholders should war-game the most detrimental outcomes of that role or policy and, as needed, include measures to hedge against them. If acted upon in good faith, such an attitude need not stand at odds with the technical community’s right to experiment, explore and innovate.
Measure state capacity to adopt AI in a way that truly serves the common good. Metrics of state AI capacity or AI readiness should be expanded to include factors such as the openness and transparency of institutions; freedom of civil society and the press; the rule of law; economic equality; and educational attainment in fields beyond science, technology, engineering and mathematics.
Subject AI applications and organizations to ex ante audits. Today, there is a strong evidential basis for the claim that some applications of AI are simply not worth pursuing, either because their benefits could never outweigh their harms or because we lack the technical or sociotechnical capacity to ensure that they will be ethical. Yet, as noted above, the discourse often leaves little room for a thoroughly anticipatory evaluation of potential risks. One way to counteract AI risks is therefore to establish a pre-development assessment that weighs a proposed system’s risks against its expected benefits. In determining whether to engage in a particular government application of AI, or to support a private entity’s development of an AI system for a novel application (say, through R&D grants), states could develop a process (perhaps executed by an independent body) that exhaustively evaluates:
- The anticipated net benefit of the system. This assessment must be based on a reasonable estimation of technical capacity (that is, an expected benefit cannot be contingent on an as yet unachieved technical breakthrough);
- The risks of the proposed system, both primary (e.g. what would happen if the system fails?) and secondary (e.g. would it require the creation or diffusion of a dataset that is vulnerable to abuse or attack?);
- Whether the entity that is developing and deploying the AI system has the appropriate sociotechnical capability/readiness to create a safe, fair system in a transparent and accountable manner;
- Whether the developing and deploying entity will have the capacity to consistently monitor, respond to, and be held accountable for unanticipated primary and secondary effects;
- Whether the system’s ethical problems have clear, known solutions, or whether addressing them would instead rely on unproven technical measures; and
- Whether non-technical or non-AI solutions (which have an existing regulatory infrastructure) could be used in place of the AI system to solve the same challenge with fewer risks.
Such evaluations would better enable states to predict and avoid risks and harms, and to focus more tightly on proven technologies and solutions. While there are certainly difficulties in anticipating all of the issues that AI might exhibit in real life, this process could be a valuable step in tempering strategies and policies that would otherwise encourage organizations to experiment with the technology as widely and quickly as possible, without regard for either the risks of doing so or the possibility that the system will not actually generate any gains.