UK policy responses to AI have focused on promoting private sector innovation. But widespread growth from AI is unlikely until it has earned the public’s trust. To build AI systems that strengthen fairness, honesty and creativity across the UK, a new public-service AI institution is needed – in short, a kind of BBC for AI.
We are told the fourth industrial revolution is here, and that the UK is on the front foot. Since the launch of the UK’s National AI Strategy in 2021, the promise of artificial intelligence (AI) to unleash ‘productivity, growth and innovation across the private and public sectors’ has been a common political refrain. AI seems to be everywhere – except, for now, in the productivity statistics. Techno-optimistic soundbites from government and industry have done little to improve the UK’s public mood around technology – a mood subdued by warnings of job losses, by injustices like the Horizon IT scandal, and even by fears about the possibility of AI-driven extinction. One recent poll showed that only 18 per cent of British people are optimistic about AI.
AI’s trust problem must be addressed. UK policymakers must recognize that the link between AI inventions and productivity growth is not automatic. Instead, history shows that such growth will only occur when the public trusts technologies enough to adopt them deeply into daily economic life. That’s why the UK needs a new public-service AI institution to help society navigate the technological changes ahead: a ‘British AI Corporation’ or BAIC, modelled roughly on the BBC. Such an institution could earn the public’s trust by building accountable AI systems that help solve important problems, powered by a self-funding financial model that can sustain this essential 21st-century infrastructure indefinitely.
In search of a trustworthy partner
Early UK efforts to strengthen public trust in AI have been insufficient. This partly reflects the fact that the public sector has largely forfeited its role as a builder of trustworthy technologies in general, most often by outsourcing technical capabilities to the private sector. Despite the extraordinary public outcry after Fujitsu’s failed Post Office computer systems, justice has been slow and the company’s technology remains embedded in UK public infrastructure. Meanwhile, a contract worth up to £330 million to build a data platform for the National Health Service (NHS) was recently awarded to a consortium of private companies led by US-based Palantir, in spite of opposition from the British Medical Association and the Doctors’ Association UK, the latter of which called for work to be paused to ‘ensure public trust, value for money, a trustworthy partner and patient consent’.
At the same time, there is a growing sense that the private sector also cannot be relied on as the only route to trustworthy AI systems. The next wave of technologies is on track to be dominated by the same handful of international firms responsible for the last 20 years of consumer-facing digital technology. This concentration of market power is not just bad for competition. It also promises to create a lasting source of mistrust, as the gulf widens between the goals of the public and those of the private AI labs already locked into a race to secure market dominance by being the first to build human-level intelligences.
Thus far, the UK government’s strategy has been to navigate this tension by creating innovative regulations that will make these ‘frontier AI systems’ safer. But policymakers have not engaged at all with the primary concerns of the public – namely, the degree to which AI will affect jobs and society. As a result, polling shows that only 18 per cent of the British public trust tech companies to build AI responsibly, and that only 14 per cent trust the government to regulate it responsibly. There is little reason to believe the situation will improve. That is why a new approach is needed.
One chapter in British history illustrates how transformative technologies can both drive growth and increase trust if a public option is empowered to lead the way. After the First World War, Britain looked nervously across the Atlantic as the new technology of the day – radio – became an overnight commercial success in the US. The UK government was faced with a dilemma. Radio offered the prospect of supercharging growth in the nascent domestic electronics manufacturing sector. Yet early radio culture was seen as a dangerous outgrowth of American capitalism, powered by machines resembling scientific apparatus that no one wanted to bring into their homes.
It was at this moment that the BBC was founded in 1922 to find a way to drive technological adoption by balancing the need for growth with the need to protect British values. These goals were built into the BBC’s institutional design. Its original funding was directly tied to growth: it took a cut of the income from every radio set sold. This meant the BBC had to work hard to invent uses of the new technology that the British people would actually want. In the early days, as the BBC’s first director-general, John Reith, put it, ‘Few knew what they wanted, fewer what they needed.’ That’s why the BBC decided to go further, aiming ‘to carry into the greatest number of homes everything that was best in every department of human knowledge, endeavour and achievement; and to avoid whatever was or might be hurtful’. This ambition to meet public needs was soon enshrined in a mission to ‘inform, educate and entertain’.
The bet to build a public broadcaster paid off. Technological adoption and manufacturing growth were swift. Yet only in the long run has the true impact become clearer. Just as railways connected the regions of the UK in the 19th century, the BBC became a key part of the infrastructure of 20th-century British life, dependably connecting citizens with the arts, the state, and the truth.
These outcomes stand in sharp contrast to the bitter experiences of recent infrastructural history. With examples ranging from misinformation on social media to sewage in our rivers, we have seen the dangers that arise when private interests control the networks that connect us. That is why the time is right to invest in a trustworthy new AI partner that operates in the public interest.
A British AI Corporation
Drawing upon the UK’s rich history of innovative public infrastructure, the government should establish a public option for AI by creating a new ‘British AI Corporation’ – a BAIC rather than a BBC, as it were. This new institution would ensure that everyone has access to powerful, responsibly built AI capabilities. Yet the BAIC should be more than just a head-to-head competitor with the private AI companies. It should be set up with an institutional design that empowers it to chart an independent path, building innovative digital infrastructure in the public interest.
The BAIC should be founded with a clear charter to which it can be held accountable. At the heart of this charter must be a mission with the clarity and timeliness of the BBC’s: to build AI systems in the public interest that strengthen fairness, honesty and creativity throughout the UK. This mission would ensure that rather than focusing on the most profitable or amusing use cases, the BAIC would be compelled to address problems that matter most to the British public: shaping a fairer society rather than increasing inequalities, amplifying the truth and not misinformation (or disinformation), and ensuring that AI empowers artists rather than automating away creativity.
Achieving these goals will require creativity in turn. That’s why a charter must be complemented with seed funding to ensure the institution’s independence. Such funding would grant the BAIC time and space to experiment, fail and learn, as start-ups in the private sector are often able to do. To reduce costs, the BAIC should be given preferential access to the new public computing infrastructure being built across the UK, conditional on the new institution complying with its mission. With that support, a modest initial investment in the order of £250 million would immediately make the BAIC one of the largest players in the London tech scene, allowing it to hire hundreds of experts at market rates for several years to bring world-class AI systems to market.
Lastly, the new institution must be fully incentivized to build AI systems that the public wants to use. This can be achieved via a financial model predominantly underpinned by AI product usage rather than by state funding. This growth-based funding was one of the secrets of the BBC’s early success. Depending for revenue on the public uptake of BAIC-developed AI systems would focus minds on removing barriers to adoption. Such a model would generate revenues that both finance the institution’s operations and allow it to invest in future innovation.
Building trustworthy infrastructure
With these institutional design features setting it up for success, the BAIC could start solving problems that matter most by building trustworthy AI infrastructure that includes all of society. First, a new BAIC should tackle AI’s trust problem by becoming a model for a publicly accountable tech company. Many currently opaque decisions throughout the AI development process could and should be subjected to public input and scrutiny: ranging from deciding which problems to solve at the start of the planning process all the way ‘downstream’ to ensuring that outputs generated by AI systems align with shared values. The BAIC will inevitably get some things wrong. When this happens, it must admit its mistakes, change direction and embrace its accountability to the public.
Trust could also be earned by building AI systems in a fair way that strengthens the commons rather than enclosing it. While many profit-driven tech companies say they want to put the best in every department of human knowledge into users’ hands, few are willing to pay for this. Instead, they are often incentivized to ‘free-ride’ by scraping the public domain for data without permission. The BAIC could and should play fairly. It could partner with cultural institutions that have the vital role of maintaining and expanding our collective knowledge, and voluntarily pay for access to their ‘data troves’. This public-interest data infrastructure would ensure that, as AI grows, so too would the funding available to the institutions working hard to steward the UK public’s inherited commons responsibly.
Second, the BAIC should tackle the UK’s productivity growth problem by doing more to address the concerns of businesses and employees. At present, the AI sector is extremely difficult for most companies and organizations to influence, much less compete in, due to the presence of a few well-funded incumbents focusing primarily on their own market share. This makes it harder to achieve bottom-up growth that could drive productivity. A new BAIC could build infrastructure that the Competition and Markets Authority has identified as having the potential to lower barriers to entry, such as systems to make it easier to switch models and datasets across platforms. This work might do for AI what Channel 4 did for broadcasting, by helping to create a new ecosystem in which small firms can build world-class services and share them with the world.
The BAIC could also stimulate growth by building AI that makes the workplace fairer, addressing employee concerns that AI will spark a ‘job apocalypse’. The public are understandably concerned that disruption will occur as new technologies develop faster than policymakers can regulate them effectively. As an AI developer accountable to the public for making life in the UK fairer, the BAIC would offer an extra layer of protection. The BAIC could set out a principled vision for ethical automation, and then lead the way by translating this vision into useful products. The BAIC could innovate by prioritizing capabilities that complement rather than replace labour. And if certain functionality is found to decrease fairness, an independent BAIC could price it differently to reduce its impact or could simply switch it off. These safeguards could reduce resistance to automation and spark responsible productivity growth.
Finally, a BAIC could help ease the public sector’s dependence on private contractors. Although there have been promising early signs of the British civil service investing in the recruitment of in-house AI talent, the direction of travel is still very much towards contracting with private AI platforms to design and supply government systems. This not only creates privacy risks and threatens to be poor value for money – it also represents a missed opportunity to inspire public sector innovation. Instead, a BAIC could identify the core feature sets required by both civil servants and the public for a given AI application, and start competing for the relevant contracts. The resulting product would become shared infrastructure owned by the public. This would transform procurement from a process in which millions (or billions) of pounds disappear into private sector contracts into a transparent process of public investment – giving taxpayers a better deal and establishing a pathway towards long-term financial sustainability for the new BAIC.
With a roadmap deeply aligned with their own values and priorities, members of the British public could stop worrying about AI, and simply get on with finding clever ways to integrate it into their lives.