David Shrier is a leading expert on large-scale, technology-driven change. He is a Professor of Practice, AI and Innovation at Imperial College Business School and co-Director of the Trusted AI Institute. He advised the European Parliament on the EU AI Act and has acted as an adviser to the Commonwealth Secretariat during the development of the Commonwealth Fintech Toolkit. He co-founded the Trust::Data Initiative at MIT, bringing private sector companies and governmental organizations together, and created the Data Academy for the United Nations, to provide training on how to use data in humanitarian crises. His next book, ‘Basic AI: A Human Guide to Artificial Intelligence’, is due in January 2024 (Harvard Business Publishing; Little Brown).
We have heard a lot of warnings from big names in AI, and in your latest article in ‘Horizons’ journal you call AI ‘humanity’s greatest existential crisis’. Is the panic around AI justified?
Panic is not justified in any regard, but intense concern and focus absolutely are required. We need to put a great deal more effort into resolving how AI plays out in human society over the next few months and years, as well as how we can leverage it to tackle humanity’s biggest challenges.
AI may be the greatest opportunity to finally resolve some critical issues of inclusion and equity, the climate crisis, and human health and longevity: a host of areas where we’ve been investing for years and haven’t seen the results we want. In fact, and this is a little bit of a paradox, AI may be both the greatest challenge to our construct of labour and the economy, and also our only solution to the pending demographic cliff.
We need a much better understanding of both what the risks and dangers are and what the opportunities are, and of what to do about each.
You have coined the phrase ‘flash growth’ to describe how technologies disrupt societies. What does it mean?
We are familiar with the concept of the ‘flash crash’. As we began adopting more technology systems in the financial markets, we began to see these primarily AI-driven trading systems behave erratically and produce sudden sharp changes in the prices of securities and commodities, risking the stability of the markets. That’s a flash crash.
What we are starting to see more recently, in the past two years, is something that I’ve termed ‘flash growth’, which is a very sharp and sudden adoption of something new. It is harvesting the dividend of decades of government investment in telecommunications infrastructure.
We have large-scale communications networks that have been put in place and increasingly lower-cost onramps to access those networks. The cost of minutes is going down as well as the cost of the devices. HTC [the Taiwanese company] now makes an Android smartphone that you can buy in Africa for less than £20 per handset. We are increasing the access, availability and inclusion of mass technology, right?
We have started to see applications and services built on top of these networks get very inexpensive and rapidly adopted. Add to that high-performance computing: think Google, IBM or AWS compute clusters. These massive machines are plugged into the same networks, so with a low-cost Android handset and a low-cost connection you can access TikTok, which reached a hundred million users within nine months, or, more notably, ChatGPT, which reached a hundred million users in six weeks.
Now you get society-scale adoption of these new technologies happening very rapidly. If you think about the pace of government policy and government policy interventions, it’s measured in years and sometimes decades. The pace at which these new technologies become widespread is measured in weeks or even days. And that is what I mean by flash growth.
And the question is, how does government respond?
What can governments do now to future-proof their oversight of tech like AI?
We need parallel streams because you don’t want regulation made in haste. We don’t want to destroy innovation in an effort to protect society; we need to do both. What that means is that the regulations we put in place still need to follow a consultative process involving multiple stakeholders, which is by its very nature slow.
Principles-based regulation is our recommended path forward as opposed to rules-based regulation. At the same time, supervisors and regulators need to be trained on these new technologies so they can adapt existing rules and existing regulations to deal with changes while we are waiting for the longer, more deliberate, contemplative and consultative process to catch up.
Some domiciles have been good about continuously training regulators and building access points for regulators to talk to innovators and to see innovations in sandbox environments before they are deployed into a market. Other domiciles, unfortunately, have not invested in educating their regulators, and they are now lagging badly in the global competitive landscape around this massively disruptive technology of AI.
AI could add 10 per cent growth to global GDP by 2032 – £10 trillion of growth. But who will harvest that growth? There will be winners and losers, and that has to do with differential investment, both in government capacity and in government support of private sector action.
So, who has got it right?
The United States and China have absolutely invested a tremendous amount of money and resources into AI and AI development. They are starting to realize the AI dividend. The UK, interestingly enough, has the highest AI productivity per capita of any country on the planet. So, with much smaller budgets and a much smaller population, Britain is number three in the world behind the US and China. Other notable domiciles include Israel, Switzerland and, to a degree, India, although they have some constraints.
You have significant experience in bringing together the public and private sectors to work together around technologies. What has been your biggest takeaway?
On society-scale innovation, Mariana Mazzucato at UCL has done some excellent work. The private sector may be important for innovation and for pushing forward on disruptive technologies, but actually it is government investment that has driven a lot of the innovation we are now enjoying. So, we should not be cutting government programmes in the hope that the private sector will step in, but rather looking at better public-private cooperation platforms.
Government investment in academic programmes remains critical: ideas are invented at great research universities, and corporate R&D budgets have been steadily declining for years.
Secondly, translational mechanisms or engines to take that university research and bring it out into commercial application are important. We need policy frameworks that are fit for purpose. And so, at every stage government has a role to play in supporting the institutions and supporting the translation.
We have seen Elon Musk and Mark Zuckerberg courted by world leaders. When it comes to digital tech and AI, the private sector plays an outsize role. Is this a problem?
It is problematic if it is only them. There is a very big concern about the concentration of power among technologists. AI is no exception to the idea that maybe only eight companies will control our future. The issue is not having those eight companies at the table – it is having only those eight companies at the table.