The idea of using machines to recreate human work has a long lineage. In the West, it dates back to the ancient Greek god Hephaestus, said to have created mechanical servants and a man of bronze, Talos, to protect Crete from invaders. Aristotle noted: ‘When the loom spins itself and the lyre plays by itself, man’s slavery will be at an end.’
Yet the idea that a computer might achieve human-like ‘intelligence’ received its most important push from the mathematician and computer scientist Alan Turing in 1950. In his essay ‘Computing machinery and intelligence’, published in the journal Mind, he outlined the basis of a ‘learning machine’ that, like a child, was given some basic logic and then taught over time. ‘We may hope that machines will eventually compete with men in all purely intellectual fields,’ he wrote.
In subsequent decades, AI made progress but never matched the ambitions of its early pioneers. Over the past decade, however, something extraordinary happened: techniques that previously hadn’t worked very well started to show impressive advances because processors became more powerful and there were vastly more data to train the algorithms (hence the idea of ‘big data’).
Today, machine learning, and in particular a sophisticated variant called ‘deep learning’, are at the heart of many applications that may seem almost magical. Video-recognition technology can lip-read speech; character-recognition systems can decipher written script; photo software identifies people in pictures; voice-recognition systems respond to speech; and real-time computer translation is moving from the research laboratory to commercial products. And this is to say nothing of self-driving cars and applications for medical diagnosis. AI is behind the rise of the robots too. The huge advances in robotics in recent years are largely attributable to machine-learning algorithms.
It is successes like these that call to mind Turing’s hope that ‘machines will eventually compete with men’, or echo Aristotle’s vision that autonomous things may end ‘man’s slavery’. Yet advances in AI are also compelling economists and others to consider its impact in three main areas: productivity and output; employment; and international trade and development.
Impact on productivity and output
The changes arising from the development and application of AI may be on par with those resulting from electrification or computing
There is a good chance that AI will bring about a boom in productivity. The changes arising from the development and application of AI may be on par with those resulting from electrification or computing. One of the pioneers of the internet, Scott Bradner of Harvard University, once described the internet revolution as ‘the platform for all subsequent revolutions’. The same can be said of AI. It is a general-purpose technology, and as seen with computers, the innovations that take place in AI will spark revolutions in every domain that it touches.
Several investment banks and consultancies have attempted to forecast the economic value of applying AI to existing activities by industry and geography. Bank of America (BoA) Merrill Lynch in 2015 said it expected that in 10 years, robotics and AI would produce annually an ‘impact value’ of between $14 trillion and $33 trillion, potentially including, at the upper end of this range, $8–$9 trillion in cost reductions across manufacturing and healthcare, $9 trillion in employment cost savings, and some $1.9 trillion in efficiency gains by autonomous cars and drones.
Using a different methodology, the McKinsey Global Institute in 2013 arrived at a range of $10 trillion to $25 trillion per year by 2025 for robotics, AI and data-intensive activities such as the ‘internet of things’ and industrial sensors. McKinsey further estimates, in a study published in 2017, that automation could raise productivity growth worldwide by 0.8–1.4 per cent per year. A 2016 report by the Analysis Group, funded by Facebook, estimated the ‘reasonable range’ of economic impact of AI over the next 10 years at between $1.49 trillion and $2.95 trillion. (Note that the period covered by the Analysis Group is 2016–26, whereas the forecasts by BoA and McKinsey only begin in 2025.)
A report by Accenture in 2016, based on economic modelling by Frontier Economics, estimated that the widespread use of AI in society and business had the potential to double the annual economic growth rate by 2035 in the dozen developed economies studied (the US, 10 Western European countries and Japan). The report forecasts, for example, that in the case of the US, absorption of AI in the economy would increase the rate of growth in gross value added (GVA) – a close approximation to gross domestic product (GDP) – from a baseline 2.6 per cent to 4.6 per cent in 2035, equivalent to an additional $8.3 trillion GVA per year.
In 2017, moreover, PwC forecast that GDP worldwide could be as much as 14 per cent higher in 2030 because of AI technologies, which it valued as potentially contributing some $15.7 trillion to the global economy. The majority of the gains would, in its assessment, come from retail, financial services and healthcare in terms of greater productivity, enhanced products and higher demand.
The competition among these institutions to produce eye-watering estimates of AI’s potential value appears to match the rivalry in the AI industry itself. Their figures vary widely, but what is clear is that the sums involved are vast, and that a huge amount of value is predicted to be unleashed.
How might this happen in practice? Consider just one area of AI: self-driving cars. If all taxis, ride-shares and long-haul delivery vehicles are autonomous, then a huge amount of labour costs will be saved (though some of that saving will be spent on the AI technology that makes the transformation possible). The vehicles will, moreover, use less fuel, emit fewer pollutants and have fewer accidents, and human former drivers will be freed to do other economically and socially useful things. Accordingly, McKinsey estimates that the potential economic impact of autonomous cars and trucks alone could be in the wide range of $200 billion to $1.9 trillion per year by 2025. Society will gain the value of transport services at less cost – and perhaps therefore increase its use of transport because it is now cheaper. Meanwhile, the savings can be allocated to more productive uses, such as investment in new ideas.
Next, consider a specific job: a human welder in a car factory. Today in the US, a worker in this job earns around $25 per hour, including benefits. The equivalent operating cost per hour for a robot is around $8, including installation, maintenance and depreciation. By 2030 the cost could fall to $2 per hour, estimates the Boston Consulting Group in a study published in 2015. Even that sounds conservative: it means the price would only halve twice in 15 years. From the experience of price–performance drops due to computing technology, a fall of an order of magnitude seems more likely, i.e. to 80 cents per hour. In any case, the cost of labour to produce vehicles declines.
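The implied pace of decline is easy to check. A back-of-the-envelope sketch in Python (the dollar figures are those quoted above; the assumption of a constant annual rate of decline is ours):

```python
# Back-of-the-envelope check of the robot-cost trajectories quoted in the text.
# $8/hour in 2015, falling to $2 (BCG) or $0.80 (order-of-magnitude) by 2030;
# a constant annual rate of decline is a simplifying assumption.
start_cost = 8.0   # robot operating cost per hour, 2015
years = 15         # 2015 to 2030

for label, end_cost in [("BCG forecast ($2)", 2.0),
                        ("order-of-magnitude fall ($0.80)", 0.80)]:
    annual_decline = 1 - (end_cost / start_cost) ** (1 / years)
    print(f"{label}: {annual_decline:.1%} decline per year")
```

The BCG path works out to a fall of roughly 9 per cent a year; the order-of-magnitude path to roughly 14 per cent a year – modest by the standards of computing hardware.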
The more interesting benefit – and the sine qua non of AI – is in doing things that no group of humans could reasonably hope to achieve. Business is already moving towards that vision
These two examples are just the beginning. They assume that AI and robotics do the same thing as people, only more cheaply. The more interesting benefit – and the sine qua non of AI – is in doing things that no group of humans could reasonably hope to achieve. Business is already moving towards that vision.
An example is Otto, a large German online retailer (unrelated to the American self-driving trucking company of the same name). Its data showed that customers were more likely to return items if they arrived three days or more after ordering, meaning that the company needed to shorten the shipping time to improve customer service and protect its margins. To do this, Otto deployed a predictive model using a deep learning algorithm by a company called Blue Yonder that was originally developed for analysing experiments at CERN, the European Organization for Nuclear Research. The algorithm analysed some 200 variables (such as season, weather, colour, style, size, etc.) to predict what shoppers would buy, a full month before the purchases actually happened, enabling Otto to have stocks in place so it could dispatch goods immediately after orders were placed. The system is fully automated. In 2017 Otto’s automated procurement system was purchasing 200,000 products a month completely on its own. The project was a triple win: it got most merchandise to customers within two days, it reduced returns by 2 million items a year (thereby saving Otto millions of dollars) and it also helped the environment, since fewer packages were sent back. It would be a formidable, perhaps impossible, endeavour to assemble a team of humans to crunch through 200 discrete variables to buy millions of items in anticipation of customer orders. But an AI algorithm can do this.
It is difficult to calculate the increase in productivity resulting from AI. Productivity tracks output per worker. Total factor productivity (TFP) attempts to measure the improvement from technology by considering output in terms of a weighted average of capital and labour. The ‘residual’ is innovation – or, as economists quip, ‘a measure of our ignorance’. These metrics fail to capture the value that AI creates for two reasons. First, the output component is based on GDP, and GDP only measures monetary transactions. If the service is free, it is invisible as output. Second, and worse from a measurement perspective, if the free, AI-infused service replaces a paid-for service, GDP actually goes down – and with it measured productivity.
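The growth-accounting arithmetic behind that residual can be made explicit. Assuming the textbook Cobb–Douglas production function (a standard formulation, not one given in this chapter), output is

$$Y = A\,K^{\alpha}L^{1-\alpha},$$

where $Y$ is output, $K$ capital, $L$ labour and $\alpha$ capital’s share of income. In growth rates this becomes

$$\frac{\Delta A}{A} = \frac{\Delta Y}{Y} - \alpha\,\frac{\Delta K}{K} - (1-\alpha)\,\frac{\Delta L}{L}.$$

Everything on the right-hand side is measured; $A$ – total factor productivity – is simply what is left over, which is why it doubles as ‘a measure of our ignorance’.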
This is a very real problem for AI. To take one example, Google upgraded its web-translation service in the autumn of 2016 from using classic statistical techniques to relying on deep learning. This made the translations considerably more accurate. If a user chose to rely on Google’s free translations rather than pay someone to translate a document, the economic value of AI would be clear – but it would have a negative effect on GDP, and thus on productivity. The problem is compounded with each upgrade, because the performance improvement is invisible to economic metrics.
The biggest advantage of AI is in performing tasks that are entirely new, or at least far beyond reasonable human capacity. Yet these are the very tasks prone to mismeasurement
Moreover, the biggest advantage of AI is in performing tasks that are entirely new, or at least far beyond reasonable human capacity. Yet these are the very tasks prone to mismeasurement. To paraphrase the economist Robert Solow, the value of AI is ‘everywhere but in the productivity statistics’.
In the case of AI, some economists are questioning whether this time really is different. Can we expect a breakaway in productivity arising from AI? Or will it produce only a modest blip? In a paper issued by the Brookings Institution in 2016, David Byrne of the US Federal Reserve Board and colleagues noted that, notwithstanding growing mismeasurement, productivity has lagged since the mid-2000s even as the deployment of IT soared. The implication is to not expect too much from the latest round of digital innovation.
There is a view, put forward notably by Robert Gordon of Northwestern University, that AI and robotics will bring about only minor advances in TFP compared with the extraordinary gains of the early 20th century. This view, though popular, needs to be tempered. Just because earlier productivity gains from technologies such as railways and the telephone are more evident to economists does not refute the assertion that something unique is happening with AI. The assessment may also be the victim of bad timing. Professor Gordon began promoting his research findings in 2012, the same year that the foundational papers in deep learning were published, suggesting that his conclusions may not have taken account of the latest breakthroughs in AI technology.
Meanwhile, from a highly mathematical econometrics perspective, William Nordhaus of Yale University does not believe that AI will produce an ‘economic singularity’ – a point at which technology produces exponential growth as far as the eye can see.
One way to reconcile the apparently poor productivity figures of recent years is to look at firm-level performance. There has been a marked phenomenon of ‘the best versus the rest’ – i.e. a tremendous boom in productivity by the very best firms (those in the 90th and 99th percentile of performance) and productivity stagnation by the rest. The outsized performance gains – attributed in part to technology and automation – are camouflaged by the broader, middling economy. For instance, according to the Organisation for Economic Co-operation and Development (OECD), between 2001 and 2007 labour productivity at the ‘global frontier’ (referring to the top 5 per cent of firms in terms of multi-factor productivity) grew by 4–5 per cent per year, compared with annual growth averaging just 1 per cent by the rest. The firms at the forefront of their industries are more likely to be pioneering users of AI.
Andrew Haldane, the chief economist at the Bank of England, has identified the same trend in an analysis of 300,000 British firms. The gross value-added per person was seven times greater among firms in the 99th percentile than among those at the median. The ‘winner-take-all’ profile is brutal: those at the 99th percentile are three times more productive than those at the 90th percentile – already among the very best-run companies.
In light of this, some economists embrace the idea that rather than fear a world where we have AI, we should be alarmed by a world where we have too little of it.
To measure the performance of AI, a panel convened by the US National Academy of Sciences released a report in April 2017 that recommended, among other things, the establishment of an ‘AI progress index’. Comparable to the consumer price index (CPI) as a means of tracking inflation, the index would track progress on specific AI and machine learning technologies. This is a fascinating idea, even if it is prone to statistical wrinkles, like CPI itself.
What is clear is that human welfare and living standards are likely to improve as people can enjoy more goods and services – although the gains may be very unequally distributed
Ultimately, however, no one can say for sure what the value of AI will be for the economy. What is clear is that human welfare and living standards are likely to improve as people can enjoy more goods and services – although the gains may be very unequally distributed. By improving efficiency and lowering costs, AI will likely mean more output and thus perhaps lower prices for consumers. This may free up capital to be deployed more efficiently elsewhere. However, to return to the examples above, what will this mean for the car driver or welder?
Impact on jobs and wages
A widely held view is that AI will lead to a ‘jobs apocalypse’ whereby mass unemployment becomes the norm. Numerous reports and books have sounded the alarm, predicting massive job losses. The historian Yuval Noah Harari goes even further in his book Homo Deus, arguing that AI will render a vast swathe of humanity ‘useless’.
The potential solutions put forward range from taxing robots to decoupling incomes from jobs. There have also been calls for more substantial interventions, such as the state providing a ‘universal basic income’ (UBI) to redistribute the gains from the owners of the means of production who deploy AI to the vast army of the unemployed who are AI’s victims. (The Marxist language here is deliberate, in so far as there are parallels to draw between the nascent AI economy and the industrial revolution of Karl Marx’s day.)
Economic history shows that automation creates new jobs around the new processes, and that these new jobs still require people
However, some economists see the view that there is a fixed amount of work to go around, and that if robots take it, humans will be left with nil, as a ‘lump of labour fallacy’. Economic history shows that automation creates new jobs around the new processes, and that these new jobs still require people. ‘Technology eliminates jobs, not work,’ noted a US government report on automation published back in 1966.
It is a fractious and essential debate. On one side are tech-minded people close to the advances; on the other are economists who are more removed but who bring history and a longer view. This chapter leans towards the optimistic outlook, but does not simply assert it. Instead, it will first present the argument that AI may obliterate jobs, and then put forward reasons why it may not. It will close on an area of consensus – that wages may suffer.
The idea of a jobs apocalypse was sparked by a study in 2013 by Carl Benedikt Frey and Michael Osborne of Oxford University. They concluded that as many as 47 per cent of jobs in the US are susceptible to automation over the next several decades. The corresponding share for the UK is 35 per cent. The paper notably does not say that jobs will disappear or give a specific timeframe; rather that they are susceptible over some decades.
The Bank of England, in an informal exercise using the same methodology, put potential job losses even higher, at around half the British workforce. A study by PwC estimated that some 30 per cent of British jobs are vulnerable to automation from AI and robotics by the early 2030s; the comparable estimates for the US and Germany are 38 per cent and 35 per cent, while that for Japan is somewhat lower, at 21 per cent.
AI is being introduced to many different industries at the same time, unlike during other periods of jobs upheaval such as the early industrial revolution
One fear is that job destruction will happen faster than new job creation. AI is being introduced to many different industries at the same time, unlike during other periods of jobs upheaval such as the early industrial revolution. Then, for example, if a weaver was put out of work through the introduction of ‘power looms’, there were other industries to turn to. With AI, many sectors may all shed jobs at once. Other than retraining as an AI engineer – an option open only to a few – there will be no place to turn.
The emblematic case is that of taxi drivers and truckers once self-driving vehicles hit the road. The debate continues as to when that may happen; early estimates by some in the industry, at around 2020, are unrealistic. It may take considerably longer for full ‘level-five’ autonomy to be viable, where the machine is in complete control. Nevertheless, the day will likely come. What will happen to employment, considering that there are 3.5 million truck drivers, 230,000 taxi drivers and 500,000 school bus drivers in the US alone? For this reason, many people believe that AI will create a massive upheaval in the jobs market.
The threat to jobs, though real, might not be so dire. Despite the rapid research advances, it will take time for new applications to be adopted, meaning that the economy will have time to adapt. And alongside AI technologies, ‘human’ skills of personal interaction, teamwork and emotional intelligence will continue to be needed in the labour force. Moreover, the crux of AI technology is not that it recreates what people can do, but that it may exceed that capacity millions-fold, performing tasks that no group of humans could possibly achieve. There is thus a case to be made that rather than destroy jobs, AI may in fact liberate people, for whom the new work may be more fulfilling. Economic history offers mixed lessons in this regard, but mostly falls on the side of optimism.
Forecasts of mass unemployment arising from AI are open to criticism. For example, the Frey and Osborne study noted above overstates the problem by dint of its methodology, including its focus on jobs not tasks. Among the occupations that it concluded were most liable to automation were models, baristas, cooks and manicurists. These are the very jobs that people associate with a human touch – which might actually command a premium in a world teeming with bots and algorithms.
Jobs are composed of tasks, which themselves vary in the degree to which they can be automated. For example, much of a lawyer’s job can be done by software, such as finding the right precedent and constructing arguments. But among the valuable activities that lawyers fulfil is easing the anxiety of clients in life-altering situations such as divorce, criminal charges or inheritance disputes. The work is about empathy, not just answers. Strikingly, in the US the number of law clerks and paralegals has grown since 2000 – and at a faster rate than the overall workforce – even as legal software has been introduced.
A task-based analysis is subtler. According to a 2016 McKinsey study of more than 2,000 work activities across 800 jobs, around half of the activities in the global economy could be automated with current technology. About 60 per cent of occupations have at least a third of tasks that could be automated, but fewer than 5 per cent of jobs can be entirely automated, the report noted. Moreover, an OECD tasks-based study, also published in 2016, found that, on average, just 9 per cent of jobs in the organization’s 21 (mostly advanced) economies are automatable.
While still troubling, the OECD’s 9 per cent and McKinsey’s 5 per cent assessments regarding elimination of jobs through automation look a lot less dramatic than Oxford’s 47 per cent and PwC’s 35 per cent forecasts.
The implication is that jobs won’t disappear, but they will be different. Recall T. S. Eliot’s ‘Preludes’:
And at the corner of the street
A lonely cab-horse steams and stamps.
And then the lighting of the lamps.
London no longer has lamp-lighters; those so inclined found other ways to make a living. As for the cab-horsemen, they became taxi drivers (and, more recently, ‘partner-drivers’ for services such as Uber). Indeed, in a study of the US Census Bureau’s occupations between 1950 and 2010, only one job – elevator operator – was removed because of automation. Other jobs continue to be performed, even if the nature of that work has changed because tasks were partially automated.
Counterintuitively, automation can sometimes actually increase employment in the industries it touches. This is because the technology lowers the price of a good or service, so people consume more.
In the 19th century, around 98 per cent of the labour needed to weave a yard of cloth became automated. But the number of workers employed in weaving increased during that period. The reason for this was that the price of cloth fell dramatically because of automation, so people bought much more cloth. Individuals who might only have owned a single set of clothing now could have several. Curtains appeared at windows. Furniture was more commonly upholstered. The result was more jobs alongside more automation.
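The mechanism can be sketched with a constant-elasticity demand curve. All the numbers below except the 98 per cent automation figure are invented for illustration:

```python
# Sketch of the weaving story: automation removes ~98% of the labour per unit,
# but the resulting price fall expands demand enough to raise total employment.
# The price fall and elasticity are hypothetical; demand is modelled as
# constant-elasticity (Q proportional to P^-elasticity).
labour_per_unit_after = 0.02   # ~98% of labour per yard automated (from the text)
price_ratio = 0.05             # price falls to 5% of its old level (hypothetical)
elasticity = 1.5               # price elasticity of demand (hypothetical, > 1)

quantity_ratio = price_ratio ** -elasticity            # how much output expands
labour_ratio = quantity_ratio * labour_per_unit_after  # total labour demanded

print(f"Output rises {quantity_ratio:.0f}-fold; "
      f"total weaving labour changes by a factor of {labour_ratio:.2f}")
```

With these hypothetical numbers, output rises roughly 89-fold and total weaving employment rises by about 80 per cent, even though each yard needs only a fiftieth of the labour – the pattern the 19th-century data show.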
Other industries had similar experiences, notably the growth in the number of bank tellers in the US as cash machines became more prevalent there. And as already noted, legal assistant jobs have grown even as law firms have introduced software to automate some tasks: it reduces costs and makes employees more efficient. Over the past 15 years, automation has created around four times as many jobs in the UK as it has destroyed, according to a report published by Deloitte in 2015.
Another reason why work will not disappear entirely is that there are many jobs in which human engagement is indispensable: education, healthcare, sales and other areas in which empathy and social skills are critical. People adapt their skills to where they have a comparative advantage over software code and computer chips.
For example, if AI systems can detect a propensity for certain diseases, a large share of the population will want to take steps to lower their chances of suffering from the ticking time-bomb inside their bodies. This will mean changes in lifestyle, diet, exercise, stress levels and so on. People may be eager to have ‘health coaches’ support them, just as they have turned to therapists, nutritionists, executive coaches, birth coaches and personal trainers over the past several decades – jobs that largely did not exist a century ago. Already, for example, the profession of ‘death doulas’ has emerged.
Although robots could perform those tasks, it is doubtful that they would create the meaningful bond that most people want. Unsurprisingly, employment growth in the face of automation has been happening in jobs requiring substantial social skills, such as nursing, teaching and care work, according to a study by Deloitte. One AI entrepreneur, Chris Boos of Arago in Germany, believes the day will soon arrive when nurses are paid more than doctors – i.e. emotional nous will trump analytical skills. This is a deliberate provocation, but it makes the point.
Humans still outperform machines when judgment, not just mechanized work, is called for. Today’s AI systems are burdened by needing some explicit instruction, even if newer techniques enable the algorithm to infer answers more subtly than ever. Where clear instructions are needed, the technology comes up against what economists call ‘Polanyi’s Paradox’. This is the role of tacit knowledge, summed up by the phrase ‘We know more than we can tell.’ Because many tasks cannot be clearly delineated and communicated, we will still need humans to perform them.
In sum, the fear of substantial unemployment because of AI is legitimate, but it is not clear how suddenly it may happen, or how deep it may go. A key question is timing. Will new jobs be created fast enough to replace the ones that go away? And even if jobs do still exist, the types of jobs, their quality, and the skills necessary to perform them are, critically, not fully understood. This is not reassuring for people who have trouble adapting to new job requirements or acquiring new skills. The mismatch between abilities and jobs may be considerable.
Distinct from the jobs question is the issue of wages. When the returns on capital are greater than the returns on labour, firms invest money in machines to perform tasks rather than hire staff. The only way people remain competitive therefore is to work for less. As Jason Furman, chairman of the US Council of Economic Advisers under President Obama, put it in 2016: ‘The traditional argument that we do not need to worry about the robots taking our jobs still leaves us with the worry that the only reason we will still have our jobs is because we are willing to do them for lower wages.’
Over time, wages tend to readjust upwards as the inventions that shake the labour market make their way through the economy, and public policy works to redistribute the gains. But this process can be extremely protracted. British workers, for instance, did not see substantial real wage gains during the early industrial revolution until the 1840s, some 60 years after the labour upheaval began.
A scenario for how this might play out in the AI economy can be seen in one economic trend since 2000: offshoring. In this case, jobs have not been replaced by robots but by lower-cost workers in developing countries. This resulted in lay-offs, and put downward pressure on wages in advanced economies (and arguably planted the seeds for the populist surge in recent years).
There is some initial empirical evidence of the impact of automation on jobs. A study of industrial robots in local US economies between 1990 and 2007, published in 2017, found that for every robot per thousand employees, 6.2 workers lost their jobs, and wages dipped by an average of 0.7 per cent in those communities. That said, the impact was less severe on a national level since other jobs were created elsewhere.
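The study’s coefficients can be applied mechanically to get a feel for the scale. A minimal sketch for a hypothetical community (the workforce and robot counts are invented; only the per-robot coefficients come from the study as quoted above):

```python
# Coefficients quoted in the text, from the 2017 study of US local economies:
# each robot displaces about 6.2 local workers, and each robot per thousand
# workers lowers local wages by about 0.7 per cent.
JOBS_LOST_PER_ROBOT = 6.2
WAGE_DIP_PER_ROBOT_PER_1000 = 0.007

workforce = 100_000   # hypothetical local labour force
robots = 300          # hypothetical stock of industrial robots

robots_per_thousand = robots / (workforce / 1_000)           # 3.0
jobs_lost = JOBS_LOST_PER_ROBOT * robots                     # 1,860 jobs
wage_dip = WAGE_DIP_PER_ROBOT_PER_1000 * robots_per_thousand

print(f"{robots_per_thousand:.1f} robots per 1,000 workers: "
      f"about {jobs_lost:.0f} jobs lost, wages down roughly {wage_dip:.1%}")
```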
The experiences of offshoring and industrial robots provide a glimpse of things to come, since they show how lower-cost alternatives to labour affect the job market. If employment goes to a machine, it is hard for a human to compete. If jobs go overseas, workers in one country lose out but those in the other gain. However, even this model of international labour markets is likely to change because of AI.
Impact on international trade and development
AI will infiltrate not just advanced economies. A similar wave is poised to hit developing countries, where the effects may be even harsher. This is because AI throws into question the traditional development model, based on low-cost labour, by which poor countries typically achieve economic growth.
The 20th century ‘catch-up’ model of development relies on international trade. Countries start by exporting low-value goods to richer countries, which no longer bother to make cheap products at home. As the lower-income country becomes more skilled, it moves up the value chain to produce higher-value goods. So a country may go from basic inputs like chemicals and steel to more skilled, manufactured products like cars and electronics, a process that normally takes decades.
Poor countries meanwhile use policy levers such as inviting in foreign capital, with conditions. For example, foreign firms looking to invest in a developing economy are required to set up joint ventures with local firms, adhere to quotas for hiring locals in skilled tasks, agree to technology transfer, etc. The local country can thus improve the skills of its workforce, get access to technology and graduate to higher-value-added work – and not simply remain the low-cost supplier of labour. It is imperative to maintain this upward direction, since there are always poorer countries vying to take on the lower-value work.
This process describes the path taken by Japan in the early 20th century, and subsequently emulated by the Asian high-growth economies of Hong Kong, Taiwan, Singapore and South Korea in the latter half of the century. It is known as the ‘flying geese’ model of development, because when the process is plotted on a graph, the series of swooping and drooping lines (which represent increasing and decreasing production over time, as the country enters and exits certain industries) resemble birds flying away.
Whereas rich countries have workers with more skills who may be able to adapt to the changes, workers in poor countries have yet to develop this resilience
But AI may mean that particular goose is cooked. Machines will replace labour for many jobs in developing countries, just as they will in advanced economies. This means the same pressure on employment and wages will hit developing countries, stalling the development model. Yet whereas rich countries have workers with more skills who may be able to adapt to the changes, workers in poor countries have yet to develop this resilience.
The development model has, moreover, relied on exports from countries with low-cost labour to richer ones. However, if the cost of manufacturing declines through automation (such as the robotic welder in car-making, to return to the example earlier in this chapter), a process of ‘in-shoring’ or ‘re-shoring’ may happen, whereby work previously sent abroad comes back. More domestic manufacturing means fewer jobs in export industries in poor countries, and less global trade.
As more work stays local, the use of migrant workers may decline in everything from construction (as in Dubai, for instance) to fruit-picking (as in California). If a host country’s reliance on migrant labour is reduced through automation, one effect may be that remittances from overseas workers to their home countries fall, deepening those poorer countries’ economic difficulties. An alternative scenario may be that if emerging markets are hit hard by AI, this may spur more migration to advanced countries. This would put even more downward pressure on unskilled wages in rich countries – and foster anti-foreigner sentiment and political tensions.
This suggests that AI may exacerbate inequalities among countries. Whoever owns the robots owns the means of production – to adapt Karl Marx to the era of AI. Yet just as rich countries may grow richer even as workers suffer lower wages, poorer countries will find it harder to find a way on to the development ladder. AI could ‘erode the comparative advantage of much of the developing world,’ warns David Autor, an economist at MIT.
A less gloomy outlook is also possible. Tyler Cowen, an economist at George Mason University, takes it as a given that wealthy countries will automate manufacturing and produce more finished goods at home, rather than buy them in from poor, export-oriented countries. But he argues that ‘trickle-down’ growth is possible. Cowen makes the case that poor countries can adopt the automation technology themselves from the West and forgo manufactured exports in favour of producing for domestic demand. Newer and better jobs may then crop up around these products. So, instead of being stuck making flat-panel screens for rich countries, a developing economy might use robots to lower costs and make TVs for local consumers – who can then get jobs making local content to fill the channels.
There are advantages and drawbacks with this model. As AI technology dramatically lowers the cost of goods and services, the benefit is that items become much more affordable and people’s lives are improved. The fast spread of inexpensive mobile phones in Africa is a notable example of this.
The drawback is that without manufacturing, the growth model will likely produce wide economic inequalities. This is because manufacturing-based growth requires investments in public education and infrastructure, and those improvements touch a wide segment of the population, including other businesses not directly related to the industry for which the improvements were designed. If AI means that these broad-based investments are not made, society overall will suffer even as the elites involved in AI-related businesses flourish. The backlash against the geeks looks pronounced in San Francisco; just wait until it reaches cramped cities in developing economies.
A nascent ‘geopolitics of AI’ is becoming apparent. Kai-Fu Lee, the chief executive of Sinovation Ventures, a major Chinese venture capital firm, argues that the AI lead of China and the US is now so dominant that every other nation will end up being an ‘economic dependent’ of one or other of them to access the technology – in part because countries with large populations will be saddled with mass joblessness.
Yet that view seems overly cynical. There is a good chance that AI will not substantially change the economic balance of power among nations. The asymmetries that currently exist will remain if countries adopt AI at roughly the same pace, which the case of mobile phone technology suggests is broadly possible. China may reclaim its position as a low-cost producer not through cheap workers but through its forte in bots. Africa and Latin America will continue to lag behind, not because of a shortfall in human capital or good governance but because of a paucity of data and algorithms compared with their abundance in the West.
In so far as there are divides in the global economy – in access to education, in access to the internet, in access to medicine – these will probably persist with advances in AI. However, despite an ‘AI gap’ with the West, developing countries will still be better off: AI is poised to improve standards of living, as technologies tend to do.
Preparing for the future
The economics of AI swing in opposite directions: the promise of efficiency and the peril of unemployment. Yet the debate has been dominated by anxiety over jobs. During the 2016 US presidential election and in the first year of his administration, Donald Trump stoked the worries of middle-class Americans that immigrants and free trade were stealing work. Automation has not thus far figured on his list of culprits, even though some economists would place it high up on theirs.
If the politics of economic disenfranchisement helped a populist gain power before the AI party has really even begun, just imagine the difficulties the world will experience when the economic impact of AI hits for real. It can be expected that AI will be lambasted and used as a pretext to introduce protectionist policies if it is blamed for unemployment – even if there is a surge in output. Global trade may suffer. Domestically, AI will be branded as spurring ‘jobless growth’ that only benefits a handful of computer engineers, to the detriment of the struggling baristas, dog-walkers and yoga instructors who minister to them.
What can countries do to prepare? Three general and complementary ideas have been advanced: education, welfare and redistribution.
As regards education, there are calls for policies to encourage more targeted education, particularly in science, technology, engineering and maths. This is one of the main recommendations of a White House report on AI issued in the final weeks of the Obama administration. It is not quite clear how this will truly stave off a jobs apocalypse. The traditional subjects are also the areas in which AI is making rapid advances. So there is a case to be made for rethinking education around social and caring skills.
There is also a substantial lag time: today’s youngest students may not enter the labour force for perhaps two decades. Nevertheless, the idea is that more education cannot hurt – especially because research shows that learning in one area better equips people to acquire skills in others. This may help to overcome the skills mismatch and the need for lifelong learning, which seem to be bigger problems with AI than with other technologies.
In terms of welfare reforms, many have urged Nordic-style policies that provide benefits to individuals regardless of work status, in return for labour market liberalization that gives business freedom to manoeuvre. The model is termed ‘flexicurity’ (flexible security), and is broadly popular with individuals because it removes the sting of changing jobs or leaving the workforce temporarily. Business also supports it, since it means a more fluid labour market than if benefits are tied to the employer. If AI means constant churn in the job market, flexicurity may strike a good social balance.
On redistribution policies to reduce inequality, the most ambitious calls are for a universal basic income. The idea is to redistribute economic gains (in this case from AI) more evenly throughout society. Although the idea predates AI, it has acquired more momentum with the rise of wealthy digital businesses and in light of the potential productivity boom that AI might unleash. Small trials have been taking place in Finland and Canada since 2017. However, critics argue that a universal basic income, in principle, removes the incentive to work and, in practice, is unaffordable for state coffers.
A more inventive policy to reduce inequality has been advanced by Jerry Kaplan, an AI expert at Stanford University and a Silicon Valley entrepreneur. He argues that government should offer generous tax breaks to companies based on the breadth of their share ownership, with the aim of encouraging the widest participation. By this model, more people benefit from the fruits of AI than just the geeks who build it. Furthermore, since most people lack the capital to invest, he contends that governments should let people choose where they invest their national pension contributions while they are still young, so they can share in AI’s spoils.
The advent of AI presents numerous other public policy and ethical challenges. The institution-building has begun. Well-funded foundations and academic centres have opened for business at Cambridge, Oxford, Harvard, MIT and elsewhere to bring together experts from different disciplines. An array of independent non-profit organizations are emerging as well.
Ultimately, AI will reshape the economy and society like so many other general-purpose technologies before it, from printing to electricity to computing. A parallel world is being created in which everything under the sun constantly generates and collects data; all things will continually improve their performance based on this information. Robots and algorithms will exceed human cognitive and physical performance, just as the pulley, the lever and the wheel enabled people to go beyond their muscular limits many millennia ago. Aristotle’s lyre will finally play itself. We may find ourselves better off for it in some respects, but we are certainly not prepared for it.