A central concern surrounding AI is how it might affect the labour market. In recent years, technology that relies on automation has become more advanced, and its application is increasing across a range of different business settings. Does it pose a threat to business and what are the wider implications for society?
AI has wide-ranging implications, and not just in the places you might first expect. However, it’s not necessarily true that AI will destabilize whole areas of the workforce; the risk varies with the kind of work that different workers in the economy do.
The canonical example is truck driving, where the popular notion is that, because of advances in autonomous vehicles, truck driving will become an obsolete career in years to come. That may or may not be true. However, consider another large segment of the US labour market: care workers. It is not at all certain that their jobs will be taken by automation, because the requirements of a care worker are very different in scope and nature from those of a truck driver.
I think we need to look at these two examples of labour and ask questions about the sociology of the society in which we live. Again, looking at truck driving, upwards of 90 per cent of truck drivers in the US are men, so if you think about what fractures in that part of the labour market will look like, it’s obvious that they will fall on a very specific demographic.
Simultaneously, in the next 10 to 20 years, one of the most plentiful and available jobs in the US will be in elder care, and you don’t need to be a rocket scientist to understand that most of those truck drivers are probably not going to transition easily into elder care work of their own volition.
And so the salient question for policymakers – and that’s only one example, but it could be extrapolated to many different economic settings – becomes how we make sure that available work is matched to available skills, and how we anticipate these kinds of shifts in the labour market.
One analogy Elon Musk has used in relation to AI policymaking is the story of the seatbelt. In spite of a rising death toll from car accidents in the US throughout the 1950s and 1960s, it took the government 10 years to pass a federal law requiring car manufacturers to include seatbelts in their cars. Musk’s argument is that we don’t have 10 years with AI. Is it too late to begin effectively regulating AI?
I think the good news is that it’s not too late… yet. But it is going to take a lot of work to get us to the state of structural preparedness that we need in order to meet the pace of technological development and its impacts on society.
It’s critical that we all get really clear about involving the voices of traditionally underrepresented communities and individuals. I don’t know if that was what was missing from the seatbelt debate, but certainly in AI this technology is now being deployed in domains where decisions made by people or machines using data out of context are producing negative outcomes for populations at risk.
That point has a lot of urgency. It may already be too late in some contexts, but it certainly isn’t in others, and we need to think carefully as a global community about how those interests are represented.
How do governments ensure they are up to the task?
One part of the solution is certainly better equipping government and policymaking infrastructure to deal with the pace of change that is to come.
Speaking from my own experience of having worked as a policymaker in the US, it’s extremely evident that there’s a huge dearth of technical expertise in the policymaking community. It’s not just an American issue; it blights almost every nation. The real question is how you bring more multidisciplinary and technical expertise into decision-making architectures.
Our conversation so far has been framed around the perception and possible consequences of AI for the West. Are other parts of the world more positive about these developments or is it a similar story of concern?
I think it varies. [My organization, Partnership on AI] works with organizations in South Korea, Japan, China, India and countries in Europe and North America, and how societies feel about technology development really does depend on cultural context. For example, we were just talking about the labour market – in Japan there is a much more significant embrace of automation. Automation is seen as a positive intervention in many ways, because Japan has an ageing population that genuinely requires these automated tools for support.
Meaning there are gaps in the labour market that they’re struggling to fill with human labour?
Exactly. So how do you fill those gaps? Automation might be one piece of the puzzle for an economy like Japan. When you turn towards the US and Europe it might be a very different picture, so a lot of it does depend on the political, demographic and cultural context in which the conversation is situated.
One thing we talk about a lot at Partnership on AI is the importance of narrative and storytelling in promoting action among the most powerful players in the industry, many of whom are in the research sector and part of our community. It’s important to think about what introducing diverse narratives might do to promote different outcomes.
There’s a popular saying in the science fiction world about science fiction becoming science fact, because science fiction authors and screenwriters have huge power to promote ideas in the public narrative about the art of the possible. That, too, differs by cultural context, and I think it’s really important that we think about those questions as a global community.
You’ve made it very clear that Partnership on AI is deeply involved in the sociology of AI. Is there a more positive story to be told?
Our work was founded on the premise that tremendous good can be brought from this technology if we work hard enough to make that so. I am hopeful that we can get there.
Some specific examples that I think are within the grasp of reality right now include applications in healthcare, where unintended human error unfortunately results in a pretty high annual death rate in clinical settings in many countries. I think teaming machines with humans to help reduce that rate of error can be really powerful in producing better medical outcomes for people.
Likewise, in climate justice, there are some extraordinary applications being developed using the incredible amounts of data we have about the world – data we don’t yet know how to process effectively. In the context of climate change, that can become a really effective tool in helping us promote better outcomes for the environment.
As with any area of technology there are really, really good things and potentially negative, harmful things that could come from what is a very complicated constellation of different tools and techniques.
Deployment of these tools in certain circumstances may be appropriate, and in others it may not, and we need to think carefully about what that landscape looks like as we’re on that journey.
One connection I’ve made from our conversation is that like climate change, AI is presenting significant existential questions for humans and an awful lot hinges on us getting this right.
Yes! Another thing I like to say is that ‘technology is not destiny’ and we really do hold the power as people to shape outcomes. It’s the responsibility of policymakers, industry specialists and researchers to step up and work together on this.