Professor Noel Sharkey presents the case that robotic AI platforms, which are increasingly replacing human decision-makers, are often inherently racist and sexist.
Noel Sharkey, Professor of AI and Robotics and Professor of Public Engagement, University of Sheffield; Co-director, Foundation for Responsible Robotics
Artificial Intelligence (AI) is increasingly being used in workplaces across the world. AI is already being utilized in decision-making over bail requests, prison sentencing and predictive policing, with the next step likely to be automated targeting in armed conflict. Beyond the increases in efficiency and the reduction of labour costs that such innovations bring, a reason often proffered in support of using AI is that such systems are neutral and objective: untainted by the conscious and subconscious biases of human beings. Yet a number of recent news stories have exposed the limitations of this claim: soap dispensers that dispense only for white hands, a robot scanning Google images falsely identifying men in kitchens as women, and white male university applicants still being chosen above all other groups when applications are automatically scanned for suitability for admission. If AI is indeed objective, why are systems involving AI sometimes producing results that exacerbate structural biases and societal power differences?
This, he argues, is partly due to the nature of machine learning, which amplifies pre-existing societal biases, and partly a result of an overwhelmingly male technology and programming sector. If so, what does this mean for the future of technology and our relationship with robots, and how can programming and policy adapt? He will outline his case for why deliberative human reasoning should be obligatory for decisions that have a significant impact on people's lives.