What examples are there of AI helping to solve some of the world’s problems?
A couple of examples that I find motivating are areas where AI is being used to provide healthcare in places where human support might be quite limited or not enough to meet demand. AI can also improve accuracy rates, helping more people in a safer way.
Another example is rAInbow, a product we’ve developed at AI for Good: an AI tool that helps detect early signs of domestic violence and abuse. It’s an issue that affects one in three women, whether the abuse is physical, sexual, psychological or financial.
It’s one example where AI can help when a human might not be the most suitable to talk to in the very early stages, especially for an issue like domestic violence, which is surrounded by shame, stigma, embarrassment and judgement. We launched it in South Africa in November last year, and since then we’ve had over 350,000 consultations from women who want to understand the early signs of abuse in a non-judgemental and unbiased way.
It’s not the answer and it’s not a replacement for human support services. In fact, we designed the product working with people from different areas including those providing face-to-face support to people in need. AI can play a role in helping people take that first step and then signposting them to the right services. That way people can be more preventive and proactive rather than reactive.
Algorithmic bias in decision-making, such as in mortgage, loan and job applications, has been known to discriminate against women and people from ethnic minority backgrounds. What can be done to counter algorithmic bias and ensure these decisions are made fairly?
There are three things to think about. First: where is the data coming from? A lot of the time the bias is there because the underlying data is biased. It’s about being proactive in looking at additional datasets and identifying biases early on.
Second is working with people from different communities. If it’s a homogeneous group of people building the technology, it’s quite difficult to have that unbiased mindset. Bringing subject matter experts, anthropologists, people who understand society, ethicists and lawyers into that process is important.
A third area is how to interpret the insights from the algorithm. A system might pump out results but then it’s up to us as humans to see how to use that information. It’s not just about building the algorithm; it’s about how to use the insights. Do you use them in a purely automated way?
At AI for Good, we’re doing an interesting project on reproductive and sexual health in India. We put in a lot of human intervention as it’s a very sensitive issue, where bias, including human provider bias, is a big problem. Bringing in judgement that machines might not always have is important, and you can bring humans in at the right moments in the design process.
How should education systems evolve to meet the societal changes brought by AI?
I can tell you from experience. We run a programme called FutureMakers, where we train young people between the ages of 13 and 17 in creating AI technologies. These are young people who will be going into the workforce and doing jobs that haven’t even been created yet. So, we need to prepare them for that.
We saw that there were three things stopping them from getting started in a tech career. The first was feeling they weren’t smart enough to work in the field. It’s very much an issue of who people see as role models.
Second, they didn’t know where to begin. Often in schools they don’t have the right framework or the right curriculum, which is always a challenge. Sometimes the kids are more digitally active and mature than the teachers, which creates a strange dynamic.
The last reason they gave us for not being interested was that they thought they would rather do something more creative. Working in this field is actually quite creative – it’s not just about sitting at your desk and writing code all day. There are different kinds of roles available.
So, we collaborated with Sage Foundation to teach young people of diverse backgrounds AI skills, giving them access to AI technology and helping them use their imaginations to build their solutions. Young people are great at creativity, problem-solving, empathy and emotional intelligence. We found that nine times out of ten, they built tools that had positive social purpose.
The fear went away because now they can touch and feel it; they know what AI is and what it can do, instead of it being an abstract notion they read about or see in Hollywood films. Some of the products they built were solutions for climate change and tools to help combat loneliness among older people who live alone.
One of the girls I worked with built a camera app for her grandmother, who was visually impaired – the AI camera spoke to her about what was going on around her so she could navigate the world better. I think that’s powerful, because when you give these tools to young people, they can create better uses for this technology than making people click on more ads.
My request for policymakers is to think about the experience of young people and their teachers, and to recognize that they deserve the best technologies and deserve to benefit from this revolution. It shouldn’t just be people like me, who have been through the traditional process of getting a computer science degree, who benefit from it. It should be everybody.
How can governments prepare for the rise in automation without leaving people behind and exacerbating inequality? How do they make sure that all of society benefits from AI advancements?
New jobs are going to be created, and a diverse group of people should be brought into the world of AI. Those new jobs should be an opportunity for people of different backgrounds to embrace.
One point that has been bothering me is the modern-day sweatshop version of jobs being created by the AI industry. Not every job in the AI world is that of a glamorous data scientist or machine learning engineer. A huge number of jobs are being outsourced, including data labelling, where much of the work is preparing datasets.
You see examples of humans being exposed to harmful content day in and day out, just tagging data. I do not think that’s a good, sustainable supply-chain practice. People need to be paid fairly for this work and they deserve to work in safe conditions. For example, the humans who are asked to moderate and tag content so that algorithms can get better at detecting hate speech are themselves exposed to hate speech constantly, and people need to think about that.
Another thing is the impact of automation on the workers’ side. There might be benefits of using technology to reduce their workload, but what does it mean for the people whose jobs are being automated?
I’ll give an example of something that happened in one of my projects, where we automated parts of human tasks. We thought this was the promise of technology: humans do the more interesting, intelligent and challenging things and the machines do the boring, mundane stuff.
We found that it led to the human workers having to do challenging work all the time. Previously, 80 per cent of their job was mundane tasks, which was now taken over by the machine, and only 20 per cent was difficult tasks. And now, 100 per cent of their job is difficult, intense work that requires thinking.
This means technology must be designed around people and their experiences. If they’re doing work that requires more intellect, should they be compensated more? Should they get more breaks during the day? Should they get different kinds of training? These are all questions I have when it comes to how the benefits reach people and employees.