The Rise of Artificial Intelligence: Five Things You Should Know

Artificial Intelligence is changing the world as we know it.

Explainer · Updated 26 September 2024 · 6 minute read

Find out five key insights from the World Summit AI that brought together tech companies, academics and start-ups in Amsterdam.

1. Some of the hype around AI is overblown

Elon Musk, CEO of SpaceX, warns that AI is a ‘fundamental risk to the existence of human civilization’, but headlines about the rise of the machines and robot takeovers are not yet based in reality. Cognitive scientist Gary Marcus explains that ‘deep learning’, a method in which AI loosely imitates the human brain to process data, is a bit of a misnomer:

‘Deep learning is a marketing term — it’s not really deep and it’s no substitute for deep understanding. It can only label a scene, not interpret it… Children are smarter than any deep learning or AI.’

The notion that science fiction has been misleading was a recurring sentiment among experts at the summit. As Cassie Kozyrkov, chief decision scientist at Google, put it, ‘Robots are another kind of pet rock — they don’t think at all.’

‘A lot of the AI capability is not there yet,’ says Emily Taylor, associate fellow with Chatham House’s International Security Programme.

The days where you will have a humanoid robot who is indistinguishable from a normal person are still a long way away. It’s quite hard and possibly pointless to try and recreate a human.

Emily Taylor, Associate Fellow, International Security Programme

Should you find yourself in a robot attack, Gary Marcus has some advice to throw them off — simply close the door, climb stairs or speak in a loud room with a foreign accent so they can’t understand you.

‘Sophia the Robot’ is seen on stage at the RISE Technology Conference in Hong Kong on 10 July 2018. Photo: Getty Images.


2. Gender equality in AI has a long way to go

Only 22 per cent of AI professionals globally are women. Given that AI contributed $2 trillion to the global economy in 2018, and could add as much as $15 trillion to global GDP by 2030, making sure women are not left behind by this fast-changing economy is crucial.

Ecem Yilmazhaliloglu, diversity advocate and founder of Technoladies, explained a few reasons why women can be disadvantaged in pursuing an AI career: there are fewer female role models in the field, a lack of opportunities and girls often aren’t given an early introduction to tech.

So is the solution for diversity simply creating more opportunities for women in AI? It’s more complicated than that, argues privacy and data protection expert Ivana Bartoletti:

AI is more than just technology and diversity is also needed where decisions about how we use AI are made.

Ivana Bartoletti, Head of Privacy and Data Ethics, Gemserv

The Watson robot is displayed at the IBM stand at a digital technology trade fair in Hanover, Germany. Photo: Getty Images.


3. Women are leading in the ethics of AI

A recent example from Austria highlights the problem of algorithmic bias: an employment agency used an algorithm that discriminated against women. According to the NGO AlgorithmWatch, a female candidate was more likely to receive a lower score than a male candidate, even when she had the same qualifications and experience.

Technologist Kriti Sharma stressed the importance of diverse teams in a recent interview with Chatham House:

If it’s a homogenous group of people building the technology, it’s quite difficult to have that unbiased mindset. Bringing subject matter experts, anthropologists, people who understand society, ethicists and lawyers into that process is important.

Kriti Sharma, Founder, AI for Good UK

Should people have a right to a ‘human in the loop’ when computers and algorithms make important decisions about their lives? This is the next major debate in AI ethics and there will likely be a court case about this in the near future, says Ivana Bartoletti.

There is a gender imbalance in the field of AI, but women like Ivana Bartoletti, as well as Safiya Noble, author of Algorithms of Oppression and Emily Taylor, editor of the Journal of Cyber Policy, are leading conversations about its ethical challenges.

An Alibaba employee demonstrates ‘Smile to Pay’, an automatic payment system that authorizes payment via facial recognition, at the Alibaba booth during CES 2017 on 5 January 2017 in Las Vegas, Nevada. Photo: Getty Images.


4. Autonomous weapons bring grave consequences for human rights

Stuart Russell, associate fellow at Chatham House and professor of computer science at Berkeley, showed a short film created by campaigners to illustrate the dangers of autonomous weapons that can kill human targets without supervision:

‘I’ve worked in AI for more than 35 years. Its potential to benefit humanity is enormous, even in defence, but allowing machines to choose to kill humans will be devastating to our security and freedom.’

According to a 2018 Chatham House report on AI and international affairs, engineers have not yet been able to develop the technology needed for military robots to employ reason in high-stakes situations, because human reasoning remains very difficult for computers to replicate.

Autonomous weapons of mass destruction have the potential to be far worse than nuclear weapons, warns Russell: ‘We should have been worrying about this 10 years ago.’

Russell told the audience about a Turkish company manufacturing weaponized drones with facial recognition and tracking to be used against Kurdish forces in northern Syria. ‘This film is more than just speculation. It shows the result of integrating technologies that we already have.’

‘We have an opportunity to prevent the future you just saw — but the window to act is closing fast.’

A robot distributes promotional literature calling for a ban on fully autonomous weapons in Parliament Square in London on 23 April 2013. Photo: Getty Images.


5. Ultimately, the future of AI is up to us

AI could help tackle climate change, find a cure for cancer, understand the human brain and explore space. But these breakthroughs are a long way away. As cognitive scientist Gary Marcus explains, current AI systems only understand statistics, not the real world.

AI was created by humans, so humans can decide how it is used. ‘As we build tools that scale and reach more people, we must be careful,’ warns Google’s Cassie Kozyrkov. ‘The peril and the promise of AI is you don’t need to think as much.’

Stuart Russell asks,

How can AI advance the quality of human experience when it doesn’t know what that experience is? When you create super intelligent machinery that pursues an incorrect objective, you lose and they win.

Stuart Russell, Professor of Computer Science and Smith-Zadeh Professor in Engineering, University of California

While commercial organizations develop new and exciting AI technology — think driverless cars and drones that can deliver packages — this rapid development can be a double-edged sword. Research suggests that as governments lose their best and brightest engineers to the commercial sector, the autonomous systems states build risk being unsafe or compromised.

Gary Marcus is also cautious about relying on businesses to drive the future of AI: ‘The business world’s aims are not aligned with what we want; rather, they are driven by quarterly goals. We need government involvement to emphasize the kinds of AI the corporate world does not.’

We can’t put AI back in Pandora’s box — it’s here to stay.

Gary Marcus, CEO and Founder, Robust.AI

A woman touches a robotic hand produced by the Syntouch company during the Amazon Re:MARS conference on robotics and AI in Las Vegas, Nevada on 5 June 2019. Photo: Getty Images.