1. Introduction
Artificial intelligence (AI), a technology that has captivated multiple sectors, is being hailed as a tool that will help provide universal access to quality medical care, including through the development and improvement of diagnostics, personalized care, the prevention of illness and the discovery of new treatments. Within the next five years, the use of AI in medicine is expected to increase tenfold (Perry, 2016).
AI can be defined as the use of coded computer software routines (algorithms) with specific instructions to perform tasks for which a human brain is normally considered necessary. Such software can help people understand and process language, recognize sounds, identify objects and use learned patterns to solve problems. Machine learning (ML) is a way of continuously refining an algorithm: large amounts of data are processed automatically, allowing the algorithm to adjust itself with the aim of improving the precision of the AI system (Zandi, 2019). Put simply, AI enables computers to model intelligent behaviour with minimal human intervention, and it has been shown to outperform human beings at specific tasks. In 2017, for instance, it was reported that deep neural networks (a branch of AI) had been used successfully to analyse skin cancer images with greater accuracy than a dermatologist, and to diagnose diabetic retinopathy (DR) from retinal images (The Lancet, 2017).
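To make the idea of data-driven refinement concrete, the short Python sketch below is a hypothetical illustration, not drawn from the sources cited here. It uses the open-source scikit-learn library and a synthetic dataset to retrain a simple classifier on progressively larger samples of data; as more examples are supplied, its accuracy on held-out cases typically improves, which is, in miniature, the refinement process that ML performs automatically.

```python
# Minimal illustration (hypothetical, not from the cited sources): a classifier
# refined with progressively more training data, its held-out accuracy improving.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a medical dataset (e.g. patient features -> diagnosis).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# "Refine" the model by retraining it on larger and larger slices of the data
# and measuring how well it generalizes to examples it has never seen.
for n in (50, 200, 1000, len(X_train)):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>4} examples -> test accuracy {acc:.2f}")
```

Real medical AI systems differ from this sketch in scale and technique (deep neural networks rather than logistic regression, images or clinical records rather than synthetic features), but the underlying loop of feeding in more data and re-fitting the model is the same.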
However, the definition of AI is evolving. Beyond the more technical definition given above, AI is also perceived as something resembling human intelligence, aspiring to exceed the capabilities of any single underlying technology. It is conceived as an interaction of technologies that gives a machine the ability to fulfil a function that ‘feels’ human. The ability of a machine to perform any task that a human can has been termed Artificial General Intelligence (AGI). AGI systems are designed with the human brain as a reference. However, AGI has not yet been achieved; experts have recently forecast its emergence by around 2060 (Joshi, 2019).
The case for examining the potential opportunities and risks of implementing AI systems for healthcare purposes has been given new importance by the COVID-19 pandemic, which plunged the world into a public health crisis of unprecedented proportions from early 2020. AI systems could help overburdened health administrations plan and rationalize resources, predict new COVID-19 hotspots and transmission trends, and provide a critical tool in the search for drug treatments and vaccines. However, as governments around the world scramble to adopt technological solutions (many of which rely on ML systems) to help them contain and mitigate the crisis, questions about the ethics and governance of AI are arising with equal urgency. There are already growing concerns about how the current crisis will expand governments’ surveillance capacities and accentuate the power and influence of so-called ‘big tech’ companies. These concerns are particularly acute for developing countries such as India, where weak public health infrastructure increases the appeal of AI-based solutions, even while the normative and regulatory frameworks required to steer AI trajectories remain underdeveloped.
This paper describes some of the main opportunities and challenges of using AI in healthcare. It then turns to a case study of the use of AI for healthcare purposes in India, discussing key applications, challenges and risks in this context.