
2. AI and Healthcare
What is AI?
Artificial intelligence for health includes ML, natural language processing (NLP), speech recognition and speech synthesis (speech-to-text and text-to-speech), image recognition and machine vision, expert systems (computer systems that emulate the decision-making ability of a human expert), robotics, and systems for planning, scheduling and optimization.
ML is a core component of AI that provides systems with the ability to learn and improve from data automatically, without being explicitly programmed; most contemporary AI applications depend on it. Computer programmes access data and use it with the aim of learning without human intervention or assistance, adjusting their actions accordingly (Expert System, 2017). Deep learning (DL), a type of ML inspired by the human brain, uses multi-layered neural networks to find complex patterns and relationships in large datasets that traditional ML may miss (Health Nucleus, undated).
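To make that distinction concrete, the following is a minimal, purely illustrative sketch (in Python with numpy; it is not drawn from any system cited in this paper) of a two-layer neural network learning the XOR pattern, a simple non-linear relationship that a single linear model cannot capture:

```python
import numpy as np

# XOR: a non-linear pattern that a single linear layer cannot learn,
# but a small two-layer ("deep") network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of squared error, propagated layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # typically approaches [[0], [1], [1], [0]]
```

The hidden layer is what lets the network represent the non-linear pattern; stacking many such layers on large datasets is, in essence, what DL does.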
NLP is a subfield of AI that helps computers understand, interpret and manipulate human language. It draws from many disciplines, including computer science, linguistics, information engineering and computational linguistics, in pursuit of filling the gap between human communication and computer understanding (SAS, undated).
Speech recognition is the ability of a machine or programme (a mix of software and hardware) to identify words and phrases in spoken language and convert them to a machine-readable format; the reverse conversion, from text to spoken output, is speech synthesis. Speech recognition is also known as automatic speech recognition (ASR) or voice-to-text.
The promise of AI for healthcare
The World Economic Forum has proposed four ways in which AI can make healthcare more efficient and affordable: enabling tailored treatment plans that will improve patient outcomes, and therefore reduce the cost associated with complications arising from treatment; permitting better and earlier diagnosis that reduces human error; enabling accelerated drug development; and empowering patients to take a more active role in managing their health (World Economic Forum, 2018). One of AI’s main attractions is the potential savings it could bring to the healthcare sector. According to a study by Accenture, when combined, key clinical AI applications could create $150 billion in annual savings for the US healthcare economy by 2026. AI can help minimize preventable and rectifiable system inefficiencies (such as over-treatment, improper care delivery or, indeed, care delivery failures), ensuring substantially more streamlined and cost-effective health ecosystems (Accenture, 2017).
A further benefit of the application of AI to healthcare settings would be the liberation of health workers from hours of mundane data work. They would thus be able to focus more on patient care, leaving to technology the task of examining and analysing clinical data. This, for example, would allow healthcare practitioners to assess patients with greater precision, which in turn would translate into faster and more accurate diagnoses. AI can provide a diagnosis that would have taken a doctor (or a team of doctors) many hours to reach. It can process a huge volume of medical images and scans in a fraction of the time that the same task would take a human expert. In this respect, AI is already revolutionizing the field of radiology by improving workflows and providing diagnostic and imaging assistance.
Likewise, the use of AI for administrative purposes will free up resources that can be focused on delivery of care, the creation of new drugs and therapies, and the conducting of research to eradicate diseases. Doctors, nurses and other healthcare workers will be relieved of laborious tasks that contribute to burnout, thereby also reducing human errors in the practice of medicine (Ash, Petro and Rab, 2019). NLP, for example, is used to analyse unstructured clinical notes, prepare reports and transcribe interactions with patients. Robotic process automation (which in fact involves computer programmes hosted on servers, rather than robots) is used for repetitive tasks such as prior authorization (required by some health insurance schemes), updating patient records or billing (Davenport and Kalakota, 2019).
At least for high-income countries, one of several AI applications that has garnered significant interest is robot-assisted surgery. ‘Cognitive robotics’ can integrate information from pre-operative medical records with real-time operating metrics to physically guide and enhance the physician’s instrument precision. The technology incorporates data from actual surgical experiences to inform new, improved techniques and insights. Robotics outcomes include a 21 per cent reduction in length of stay (Accenture, 2017). The value of robotic solutions will increase further as their development and use progresses to a greater diversity of surgeries.
According to Accenture, the benefits from AI accrue incrementally, from automated operations, precision surgery and preventive intervention (thanks to predictive diagnostics), and within a decade they are expected to fundamentally reshape the healthcare landscape (Forbes, 2019).
Low- and middle-income countries (LMICs) will, it is hoped, in time have access to costly and highly sophisticated AI applications such as robot-assisted surgery. Currently, healthcare systems in low-resource settings are dealing with shortages of workers, medical equipment and other infrastructure. But AI tools could optimize existing resources and help overcome workforce shortages, while also greatly improving healthcare delivery and outcomes in ways never previously imagined (USAID, 2019). The greatest near-term value of AI in LMICs is considered by some to be in squeezing more value out of available data through ML. Some key applications of AI for health in LMICs are expected to increase access to healthcare as well as to enhance its quality. Such programmes focus on: monitoring and assessing population health, and targeting public health interventions to better effect; enabling frontline health workers – including community health workers – to better serve their patients, using AI-powered tools such as mobile phone apps; developing virtual ‘health assistants’ that are able to coach patients in managing their conditions or to advise them when to seek care; and developing tools to help doctors diagnose and treat their patients (ibid.).
Through the use of data science (a multidisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data), AI is being used in a diverse set of therapy areas, including wellness and lifestyle management, diagnostics, wearables and virtual assistants; it is also being used for disease surveillance to predict, model and slow the spread of disease in epidemic situations, including in resource-poor settings (ibid.).
A recent example is an ML tool that has been used in the Philippines to identify weather and land-use patterns associated with the transmission of dengue fever, a mosquito-borne disease that has spread rapidly around the globe in recent years (Wahl et al., 2018). The tool, produced by AIME (Artificial Intelligence in Medical Epidemiology, a US-based company), predicts dengue occurrence with increasing accuracy as it accumulates data. AIME’s technology has been deployed in Rio de Janeiro, Singapore, the Dominican Republic and two states in Malaysia. The platform provides users with three months’ advance notice of the predicted geolocation and date of the next dengue outbreak. Its customized analytics platform also makes sense of its users’ public health data and provides time charts, historic mapping of diseases and ‘rumour reports’ from social media (World Wide Web Foundation, 2017).
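AIME’s models, features and training data are proprietary, so the following is only a schematic sketch of the general approach – training a classifier on environmental features against historical outbreak labels – using synthetic data, scikit-learn and an invented labelling rule:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative only: AIME's actual models and features are not public.
# Synthetic weekly observations: [rainfall_mm, mean_temp_c, humidity_pct].
rng = np.random.default_rng(42)
X = rng.uniform([0, 20, 40], [300, 35, 100], size=(500, 3))

# Toy label rule standing in for historical outbreak records:
# outbreaks more likely when warm, wet and humid.
y = ((X[:, 0] > 150) & (X[:, 1] > 27) & (X[:, 2] > 70)).astype(int)

# Train on the first 400 weeks, evaluate on the remaining 100.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])
print("held-out accuracy:", model.score(X[400:], y[400:]))
```

A real system would replace the invented rule with confirmed case reports, and the synthetic weather columns with satellite, land-use and surveillance data.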
At a policy level, these new sciences offer the possibility of supporting health policy decision-making, a better integration of healthcare with other sectors, and substantial time and efficiency savings in undertaking research and driving quality improvement initiatives (Colclough et al., 2018). At a time when transformation in health systems is increasingly needed to deal with the new challenges of a growing, ageing population that suffers from a number of medical conditions, the use of AI to process the datasets associated with these cases promises to be invaluable.
It is precisely AI’s ability to process and analyse datasets at speed that is one of its key strengths. With more countries perfecting the use of health informatics and electronic medical records (EMR), AI will become increasingly useful. In India, 30–60 per cent of the population have indicated that they would want their health data to be shared to improve care delivery, to permit research to be conducted and to inform health planning (ibid.). In Kenya, an open EMR platform has contributed to improving child and maternal health and HIV/AIDS treatments in rural areas by helping to achieve more complete data collection. The cloud-based EMR system was deployed in western Kenya in 2013, and a study showed that its implementation resulted in a 42.9 per cent improvement in the completeness of data (including screening for hypertension, tuberculosis and malaria, HIV/AIDS status, and antiretroviral therapy (ART) status of HIV-positive women) (Haskew et al., 2015).
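The ‘completeness’ metric reported in the Kenyan study can be illustrated with a short sketch: the share of expected fields actually filled across a set of records. The field names below are drawn from the screening categories mentioned above, but the code and the sample records are purely illustrative:

```python
# Illustrative completeness calculation; not code from the Haskew et al. study.
EXPECTED_FIELDS = ["hypertension_screen", "tb_screen", "malaria_screen",
                   "hiv_status", "art_status"]

def completeness(records):
    """Fraction of expected fields that are filled across all records."""
    filled = sum(1 for r in records for f in EXPECTED_FIELDS
                 if r.get(f) not in (None, ""))
    return filled / (len(records) * len(EXPECTED_FIELDS))

# Hypothetical paper-based records (sparse) vs EMR records (fuller).
paper = [{"hiv_status": "negative"}, {"tb_screen": "done"}]
emr = [{"hypertension_screen": "done", "tb_screen": "done",
        "malaria_screen": "done", "hiv_status": "negative",
        "art_status": ""}] * 2

print(f"paper: {completeness(paper):.0%}, EMR: {completeness(emr):.0%}")
```

Structured EMR entry raises this figure mechanically, because every consultation prompts for the same expected fields.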
The use of NLP technologies allows machines to identify key words and phrases in text and to determine its meaning. NLP algorithms are used, for example, to simplify clinical documentation and enable voice-to-text dictation. These technologies are in increasing demand among healthcare providers challenged by electronic health record (EHR) overload, as they allow clinicians to interact with patients and produce accurate records of consultations without having to type at the same time. Both Google and Amazon are exploring how to turn Google Home and Alexa, their popular ambient home computing devices, into innovative healthcare ‘helpers’. In May 2018, for instance, there were reports that Amazon was planning to leverage Alexa for chronic disease management and home care (Health IT Analytics, undated). NLP is also being used to guide cancer treatments in low-resource settings, including in Thailand, China and India, where AI mines the medical literature and patient records – including doctors’ notes and lab results – to provide treatment advice (Wahl et al., 2018).
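A highly simplified sketch of the first step – spotting key clinical terms in free text – is shown below. Production clinical NLP relies on trained language models and standard terminologies (e.g. SNOMED CT); the tiny lexicon and sample note here are purely illustrative:

```python
import re
from collections import Counter

# Illustrative mini-lexicon; real systems use curated clinical vocabularies.
TERM_LEXICON = {
    "hypertension", "type 2 diabetes", "shortness of breath",
    "chest pain", "metformin",
}

def extract_terms(note):
    """Count occurrences of known clinical terms in a free-text note."""
    text = note.lower()
    found = Counter()
    for term in TERM_LEXICON:
        hits = re.findall(r"\b" + re.escape(term) + r"\b", text)
        if hits:
            found[term] = len(hits)
    return found

note = ("Patient reports chest pain and shortness of breath. "
        "History of hypertension; currently on metformin.")
print(extract_terms(note))
```

Voice-to-text dictation adds a speech recognition step in front of this: the consultation audio is transcribed first, and term extraction then structures the transcript into the record.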
As health-focused IT tools such as NLP become more advanced, the potential of using them to improve the care continuum can only become greater. In resource-poor environments, AI and its complementary technologies could help to overcome hurdles in healthcare systems. High levels of mobile phone penetration, developments in cloud computing, substantial investments in digitizing health information and the introduction of mobile health (mHealth) applications are providing a wide range of opportunities for AI to improve individual and population health.
Ethical, legal and other challenges in the use of AI in healthcare
AI applications, along with technologies such as big data and robotics, are expected to have transformational and disruptive potential within the healthcare sector – across various areas such as hospitals and hospital management, pharmaceuticals, mental health and well-being, insurance, and predictive and preventive medicine. However, these applications introduce new risks and challenges that will require policy and institutional frameworks to guide AI design and use. This paper focuses mostly on challenges at the individual level.
With the increasing availability of health-related data, and the use of AI to analyse such data for medical purposes, ethical, technical and resource-related questions will need to be answered. Quality, safety, governance, privacy, consent and ownership challenges remain under-addressed. There is also concern among those examining AI design and use that humans need to be able to understand why and how an AI system arrived at a specific decision. The processes AI follows, and the speed with which it deals with large amounts of information, are beyond human perception; many of the models created by ML cannot be easily examined, so it is often impossible to determine specifically how a given conclusion was reached. This lack of explainability hampers the ability to fully trust AI systems (Schmelzer, 2019).
In LMICs, some of the challenges of integrating AI into healthcare systems relate to the hurdles of scaling digital health technologies. Other challenges are linked to the fact that ‘[…] LMIC governments lack the resources and technological capabilities to create consistent policies on population health, such as disease burden analysis and monitoring and treatment protocols for use, across their various regions or states. This creates a barrier for AI tools for population health to scale at a national level.’ (USAID, 2019) In terms of quality, AI requires high-quality data in order to produce coherent results. In low-resource settings, this is not always available. A strong digital health infrastructure is required to operate AI tools. Low EMR adoption rates (estimated at less than 40 per cent in LMICs) constitute one of the barriers to feeding AI machines with the necessary historic and real-time patient data (World Bank, 2019; USAID, 2019). Even in high-income countries, the quality of data is a factor determining the speed at which AI tools are put into use. The average UK hospital, for instance, has hundreds of different systems that are not integrated with each other. There is a need for ‘an interconnected data infrastructure with fast, reliable and secure interfaces, international standards for data exchange as well as medical terminologies that define unambiguous vocabularies for the communication of medical information’ (Lehne et al., 2019).
Protection of citizens’ health data is a key area of responsibility for those handling sensitive data for AI purposes. Healthcare organizations will have to respond to growing cybersecurity challenges, and policymakers will have the responsibility of enacting laws that ensure careful governance and security arrangements for stored data. For example, Google DeepMind’s partnership with the Royal Free London NHS (National Health Service) Foundation Trust was severely criticized in 2017 for inappropriate sharing of confidential patient data and their use in an app called Streams, designed to detect acute kidney injury and alert clinicians. The Royal Free failed to comply with the UK’s Data Protection Act when it handed over the personal data of 1.6 million patients to DeepMind. The ruling of the Information Commissioner’s Office (the UK’s independent authority set up to uphold information rights in the public interest, and to promote openness by public bodies and data privacy for individuals) was based largely on the facts that the app continued to undergo testing after patient data were transferred, and that patients were not adequately informed that their data would be used as part of the test (Information Commissioner’s Office, undated; Hern, 2017).
Such instances demonstrate the challenges in developing ethical and legal frameworks for data sharing, interoperability of systems, and the ownership of software produced from such partnerships, as well as the legal framework for clinical responsibility when errors occur (The Lancet, 2017).
Privacy concerns are also a critical consideration in the use of data. Health data are most often held by governments, which could be tempted to sell such data on to private companies. In many cases the users can become the ‘product’ (in effect, patients’ data become monetizable). For example, in the US, the Walgreens pharmacy chain collects data contained in prescriptions and sends out mailshots about clinical trials related to the customer’s illness. For this service, Walgreens is paid a fee by those recruiting patients for clinical trials and by pharmaceutical companies. Kalev Leetaru, writing in Forbes magazine, asserts that: ‘[…] Walgreens does not explicitly inform customers at purchase time that their prescription may be used to target them for medical trials and offer them the ability to opt-out of having their private medical information used in such a manner […]’ (Leetaru, 2018). If companies such as Walgreens are able to do this, technology companies that gather patient information could likewise sell individuals’ sensitive health data to third parties.
There are further ethical considerations. What obligations do technology companies have to alert populations if their AI produces results that reveal society-wide concerns, such as a potential outbreak of a highly contagious infectious disease? Even if technology companies using AI for health purposes report their findings to governments, history has shown that governments can downplay health risks or fail to alert citizens when economic interests are involved. For example, fears over social and economic stability, as well as the political structure involved in alerting of a disease outbreak, led Chinese leaders to delay reporting the outbreak of Severe Acute Respiratory Syndrome (SARS) in 2003 (Huang, 2004).
Governance is challenging in this realm. Health, technology and data protection policies differ greatly across countries and regions, with many LMIC governments lacking the resources and technological capabilities to create consistent policies on population health. At the same time, many of these countries also lack regulations on the use of data and technology that are intrinsic to AI development.
Accuracy must also be considered. A recent report by the UK Information Commissioner’s Office highlights the implications around accuracy of personal data during collection, analysis and application. For example, the results of data analysis may not be representative of the wider population, and hidden biases in datasets can lead to inaccurate predictions about individuals (Information Commissioner’s Office, 2017). Responsibilities are also not clearly defined. Considering the intricate processes involved in AI-produced results, from data collection to algorithm creation and use, how should a government or regulatory system understand who is responsible for flawed AI-derived recommendations?
Algorithms inevitably reflect the bias of their training data, and AI tools tend to show a bias reflecting conditions in the high-income countries where they are developed. This is because the algorithms require millions of historical health datapoints – often missing in low-resource settings – to provide accurate outputs appropriate to the geography and population (USAID, 2019). Questions about how an AI’s algorithms were designed, and with which inputs, remain to be answered; they are central to assessing the tools’ overall utility and whether they are appropriate for high-, low- and middle-income settings. A recent study by Facebook’s AI Lab demonstrates this hidden bias. Five off-the-shelf object recognition algorithms (Microsoft Azure, Clarifai, Google Cloud Vision, Amazon Rekognition and IBM Watson) were asked to identify household items collected from a global dataset. ‘[The] object recognition algorithms made around 10 per cent more errors when asked to identify items from a household with a $50 monthly income compared to those from a household making more than $3,500. The absolute difference in accuracy was even greater: the algorithms were 15 to 20 percent better at identifying items from the US compared to items from Somalia and Burkina Faso.’ (Vincent, 2019)
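The kind of disparity the Facebook study measured can be expressed as a simple per-group accuracy audit. The sketch below uses made-up predictions purely to illustrate the calculation, not data from the study itself:

```python
from collections import defaultdict

# Illustrative bias audit: compare a model's accuracy across groups.
def accuracy_by_group(records):
    """records: (group, predicted_label, actual_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("high_income", "soap", "soap"),
    ("high_income", "toothbrush", "toothbrush"),
    ("low_income", "food", "soap"),   # misclassified household item
    ("low_income", "soap", "soap"),
]
print(accuracy_by_group(records))
# A persistent gap between groups signals bias in the training data.
```

Audits of this form are one practical check that governments and developers could require before AI tools are deployed across income settings.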
Health-related AI applications will require strong infrastructural, legal and ethical frameworks. Government-led initiatives to develop and introduce health-related AI applications, across high-, low- and middle-income settings, need to consider these issues. Governments – as well as businesses and non-profit organizations developing AI solutions – also need to consider business model sustainability. This will be a challenge in low-resource contexts, where many of the key actors will not have the financial means to purchase these tools. As one private insurance company representative in East Africa noted: ‘I absolutely see the value of AI risk management tools and I realize that this would save us money, but I do not have the budget to buy something now which will save me money 12 months down the line.’ (USAID, 2019) This ‘applies to many LMIC governments that understand the value of these AI tools, but do not have the resources to buy them, or the human resources or internal IT capabilities to implement them’ (ibid.).
Equity issues do not just apply from country to country, but also arise out of the so-called ‘digital divide’, where different parts of the same society have differing levels of access to advanced technologies such as 4G networks and smartphones. AI tools for health that are enabled by mobile phone technology are only one example of how more connected populations and patients will benefit from services such as medical advice and information through devices to which poorer populations may not have access.
Governments engaging with integrating AI tools into healthcare systems will need to take into consideration not just ethical and legal issues (such as privacy, confidentiality, data security, ownership and informed consent) but also fairness, if AI and related technologies are to contribute to achieving the health-related Sustainable Development Goals (SDG) targets. Ubenwa, an AI application under development in Nigeria, aims to address SDG 3.2 (by 2030, end preventable deaths of newborns and children under five years of age) by providing diagnostics that are 95 per cent cheaper than existing clinical alternatives. The AI used is an ML system that takes an infant’s cry as input and analyses the amplitude and frequency patterns in the cry to provide an instant diagnosis of birth asphyxia. Test results from Ubenwa’s diagnostic software have shown a sensitivity of more than 86 per cent and a specificity of 89 per cent. The algorithm has been used in a mobile app that harnesses the processing capabilities of smartphones to provide near-instantaneous assessment of whether or not a newborn has, or is at risk of, asphyxia (Louise, 2018). Not only is Ubenwa cheaper, and therefore more easily available in low-resource settings; it is also non-invasive (Ubenwa.ai). Technology trajectories and their impacts will be shaped by local socio-economic contexts, and thus will not be the same everywhere.
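The two figures reported for Ubenwa – sensitivity and specificity – are standard diagnostic metrics computed from a confusion matrix, as the short sketch below shows. The counts are illustrative, chosen only to match the reported rates:

```python
# Sensitivity and specificity from a confusion matrix.
# tp/fn/tn/fp counts below are illustrative, not Ubenwa's trial data.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # share of true asphyxia cases flagged
    specificity = tn / (tn + fp)  # share of healthy infants correctly cleared
    return sensitivity, specificity

# e.g. 86 of 100 true cases flagged; 89 of 100 healthy infants cleared.
sens, spec = sensitivity_specificity(tp=86, fn=14, tn=89, fp=11)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```

High sensitivity matters most for a screening tool of this kind, since a missed case of birth asphyxia is far more costly than a false alarm that prompts a clinical check.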
India provides a relevant and useful case study to contextualize some of these issues. The government of India recently released its AI strategy, and healthcare is a priority sector for AI’s application in the country (Niti Aayog, 2018a). The government seeks to position India as a ‘garage’ for developing AI solutions for the rest of the world. Many of the challenges facing India – from the types of disease to the quality of the health infrastructure – are shared by a number of other developing economies.