
3. AI in Healthcare in India: Applications, Challenges and Risks
Health systems in India face huge challenges in terms of quality, accessibility, affordability and equity. On the one hand, India is home to some of the best hospitals in the world, contributing to a growing medical tourism sector (Indo-Asian News Service, 2017). On the other, there is an acute shortage of qualified medical professionals: the ratio of available doctors to population (assuming an availability rate of 80 per cent) can be estimated at 1:1,596 (calculated from Central Bureau of Health Intelligence, 2018). The ratio is particularly low in rural areas, leaving patients to travel long distances to get even basic care. Furthermore, government spending on healthcare is among the lowest in the world: in 2016–17 only 1.4 per cent of India’s GDP was allocated to healthcare (Rao, 2018). Most Indians rely on private health providers – 79 per cent of urban households and 72 per cent of rural households accessed private health facilities in 2014 (National Sample Survey Office, 2014). The private healthcare space, however, is fragmented and unregulated, with only around 1 per cent of private hospitals in India formally accredited (Jyoti, 2017). Affordability is a further concern: the public sector bears just 30 per cent of total health expenditure, leaving patients to meet the remaining 70 per cent out of pocket (Rao, 2018). The high cost of private healthcare is a major driver of persistent poverty: in 2011, 55 million Indians were pushed below the official poverty line by healthcare costs, 38 million of them by the cost of medication alone (Selvaraj, Farooqui and Karan, 2018).
New ML or other AI technologies could help address a number of these challenges, by improving access to quality healthcare, particularly in rural and low-income settings; addressing the uneven ratio of skilled doctors to patients; improving the training and efficiency of doctors and nurses, particularly in complex procedures; and enabling the delivery of personalized healthcare, at scale.
The recently released draft National Strategy for Artificial Intelligence in India highlights that ‘[the] increased advances in technology, and interest and activity from innovators, [provide an] opportunity for India to solve some of its long existing challenges in providing appropriate healthcare to a large section of its population’ (NITI Aayog, 2018a). The government is also trying to create a national digital health infrastructure, as articulated in the recent policy documents for the National Health Stack (2018) (NITI Aayog, 2018b) and the National Digital Health Blueprint (2019) (NDHB) (Ministry of Health and Family Welfare, 2019). Key features of this digital infrastructure include the Healthlocker – an electronic national health registry and cloud-based data storage system that would serve as a single source of health data for the nation; a federated personal health records (PHR) framework that would allow data to be available both to citizens and for medical research; a coverage and claims platform that would support large health protection schemes; a national health analytics platform; and a unique digital health ID for each citizen. The government also launched Ayushman Bharat (Healthy India), or the National Health Protection Scheme (2018), which was devised to provide health insurance to families whose incomes are below the poverty line (India.gov.in, 2018). These build on the earlier National Health Policy (2017), which envisaged creating an integrated health information system linked to the Aadhaar system,2 and enhancing public health outcomes through big data analytics. These policies call for a state-backed or state-enabled digital infrastructure for data exchange, which is then accessible to the private sector for further innovation, based on open application programming interfaces (APIs) and national data portability (Press Information Bureau, 2019).
The prioritization of AI for healthcare has created an impetus for greater collaboration between government, technology companies and traditional healthcare providers. For example, NITI Aayog, the government’s official policy think-tank, is working with Microsoft and the medical technology start-up Forus Health to develop a pilot for early detection of diabetic retinopathy (DR).3 The Maharashtra state government has also signed a memorandum of understanding with NITI Aayog and the Wadhwani AI group4 to launch the International Centre for Transformational Artificial Intelligence (ICTAI), focusing on rural healthcare (Hebbar, 2018). Similarly, the Telangana state government has adopted the Microsoft Intelligent Network for Eyecare, which was developed in partnership with the Hyderabad-based LV Prasad Eye Institute (Gupta, 2018).
Since 2012, about $150 million has been invested in AI start-ups in India (NASSCOM, 2018), raised predominantly by companies that use AI. Of this, around $77 million was raised in 2017 alone. Several new start-ups are already testing and offering solutions that automate, for example, the analysis of medical tests for disease screening and diagnosis, patient management, and early detection and disease prevention (Misal, 2018). For example, Niramai, a Bangalore-based start-up, is using ML to detect breast cancer at an early stage. Another start-up, ChironX, employs deep learning algorithms for retinal abnormality detection; and the start-up SigTuple is using AI to create faster diagnostic tools that can enable better primary care and first aid. Large technology companies such as Google and Microsoft have also established research partnerships with leading hospitals to develop and test diagnostic tools (Singh, 2020).
The challenge of delivering quality healthcare at scale presents a strong case for developing AI-based solutions for healthcare in India. However, this process is unlikely to be straightforward or simple, and several questions arise. What are the likely challenges and risks of developing AI-based solutions for healthcare in India? To what extent do they differ from, or resonate with, concerns flagged in global narratives? While AI is still a nascent field in India, with only a handful of technology companies and start-ups beginning to develop and test solutions, early identification of potential risks can help avoid undesirable policy and technological lock-ins. The first part of this chapter identifies some of the key use cases, or areas in which AI is being developed and deployed. It then outlines some of the likely risks and challenges at the different stages of development, adoption and deployment.
Main use cases
Based on desk research and interviews with members of government and industry, we have identified four key areas in which AI solutions for healthcare are being developed. The deployment of AI is still at a very early stage, particularly in the form of clinical interventions. A number of the identified use cases are still at a development and testing stage. Most of the current use cases take the form of decision support systems, followed by process optimization and virtual assistants. Computer vision – one of the more advanced applications of AI – is being used to train AI algorithms to read X-rays and scans to support the processes of disease detection and diagnosis. Only a small handful of companies are developing surgical simulators, personalized health solutions and patient monitoring systems. Moreover, only a small number of interventions use NLP and speech recognition, both of which are critical for meeting diverse linguistic and literacy needs in the country. This is likely to change, however, with growing investments by big tech actors such as Google and Microsoft in the development of these capabilities (The Times of India, 2019). Google, Microsoft and IBM have multiple partnerships with private hospital groups such as Narayana, Apollo and Fortis, as well as partnerships with state governments in India. These are working on a range of solutions, including AI systems for hospital management, disease detection and prediction, as well as AI service delivery in remote areas (Sinha, 2018).
The four areas in which interventions are being developed are as follows:
Disease detection and diagnostics
ML is being used to build decision-support systems for diagnostics, as well as predictive systems for prognostication. Computer vision and DL models are being used to read medical scans such as X-rays, CT scans, PET scans and ultrasound scans. AI-based systems are being used for early detection of tumours – e.g. non-invasive, non-touch and non-radiation approaches to detecting breast cancer – as well as for predicting cancer recurrence through a risk score. AI-based applications are also being developed to analyse images of blood. SigTuple, for example, is using an AI platform called Manthana for automated analysis of blood smears as well as for the digitization of blood, urine and semen samples (ET Rise, 2018). Researchers at one of India’s leading government hospitals have developed a tool that leverages thermal imaging and AI-based tests to help predict the onset of haemodynamic shock. AI systems for diagnosing tuberculosis and DR are also under development. Platforms such as OnliDoc and Lybrate are also using AI methods to provide virtual assistance and diagnostics remotely; OnliDoc uses AI for symptom checking and treatment selection (Misal, 2018).
Process optimization
ML processes are being developed to create new efficiencies in areas such as hospital bed management and the processing of insurance claims. A few online platforms that help users find a doctor, store health records or procure medicine are using ML to improve the efficiency of these processes. Others are automating the first-level screening of symptoms, finding doctors and booking appointments. Optical character-recognition systems are also being used to scan prescriptions and check prescribed medications against the inventory. ML is also being developed for bed management and planning: predicting rates of ‘patient churn’ (turnover of beds) in order to optimize the use of beds in hospitals, as sketched below.
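To illustrate the kind of model underlying such bed-management systems, the sketch below trains a simple classifier to predict whether a bed will be vacated within 24 hours. It is a minimal sketch only: the features, data and threshold are hypothetical, and are not drawn from any system described in this chapter.

```python
# Minimal sketch of a 'patient churn' (bed turnover) predictor.
# All features and data are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: days already in bed, patient age, ward code.
X = np.column_stack([
    rng.poisson(4, n),           # days_in_bed
    rng.integers(18, 90, n),     # age
    rng.integers(0, 3, n),       # ward (0=general, 1=surgical, 2=ICU)
])
# Synthetic label: longer stays are more likely to end within 24 hours.
y = (X[:, 0] + rng.normal(0, 2, n) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A real system would of course draw on far richer hospital records, but the structure – historical stays in, a discharge probability out – is the same.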
Patient-facing applications
Chatbots are increasingly being used as conversational agents for interaction with patients. The online platform mfine, for example, handles more than 15,000 cases per month – approximately the number of patients handled by Manipal Hospitals, one of Bangalore’s largest conventional hospital groups. Several large hospitals now use chatbots to schedule appointments, converse, and collect basic details and symptoms, before handing over a case to a doctor. In the case of mental health, chatbots are being used as the first level of intervention in behavioural coaching and as a means of addressing loneliness. Systems to monitor or track patient progress are also under development. In one case, AI-driven analysis of camera feeds was used to detect emotional responses and patient fatigue, both to help monitor patients during treatment and to alert medical staff. Sensor data are also being used to monitor patient recovery and response to medication after surgery or treatment. Wearable sensors and AI-based solutions are being developed to measure vital signs and provide doctors with actionable insights.
Drug discovery and training
While at a much earlier stage of development, DL techniques are being developed to derive molecular insights for drug discovery. Surgery simulators that are continually updated are also being developed to train doctors for spine and knee surgery. A surgery simulator centre was recently opened in Delhi.
The data for developing these systems come primarily from historical records held by research institutions, non-profit organizations and medical service providers. In some cases, data are collected by developers through other platforms or healthcare services they already provide. For example, a healthcare platform that enables doctor discovery, online consultations and online medical purchases might employ the user data captured on its platform to build an AI system that optimizes and automates certain operational pipelines in its services – doctor-to-patient matching, say, or doctor discovery based on location data.
In cases where the models need to be trained on more general aspects, such as conversational ability or image recognition, developers use open-source data to get the model off the ground. However, as India does not have robust medical datasets, start-ups often use publicly available datasets from the US and Europe. If the AI algorithm is a small part of their systems, or they do not possess in-house capabilities, developers also use models available from cloud providers such as Google, Microsoft and Amazon.5 In certain cases, new data are being collected through field experiments; one such example is Wadhwani AI, which is seeking to build a model to estimate and document the weight of a child at birth for a public health census (Goyal, 2019).
Additionally, a few start-ups and initiatives have begun to provide personalized health solutions. Healthi, a digital health and wellness start-up in Bangalore, uses predictive analytics, personalization algorithms and ML to deliver personalized health suggestions. Similarly, Manipal Hospitals is using IBM Watson for Oncology, a cognitive-computing platform, to help physicians discover personalized cancer care options.
Challenges and risks
While AI could bring benefits to healthcare in India, it will certainly not provide a simple solution or fix in an already very complex health landscape involving numerous stakeholders, competing priorities, entrenched incentive systems and institutional cultures. New and complex challenges will also arise around data use, privacy and security. This section explores the challenges and risks involved in using AI systems for healthcare solutions in India across three stages: development, adoption and deployment.
Development
While a number of factors are relevant at development stage, two are of particular note: the challenges entailed in obtaining structured, complete and representative datasets; and the human, financial and infrastructural resources needed to build AI solutions. While these challenges exist globally, they are further accentuated in the Indian context.
Access to data
AI systems depend on the availability of large amounts of data, and this dependence is a major impediment to building indigenous AI interventions in India. Datasets for healthcare in India are fragmented, dispersed and incomplete.6 Building AI requires longitudinal data, but people often go to different doctors, even for the same diagnosis or treatment; even large hospitals do not have loyal patient followings, so few providers hold continuous patient histories. A large proportion of India’s healthcare providers are unaccredited and informal health practitioners, with non-standardized data collection, recording and analysis systems, and differing approaches to medical care more generally.
Digitization practices are poor, uneven and not standardized; and there is no centralized database for health records in India. Even in large hospitals, it is often the case that every time a patient visits, a new registration number or patient file is created for them, and doctors’ prescriptions are handwritten. Frontline health workers in India record patient histories in notebooks, using their own systems of annotation. The data that are readily available for AI companies are thus likely to be unrepresentative of a significant part of the population.
Health policy is also determined at the state level, not by the central government. This means there are significant variations across states, as well as differing levels of readiness to share health data. State governments are often reluctant to share population data because these may reflect poorly on their capacities for governance; misrepresentation or fudging of health data is also not uncommon.7 Incomplete or unrepresentative datasets undermine data quality and coherence, leading to erroneous algorithms and possible misdiagnosis.
Efforts to digitize the health system are now under way. Plans for an Integrated Health Information Program (IHIP) to create electronic health records (EHRs) for all citizens, and to enable the interoperability of existing EHRs, are currently in development (National Health Portal of India, 2017). However, the implementation of EHRs is not harmonized, leading to different interpretations of record digitization and data retention (Paul et al., 2018). Moreover, health workers are often overworked, and may be unable or unwilling to invest the time and effort that digitization requires. Capacity limitations have also led to time lags between a health episode and its digitization.
The government is also attempting to create a digital health infrastructure that can enable AI solutions. A recent discussion paper by NITI Aayog outlined ambitions for creating a ‘National Health Stack (NHS)’ to make both personal health records and service provider records available on cloud-based services to private healthcare actors. The NHS is expected to consist of four key elements – electronic health registries of health service providers and beneficiaries, a coverage and claims platform, a federated personal health records framework and a national health analytics platform (NITI Aayog, 2018b).
While having such open data stacks could enable private-sector innovation, a number of issues are yet to be examined by health and other relevant ministries. As noted earlier, datasets are incomplete and unrepresentative, and are scattered across thousands of healthcare providers. Making these data machine readable will be an enormous undertaking, requiring not only human and financial resources but also coordination across a vast range of healthcare providers. Moreover, the business case for healthcare providers to share their data has not been examined. The consent-based model that is being proposed as central to India’s data protection framework is also likely to be inadequate. As Mayer-Schönberger and Cukier argue: ‘In the era of big data, the three core strategies long used to ensure privacy – individual notice and consent, opting out, and anonymization – have lost much of their effectiveness’ (Mayer-Schönberger and Cukier, 2017). Beyond the issue of privacy, fundamental questions remain unaddressed as to who owns healthcare data, who should be allowed to use them, and in what way.
Blind spots in data collection
Current AI experiments are dependent on historical data available from select hospitals or research institutes. The trouble with historical data, as existing studies on AI have well documented, is that they will, by definition, reflect certain societal structures of discrimination (Gershgorn, 2018). For example, there are already documented instances of women from lower castes being denied healthcare due to the medical provider’s class elitism (Siddiqui, 2008). Similarly, the clinical trial data used to inform AI typically under-represent women, minorities and the elderly, as fewer of them are selected for such trials (Hart, 2017). As a result, the medicines formulated on the basis of these data are effective only for certain populations. Algorithms trained on these datasets thus risk having certain blind spots, which could in extreme cases lead to misdiagnosis.
In other cases, start-ups are working with open data repositories, but much of this data covers populations in other geographies. This could result in algorithms that are not easily applicable to Indian populations, again contributing to a risk of misdiagnosis. For instance, India’s Manipal Hospitals has linked with IBM Watson for Oncology to aid doctors in the diagnosis and treatment of seven types of cancer. But a number of physicians have already noted that the population on which Watson is trained does not accurately reflect the diversity of cancer patients across the world; as a result, the system is heavily biased towards US patients and standards of care (Ross and Swetlitz, 2017).
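One practical check for such blind spots is to report a model’s performance separately for each population subgroup, rather than as a single aggregate figure that can mask failures on under-represented groups. The sketch below illustrates the idea with entirely hypothetical predictions and group labels:

```python
# Minimal subgroup audit: aggregate accuracy can hide poor performance
# on an under-represented group. All data here are invented.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # ground-truth diagnoses
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])   # model outputs
group  = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"])

print(f"overall accuracy: {(y_true == y_pred).mean():.2f}")
for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: n={mask.sum()}, accuracy={acc:.2f}")
```

In this toy data the aggregate conceals the gap: group A scores 0.83 while the smaller group B scores only 0.25 – precisely the pattern one would expect when a model trained largely on US patients is applied to a more diverse population.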
Infrastructure and costs
AI systems are expensive to train, test and deploy: datasets are costly to collect, and computing power and storage space are costly to procure. Most healthcare organizations also lack the data infrastructure necessary to collect the data needed to optimally train algorithms – i.e. to test them for bias, adjust the model, and continually monitor and evaluate field outcomes. The unavailability of the digital infrastructure required to build AI systems is a further constraint. Cloud-based computing infrastructure is mostly concentrated in servers outside India; as a result, many start-ups have incorporated themselves outside India. Moreover, as the commentator Shashi Shekhar Vempati noted in a 2016 report, the lack of technological infrastructure has made it difficult to develop applications based on DL techniques. This poses a major challenge for developing AI capacities across different languages, which would be particularly relevant for the adoption of AI in primary health across diverse rural contexts (Vempati, 2016). The shortage of skilled data scientists is a further impediment.
In practice, these infrastructural constraints leave both Indian hospitals and start-ups keen to leverage AI for healthcare dependent on a few very large technology companies such as Google and Microsoft. In the case of the Forus-Microsoft partnership for DR screening, the device, built by Forus, sends an image of the screened patient to the cloud, where the algorithm screens it for DR and sends its interpretation back to the device or the doctor (Singh, 2020). Large technology companies, which already have an advantage because of the large quantity of user data they possess, are likely to gain a further advantage over smaller health actors. Lack of regulation also makes data acquisition easier for big tech firms. For example, according to Seema Singh, Google had previously attempted to obtain data from US medical establishments for developing AI solutions for eyecare, but was unable to gain approval beyond what was already in the public domain. It then turned to India, and has now struck up numerous data-sharing collaborations with eye hospitals in India (ibid.). Most Indian healthcare companies do not have the computing power to build AI systems, and rely on cloud services provided by the big tech players – Google and Microsoft in particular. In fact, as Singh has commented, big tech is primarily in the healthcare business in order to drive sales of its cloud software (ibid.).
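The Forus-Microsoft arrangement described above follows a common device-to-cloud pattern: capture an image at the edge, post it to a remote inference endpoint, and act on the returned result. A minimal sketch of such a round-trip follows; the endpoint URL, authentication scheme and response fields are entirely hypothetical, not any vendor’s actual API.

```python
# Sketch of a device-to-cloud screening round-trip.
# The URL, key and response schema below are placeholders, not a real API.
import requests

API_URL = "https://example-cloud-ml.invalid/v1/dr-screening"  # hypothetical
API_KEY = "REPLACE_ME"

def screen_fundus_image(image_path: str) -> dict:
    """Upload a retinal image and return the cloud model's screening result."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape, e.g. {"dr_grade": 2, "referable": true}
    return response.json()
```

The dependence described in the text is visible even in this toy: the device contributes only the camera and a network connection, while the model, the compute and the accumulated image data all sit with the cloud provider.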
Adoption
While adoption is likely to be shaped by a number of factors, two are likely to be especially relevant in India: the infrastructural and financial feasibility of adoption; and the degree of readiness and acceptance within established healthcare practices.
Affordability and infrastructure
Much of the dominant narrative around AI for healthcare in India focuses on the potential to reach underserved populations, particularly in rural areas lacking infrastructure or sufficient physicians, or among economically weaker sections of society where the population lacks the financial means to access medical facilities (Paul et al., 2018).
However, a closer look at AI adoption reveals that, while a number of pilots are being run for rural healthcare, the bulk of adoption is limited to large private hospitals or clinics. For example, the IBM Watson platform for cancer diagnostics was first implemented by the Manipal and Apollo hospital groups, along with other private hospitals (Kambli, 2019). This is likely to be for reasons of affordability: private medical practitioners, smaller clinics and rural hospitals are unlikely to have the financial means to adopt these solutions or to have spare resources for piloting and experimentation.8 In diagnostics, for example, it would be safe to assume that AI-based solutions are going to be adopted by institutions that already have diagnostic capabilities, such as diagnostic labs and larger hospitals. Some technology companies are exploring partnerships with existing medical device manufacturers to integrate AI solutions into existing devices. However, these solutions are likely to be unviable in the many rural settings that have poor internet connectivity and little digital infrastructure.9 In a select few hospitals in urban settings, these AI solutions are being advertised to patients as an additional layer of diagnosis available to those able to pay the additional costs, thus risking an exacerbation of existing inequities within health systems in India.
This raises important questions about the distribution of AI gains; for instance, as to whether these gains will be limited to the elite few. In contrast to the promise of improving access to quality healthcare for underserved populations, there is a risk that affordability and the lack of a sustainable business case for AI in rural healthcare could be severe constraints for adoption. At present, the biggest winners in the Indian context seem to be the big tech companies who between them provide the cloud infrastructure for most hospitals. Big tech has signed up to numerous partnerships with private hospitals, but the terms of such agreements are not publicly available. Questions around who owns the data, and how value is going to be generated and shared across various stakeholders, are yet to be addressed (Singh, 2020).
Another critical issue is the process for obtaining medical approvals for adoption and deployment. In the US, the Food and Drug Administration has only recently proposed a framework for dealing with AI in medical devices. In the Indian context, where there are low levels of institutional capacity and acute healthcare needs, big tech has been able to circumvent some of these regulatory hurdles. For example, in a press release in August 2019, Microsoft claimed it had ‘screened’ more than 200,000 people for cardiovascular diseases using ‘the AI-powered API across Apollo Hospitals’, even predicting a risk score for some; however, there is as yet no peer-reviewed publication. In another example, Google recently published a study in which its DL models examined around 600,000 chest X-rays from Apollo hospitals (Majkowska, Mittel et al., 2019). However, the same dataset was used for both training and testing – a bad practice that is nonetheless widely established, and which can be used to show high success rates that are not necessarily valid (Singh, 2020).
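The methodological flaw noted here is avoidable: performance should be reported on records the model never saw during training. The sketch below shows the difference a held-out test set makes, using synthetic data in place of X-ray features:

```python
# Evaluating on the training data inflates results; hold out a test set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy on training data:", model.score(X_train, y_train))  # optimistic
print("accuracy on held-out data:", model.score(X_test, y_test))    # honest
```

On this synthetic task the model scores close to 100 per cent on its own training data but noticeably lower on unseen records – the same optimism that makes results from a shared train-and-test dataset hard to trust.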
Institutional capacities and cultural acceptance
Adoption in rural or underserved areas is likely to depend on existing levels of government support, rather than on direct access to populations or market solutions. As noted earlier, health policy in India is set at the state level, and individual states have differing institutional capacities and knowledge systems. Many states may not have the technical expertise to oversee health regulation or to develop an ecosystem that encourages innovation and improves access. Furthermore, there are often instances of data fudging among low-performing states, and a general reluctance to make data available for public scrutiny. This could also mean that low-performing states are hesitant to permit external scrutiny and intervention.10
Moreover, AI innovations will not by themselves change the incentives that support existing ways of working in the healthcare sector (Rajkomar, Dean and Kohane, 2019). A complex web of ingrained political and economic factors, along with medical practice norms and commercial interests, determine the way healthcare is delivered. Simply adding AI applications to a fragmented system will not create sustainable change (ibid.). The healthcare sector has also traditionally been resistant to the permeation of information and communications technologies (ICT) (Safi, Thiessen and Schmailzl, 2018). Medical professionals often find it time-consuming and laborious to change their standard way of working; some also see it as a form of management control.11 In many cases, uptake has been mostly symbolic, to satisfy management or reporting requirements. Doctors still rely on and prefer handwritten files; in some cases, even where patient data are entered into a digital database, the electronic record is deleted after a printout is taken and filed away (Powell, Tyagi and Ludhar, 2018). Cultural and social attitudes are likely to shape the speed and scale of adoption. For example, a recent study found that women in rural areas tend to seek out informal healers over formal healthcare providers; this was related to factors such as ease of communication, cultural familiarity or resonance, avoidance of social stigma and geographical distance from formal healthcare facilities (Das et al., 2018).
Deployment
Three key sets of challenges will need to be considered at the level of deployment: privacy, misuse and accountability.
Privacy
Healthcare data are highly sensitive, and data breaches can have implications for an individual’s personal autonomy, dignity, and even access to work. In 2016, the hacking of a Mumbai-based diagnostic laboratory database led to the leaking of medical records (including HIV status reports) of more than 35,000 patients. This database held the records of patients across India, and many may still be unaware that their details have been exposed. The database had been subjected to multiple hacks in the previous few years, sometimes up to three times a week. However, no action had been taken by the laboratory concerned to secure the data (Express News Service, 2016).
In March 2018 India’s Ministry of Health and Family Welfare published a draft of the proposed Digital Information Security in Healthcare Act (DISHA 2018), which would enable the digital sharing of personal health records between hospitals and clinics. DISHA would provide a rights-based framework for medical privacy, conferring on patients the rights to privacy, confidentiality and security. The draft law requires that each instance of transmission of digital health data have the explicit prior permission of the data owner, and patients would have the right to refuse consent for the generation, storage and collection of their data (Ministry of Health and Family Welfare, 2017).
However, such a framework gives rise to a number of challenges. First, obtaining meaningful consent would require the entire population to have the capacity to make informed decisions about the collection and use of their data. In many cases, patients have low levels of literacy and education; in other cases, there may be consent fatigue, particularly where terms and conditions are difficult and time-consuming to comprehend (Bailey et al., 2018).
The draft legislation also does not contain provisions for instances where the digital health data of the owner have been collected without his or her consent; nor does it address the status of the data when the owner withdraws consent (Mohandas and Sinha, 2018). Furthermore, in an AI-equipped world, anonymization of data is not enough: recent studies show that triangulation across multiple data points makes it possible to identify individual users (Culnane, 2017). In addition, with ML models it is difficult – if not impossible – to identify how a particular piece of data has been interpreted and used in building an algorithmic model; it is thus computationally very hard to ascertain how a given data source is being used, or to give effect to a withdrawal of consent for its use (Shou, 2019).
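The triangulation risk is straightforward to demonstrate: a handful of quasi-identifiers, such as postcode, birth year and sex, can link a ‘de-identified’ health dataset back to a named public record. A toy sketch, with entirely invented data:

```python
# Toy linkage attack: names were stripped from the health records, yet a
# join on quasi-identifiers re-identifies patients. All data are invented.
import pandas as pd

health = pd.DataFrame({            # the 'anonymized' release
    "pincode": ["560001", "110011"],
    "birth_year": [1985, 1972],
    "sex": ["F", "M"],
    "diagnosis": ["diabetes", "hypertension"],
})
public = pd.DataFrame({            # e.g. an electoral roll or directory
    "name": ["Asha K.", "Ravi S."],
    "pincode": ["560001", "110011"],
    "birth_year": [1985, 1972],
    "sex": ["F", "M"],
})
reidentified = health.merge(public, on=["pincode", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

With real datasets the join is noisier, but the studies cited above show that surprisingly few attributes are needed to make most individuals unique.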
The current draft Personal Data Protection (PDP) bill, yet to be passed by the Indian parliament, also proposes a consent-based model. The draft legislation designates health data as ‘critical’ and ‘sensitive’, requiring a set of permissions from the owner before use. Yet health data are co-created by three actors – the patient, the hospital or doctor, and the payer – and the bill ignores their respective consent and rights at different stages of the data life cycle. Consent-based frameworks are also inexact as to how much patient data will be disclosed to private players in the system, such as insurers, pharmacies and hospitals.
Further concerns arise on account of the linking of health data with the controversial biometric identity project Aadhaar, which has already been documented as suffering multiple privacy and security breaches (Business Line, 2018). There is no clear understanding of how the Aadhaar data will be used, and who will have access to them.
There also seems to be a dissonance between existing digitization policies and privacy policies for healthcare. The draft DISHA prohibits the use of digital health data for commercial purposes, especially by insurance companies, employers, human resource consultants and pharmaceutical companies. However, the proposed NHS has a special platform dedicated to insurance claims and coverage. It remains to be seen how the data protection provisions of the DISHA are going to apply to the NHS. Current policy frameworks seem to be torn between the need to promote innovation and AI development, and the need to have the right frameworks to protect user privacy and establish user control.
The issue of consent must also be considered in the context of the doctor-patient relationship. As Karunakaran Mathiharan noted in a 2014 report, doctor-patient relationships in India are characterized by power differentials and cultural notions under which a doctor’s authority is often considered absolute and doctors are accorded a high level of trust by patients. This is especially true since a significant part of the population falls outside the ambit of formally recognized medical systems, rendering moot the issue of obtaining informed consent (Mathiharan, 2014).
Misuse
The linking of health data with other systems, and the new avenues for discrimination this may create, gives rise to significant concern. Health insurance data, for example, can be leveraged by banks to evaluate eligibility for loans: a poor performance on health indicators could be seen as an indication of an individual’s inability to work, which would increase the likelihood of non-payment. The flow of health data to companies outside the healthcare sector could lead to discrimination in the workplace or in other entitlement and social benefits.
Issues around data security are of equal importance. The Aadhaar system, for example, has already been subject to multiple data breaches, with only weak attempts having been made to improve the security infrastructure (Vidyut, 2018). The Digilocker – on which the Healthlocker would be modelled – also has inadequate security measures, raising concerns about the biometric data stored within it. The government of India’s eSign electronic signature framework, which allows an Aadhaar cardholder to digitally sign a document, entails further security concerns, since third parties are involved (Jalan, 2019); sensitive health data will become vulnerable if this model is followed. Although the NDHB refers to the creation of a Security Operations Centre (SOC) and an NDHB Security Policy, it crucially does not set out the procedures to be followed in case of a privacy breach (ibid.). Cyberattacks on medical institutions can be used to tamper with data or create fake health records. Once attackers have access to an institution’s systems and health records, they can encrypt those systems, rendering them completely inaccessible and unusable, and demand a ransom. Vulnerability to cyberattacks arises from many factors: outdated digital infrastructure, for instance, or a lack of awareness and training among medical personnel (NovoJuris Legal, 2019). As reported by multiple news agencies, in June 2018 Mahatma Gandhi Memorial Hospital, a trust-run hospital in Mumbai, was hit by a ransomware attack: administrators found their systems locked, with a message from the attackers demanding a ransom payment in bitcoin in exchange for unlocking them. It was reported that the hospital lost 15 days’ data related to billing and patients’ histories, although it did not incur any additional financial loss (Purandare, 2018).
Accountability
Finally, there is the question of accountability. Who is to be held accountable in the case of misdiagnosis or error? On the one hand, AI systems are currently envisaged as decision-support systems: they are intended not to replace doctors, but to provide a first layer of screening. In other words, the expectation is that there will be a ‘human in the loop’ to interpret results and point out any errors. However, it is worth asking what type of professional this human might be, and what their capacities and incentives might be to check the validity of suggestions produced by an AI system. In rural settings, for example, front-line health workers may not have the knowledge, training or confidence to interpret and challenge AI-generated results. In contexts where doctors are overwhelmed by the number of patients they are treating, and are under pressure to demonstrate efficiency, they may not have the time or incentive structures to correct AI systems in the event of an error. Over-reliance on decision-support systems can create complacency, leading to errors, whether through blind adherence or inaction (Wickens et al., 2015). These concerns are further accentuated by the weak regulation of the Indian health sector: in recent years there have been numerous reports of negligence and malpractice at even well-established private hospitals, and the main reason such violations are all too common is the lack of strict and uniform regulation of healthcare in the country (Narayanan, 2017).
Moreover, the question of accountability becomes more complicated when individual health data are used to aid other decision-making processes – such as credit or loan applications – particularly since the ways in which predictive and self-learning algorithms draw inferences or detect patterns are hard to identify. Within the AI programmer community, there is a movement to explore fairness, accountability and transparency (FAT) frameworks. But, as others have noted, fairness is a property of social relations, not of code (Selbst et al., 2019). Algorithms must be audited not only for efficiency and accuracy, but also for issues of social context such as biases and knock-on effects. For instance, the implications of the use of an AI algorithm by a doctor, as opposed to a front-line community health worker (such as an accredited social health activist – ASHA), could be very different, simply due to each practitioner’s level of training and ability to be critical of the output.
The use of AI is likely to transform patient-doctor relations, and related systems and rituals of trust. Much of what transpires between doctors and patients relates to relationship-building, and not merely the provision of medical expertise. We then need to ask what safeguards are needed to build trust and encourage buy-in; and how patients are brought into the processes of deliberation and explanation.