Artificial intelligence, often abbreviated as “AI”, is broadly defined, in John McCarthy’s classic formulation, as “the science and engineering of making intelligent machines, especially intelligent computer programs”. In layman’s terms, it is the discipline concerned with making machines that can ‘think’ for themselves and make decisions independently based on their knowledge and experience, the way humans do. Although a relatively young field, it is expected to disrupt and redefine most sectors, including healthcare, within the next decade, in multiple ways. Some of the capabilities AI programs have already demonstrated include:
- Interpreting radiographs
- Detecting cancers and heart disease at early stages
- Deciding the best course of medical/surgical intervention for a given condition based upon available evidence
- Performing selected precision surgeries in pilot projects
- Predicting outbreaks of epidemics
Global corporate giants like IBM, Tesla, Google, Intel and Microsoft have already started working on long-term plans for taking the healthcare industry by storm with their AI products. Predictions range from AI completely replacing your family physician one day, to it wiping out mankind by becoming self-aware as in the movies The Terminator and The Matrix; so far, both extremes seem fairly distant at the current stage of progress. In such a scenario, one cannot afford to overlook the effects and changes this potent disruptive technology can bring to our day-to-day healthcare.
Benefits that AI can yield for public health –
Active disease surveillance – AI ‘crawler bots’, computer programs that ‘crawl’ (actively parse) incoming electronic health records to continuously look for, identify and cross-verify patterns of disease epidemiology, can take a great deal of burden off the backs of public health staff. Where broader, all-round surveillance data are available, AI can go a step further and actually predict infectious disease outbreaks by combining and analysing the meteorological, economic, demographic and geographic data of a region. The Shenzhen Center for Disease Control and Prevention (CDC) in China is already running one such facility even as this article is being written. Experiments are also underway to create AI bots that crawl social media to detect mental illness from users’ online behaviour and attempt to deliver counselling without revealing themselves. However, such extensive surveillance programs carry huge risks for society if they fall into the wrong hands.
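The statistical core of such a surveillance bot can be surprisingly simple. The sketch below (in Python; the thresholding rule and the disease figures are invented for illustration, not drawn from any real CDC system) flags a disease when its current weekly case count climbs well above its historical baseline:

```python
from statistics import mean, stdev

def flag_outbreaks(history, current, threshold=2.0):
    """Flag diseases whose current weekly case count exceeds
    mean + threshold * stdev of their historical weekly counts."""
    alerts = []
    for disease, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if current.get(disease, 0) > mu + threshold * sigma:
            alerts.append(disease)
    return alerts

history = {
    "dengue":  [12, 15, 11, 14, 13, 12],   # stable baseline
    "measles": [3, 4, 2, 3, 4, 3],
}
current = {"dengue": 14, "measles": 19}    # measles spikes this week
print(flag_outbreaks(history, current))    # -> ['measles']
```

A production system would of course fold in the meteorological, demographic and geographic signals mentioned above; the underlying idea, comparing fresh counts against a learned baseline, stays the same.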
Analyzing the big data churned out by precision medicine studies – AI can work wonders in identifying and abstracting patterns among the multitudes of biomarkers involved in pathological processes, a task that is virtually impossible to perform manually or with conventional computer programs. A recent collaborative study on childhood asthma by CSIR and AIIMS scientists is a very good prototype of what AI and Big Data can work out together for medical research.
Automated Telemedicine - Perhaps the most sought-after advantage AI can bring to the public health sector today is cost-efficient access to specialist opinion and radiodiagnosis. In developing countries, where specialists are rarely available in rural areas and quackery is rampant, AI can prove a relatively safer and cheaper alternative for primary diagnosis and management at remote locations, filling a much-needed gap in rural public healthcare systems.
Formulation of best practices and protocols – AI can largely automate the meta-review of research literature and the formulation of best practices/standard protocols based upon the available clinical evidence and statistical healthcare data.
Managing Emergency Medical Services control centers and coding/billing operations – A pilot project for an AI-based EMS centre is already underway in Belgium as this article is written. Medical coding and billing, being operations that run largely on a fixed set of algorithms and process flows, could also be completely taken over by AI in the future.
Nutrition strategy and result monitoring in focus areas – The decision-making involved in identifying the required nutritional interventions for a particular targeted geographical block, based upon available historical health data, can be handled by AI systems with much better precision and speed than humans can manage. Likewise, the monitoring of results and the real-time strategy-adjustment process can make great use of analytical inputs from AI.
Real-time verification and paperwork in medical insurance systems – AI has the potential to integrate and automate medical claims processing systems that are currently slow because software and standard formats are incompatible across the multiple organizations in the process flow. For example, a properly trained AI program could map a case’s CPT codes to ICD-10 codes, verify the result with the doctor, generate a report in the format required by the concerned insurance provider, check coverage against the policy, and communicate the report to the insurer’s own AI program and any coverage exclusions to the caregiver automatically; all within seconds of the Electronic Health Record being created.
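A minimal sketch of that pipeline might look like the Python fragment below. The CPT-to-ICD-10 ‘crosswalk’ here is a toy, two-entry dictionary invented purely for illustration (real code relationships are many-to-many and far larger), and the claim format is hypothetical:

```python
# Hypothetical claims pre-processing sketch; the mapping table and
# claim format are invented for illustration, not real coding data.
CPT_TO_ICD10 = {            # toy crosswalk
    "99213": "J06.9",       # office visit -> acute upper resp. infection
    "71045": "R07.9",       # chest X-ray  -> chest pain, unspecified
}

def build_claim(ehr_record, covered_codes):
    """Turn an EHR entry into an insurer-ready claim and flag exclusions."""
    cpt = ehr_record["cpt"]
    icd10 = CPT_TO_ICD10.get(cpt)
    if icd10 is None:
        # No known mapping: fall back to a human coder.
        return {"status": "needs_manual_review", "cpt": cpt}
    return {"status": "ready", "cpt": cpt, "icd10": icd10,
            "covered": icd10 in covered_codes}

claim = build_claim({"patient": "P001", "cpt": "99213"},
                    covered_codes={"J06.9"})
print(claim["status"], claim["covered"])   # -> ready True
```

The insurer-side program would then consume the ‘ready’ claims automatically, while anything without a known mapping drops back to a human coder; this human fallback is exactly where the doctor-verification step described above would sit.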
Limitations of AI
However, these benefits are not without their downsides, which are more ethical than technical.
Responsibility issues – Who is to blame if an AI program misses a possible diagnosis, or makes a false diagnosis from a rare or faulty radiograph? What if a real-time monitoring AI system fails to identify a referral-worthy case in time? Rigorous AI training can dramatically reduce the error rate, and thereby the actual incidence of such cases, but the question of who bears medicolegal responsibility remains.
Unique cyber security issues – Apart from conventional cyber security threats like hacking and ransomware that loom large over all kinds of online systems, AI systems are vulnerable to a new breed of attack, often called ‘data poisoning’ (a kind of social engineering aimed at the machine itself). Since an AI program ‘learns’ from its experiences and uses them in making future decisions, malicious attackers could inject false learning data into an unknowing AI algorithm to make its decisions go awry, or worse, manipulate its decision-making to cause loss to particular stakeholder(s) only. Such injections would become a part of the bot’s memory undetectably and permanently unless robust detection and removal mechanisms are deployed first; work on such defences still seems to be at a primitive stage.
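To see how such poisoning works in principle, consider this deliberately naive Python sketch. The ‘model’, a running mean with a tolerance band, is a stand-in invented for illustration; real clinical AI is far more complex, but the drift mechanism is the same:

```python
class OnlineThreshold:
    """Naive anomaly detector that trusts every reading it sees:
    'normal' is a running mean, learned from incoming experience."""
    def __init__(self, initial_mean, tolerance):
        self.mean = initial_mean
        self.tolerance = tolerance
        self.n = 1

    def is_normal(self, value):
        return abs(value - self.mean) <= self.tolerance

    def learn(self, value):
        # Updates the running mean; no check that 'value' is trustworthy.
        self.n += 1
        self.mean += (value - self.mean) / self.n

model = OnlineThreshold(initial_mean=100.0, tolerance=10.0)
print(model.is_normal(125.0))   # False: clearly anomalous at first

# Poisoning: each injected value sits just inside the tolerance band,
# so it is accepted as 'experience' and nudges 'normal' upward.
for _ in range(50):
    model.learn(model.mean + 9.0)

print(model.is_normal(125.0))   # True: the attack now passes as normal
print(model.is_normal(100.0))   # False: the honest baseline is rejected
```

The ‘robust detection and removal mechanisms’ mentioned above amount to vetting training inputs before they become memory, for instance by limiting how far any single batch of ‘experience’ is allowed to move the model.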
Privacy risks – In an increasingly big-data-driven world, where citizens are growing more conscious of the value of their personal data, and where most major countries have started recognizing the Right to Privacy as a fundamental human right, cloud-based AI applications can seriously compromise the privacy rights of patients and the public at large, creating legal issues in the future. AI would also hand governments huge surveillance capabilities over their citizens which, history shows, are almost always misused by those in power. Appropriate comprehensive legislation protecting data and personal privacy can help check such adverse ends, but hasn’t become a reality yet in much of the world.
In fact, most governments around the world currently seem to be doing their best to avoid making such laws, or to frame privacy laws that are practically useless because of non-obvious loopholes.
One major concern being raised around the world about AI is the job loss it can potentially cause. A sizeable section of experts believes that AI will eventually replace humans in most routine jobs involving monotonous work, leading to large-scale unemployment. Some leading figures like Stephen Hawking and Elon Musk have even forecast a “SkyNet-like” apocalyptic situation if AI is used excessively and without control. Would such a consequence really occur? Only time can tell; but we can take some cues from similar events that occurred over the last two centuries:
1. when the manufacturing sector was first mechanized
2. when semiconductors replaced vacuum tubes in electronic machines
3. when banking, finance and telecom were first computerized
The history of all these events shows that there was no significant net job loss in the end; rather, a new skill replaced an old one in the industry, and people who had the new skill replaced those who had only the old one. So AI training/AI programming could prove the next upgrade that workers in the affected positions would want to start acquiring for their resumes right now, just as bank accountants learnt computer-based accounting software to save their jobs after computerization. A notable example is Zhejiang in China, where the government is already charting plans to train and recruit more than a million workers in AI to create an AI Industry Hub. Governments and corporations the world over need to start making their own plans if they are to retain a competitive edge without labour problems in the near future.
So who would be the first in the line of fire when AI strikes out jobs?
A simple test offered by AI experts to answer this question is:
“If you can COMPLETELY and OBJECTIVELY describe your job in words, you’ll most likely lose it to AI.”
As a generalized forecast, medical coders, billing executives, pathology/radiology technicians and reception staff could be the first people to go out of jobs because of AI.
But will AI replace doctors too? Likely not. Doctors, being at the apex of the healthcare hierarchy and exercising a great deal of subjective judgement in their work, are unlikely to be affected much for some decades yet.
How can human workers cope / compete with AI?
Human workers can best capitalize on AI’s key limitation: it needs to ‘learn’ things first, and can essentially think only on the basis of what it has learnt. The ability to formulate innovative solutions and groundbreaking methods is therefore still largely a human domain. An AI program monitoring adverse effects in a clinical trial can handle all known side-effects efficiently, but may not be able to judge correctly a completely new adverse effect that arises out of nowhere. Again, AI first needs to be trained before it can work, so AI-training jobs will certainly open up for experienced workers during the first decade of the AI surge; though these human trainers would, in effect, be digging the graves of their own jobs and those of millions of others.
‘Innovation’, in general, is considerably out of the reach of AI as of today (with the caveat that it may yet emerge once machine learning becomes sufficiently advanced and ‘deep’, as the industry jargon goes). Till then, the ability to think ‘out of the box’ will remain the biggest USP of a human workforce against AI over the next few decades.
As with any efficiency measure, AI holds substantial benefits for the public health sector, especially so in the developing countries. Its gradual adoption remains a future certainty; but the change needs to be managed incrementally using a balanced combination of technical, legal and socio-economic measures for it to succeed without harming any stakeholders.
About the Author –
Dr. Harshad Rajandekar is a practising physician and healthcare management consultant based in Nasik, India. He takes a keen interest in analyzing the latest trends of technological advancement in the healthcare industry, and their impact on the commercial, socio-economic, legal, ethical, environmental and political landscape of the future. More about him and his contact information is available on his LinkedIn profile.