As technology advances, Artificial Intelligence (AI) will play an ever greater role in tasks, both simple and complex, that humans previously performed. The healthcare sector is no exception, with Clinical Decision Support (CDS) software unlocking the potential for hospitals to assess individual diagnoses while cutting costs. With the adoption of this technology still in its early stages both locally and internationally, a pressing question arises: who is responsible when a doctor, whether merely aided by or fully reliant on a CDS system, diagnoses a patient's disease wrongly?
Patients often look to the law for an objective answer to this question. However, legislation specifically governing AI technology has yet to be enacted in Singapore. Manufacturers of AI technology, and organisations seeking to adopt it, must therefore fall back on existing national or industry-level rules. Similarly, patients must rely on existing medical negligence laws when seeking legal recourse for a negligent diagnosis.
But are existing laws able to allocate responsibility for a negligent diagnosis appropriately among the many stakeholders involved? AI poses several new legal and corporate challenges. For instance, disputes might arise from the autonomous nature of AI: how should the technology, and by extension its manufacturers (perhaps through some form of vicarious liability), share responsibility for wrongful or negligent diagnoses?
While there may not yet be a definitive answer to this question, this article examines the arguments and challenges that AI introduces to the area of medical malpractice.
How Does CDS Technology Work?
While a medical student must go through years of school and training to accumulate the knowledge and experience needed to diagnose and treat patients, CDS systems condense this process into a matter of moments. CDS systems are often Big Data systems, meaning that information such as a patient's symptoms, lab reports, and family history can be reviewed against thousands of other profiles and diagnoses[1].
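To make the idea concrete, here is a deliberately simplified sketch, in Python, of the kind of pattern-matching described above. Every name and data point in it is hypothetical, and real CDS systems are vastly more sophisticated; it illustrates the concept only, not any vendor's actual implementation.

```python
# A toy sketch (hypothetical data, invented names) of the pattern-matching
# idea behind a CDS system: a new patient's symptoms are compared against
# historical, already-diagnosed records, and the closest matches inform a
# suggestion. Real systems are vastly more sophisticated.
from collections import Counter

# Hypothetical historical records: (symptom set, confirmed diagnosis)
HISTORY = [
    ({"fever", "cough", "fatigue"}, "influenza"),
    ({"fever", "cough", "loss of smell"}, "covid-19"),
    ({"headache", "nausea", "light sensitivity"}, "migraine"),
    ({"fever", "rash", "joint pain"}, "dengue"),
]

def suggest_diagnosis(symptoms, k=3):
    """Rank past cases by symptom overlap (Jaccard similarity) and
    return the most common diagnosis among the top-k matches."""
    ranked = sorted(
        HISTORY,
        key=lambda record: len(symptoms & record[0]) / len(symptoms | record[0]),
        reverse=True,
    )
    top_diagnoses = [diagnosis for _, diagnosis in ranked[:k]]
    return Counter(top_diagnoses).most_common(1)[0][0]

print(suggest_diagnosis({"fever", "cough", "fatigue"}))  # -> "influenza"
```

Even this toy version surfaces the legal crux: the "diagnosis" is simply the majority vote of whatever historical records happen to sit in the database.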
Products like IBM's Watson CDS platform are capable of understanding doctors' inputs in natural language, cross-referencing keywords in that input with medical literature to reach a conclusion[2]. Implementing such technology often takes time, owing to patient privacy concerns, resistance from doctors, and the organisational restructuring needed to commit to and streamline new work processes.
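In its crudest form, that cross-referencing step can be pictured as matching terms in a doctor's free-text note against an index distilled from medical literature. The toy sketch below assumes an entirely invented index and invented conditions; it illustrates the idea only, and is not how Watson actually works.

```python
import re
from collections import defaultdict

# A hypothetical, tiny "literature index" mapping medical terms to the
# conditions that the literature associates them with. Entirely invented.
LITERATURE_INDEX = {
    "fever": ["influenza", "dengue"],
    "rash": ["dengue", "measles"],
    "joint pain": ["dengue", "arthritis"],
}

def rank_conditions(note):
    """Find known terms in a free-text clinical note and tally which
    conditions the matched terms point to, most-supported first."""
    votes = defaultdict(int)
    for term, conditions in LITERATURE_INDEX.items():
        if re.search(rf"\b{re.escape(term)}\b", note, re.IGNORECASE):
            for condition in conditions:
                votes[condition] += 1
    return sorted(votes.items(), key=lambda item: item[1], reverse=True)

note = "Patient presents with high fever, widespread rash and joint pain."
print(rank_conditions(note))  # dengue receives the most supporting terms
```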
However, problems may still arise from Big Data systems like CDS software. Reports of “racist AI” have already surfaced[3], where a Big Data system treats prejudiced human judgements in its training data like any other factual trend. This not only highlights the likelihood of AI making systemic errors, but also raises the question of who ought to be responsible for such errors: the hospital that supplied the data, the AI's manufacturer, or the doctor responsible for the patient?
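The mechanism behind such systemic errors can be shown in miniature. In the hypothetical sketch below, one patient group was historically under-diagnosed; a naive model that merely mirrors the frequencies in that data reproduces the bias as though it were a medical fact.

```python
# A minimal sketch of how a data-driven system absorbs bias: if historical
# decisions were skewed against one group, a model that simply mirrors the
# frequencies in that data reproduces the skew as if it were a clinical
# fact. All figures here are hypothetical.
from collections import defaultdict

# Hypothetical past records: (patient group, was the condition flagged?)
# Suppose group "B" was historically under-diagnosed, not less ill.
records = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def learned_flag_rates(data):
    """'Learn' a flag rate per group by counting, exactly as a naive
    frequency-based model would."""
    totals, flags = defaultdict(int), defaultdict(int)
    for group, flagged in data:
        totals[group] += 1
        flags[group] += flagged
    return {group: flags[group] / totals[group] for group in totals}

# The historical bias survives "training" untouched: {'A': 0.8, 'B': 0.3}
print(learned_flag_rates(records))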
How is AI Currently Regulated?
We can look to the steps currently being taken to regulate AI usage to understand how liability might be allocated in the future. How both regulation and liability are handled will directly affect the pace of AI development and adoption, which in turn influences technological and economic growth. This must be balanced against maintaining a safe and reliable environment, a concern that is especially acute in the healthcare sector.
So how are CDS tools regulated and utilised at present?
- All medical devices are strictly regulated by the Health Sciences Authority (HSA). Diagnostic tools are no exception, and must be registered with the Authority before being sold to healthcare practitioners. Product safety and margins of error must be declared at the application stage[4], and a comprehensive system exists for reporting potential “harm to users, nonconformity to quality, safety, and performance requirements”[5]. The HSA will thus take action to rectify a CDS system's supply or use if it is reported to have contributed to a wrongful or negligent diagnosis. Such recourse would be separate from a patient's medical malpractice claim or a hospital's potential legal claim, the latter being less likely given the error-reporting avenues the HSA has provided.
- The Personal Data Protection Commission (PDPC) has proposed a Model AI Governance Framework, which recommends corporate governance structures for firms looking to adopt AI in their operations. Without sweeping legislation on the use of AI, this emphasis on a firm-specific approach to AI management highlights the possibility for firms to resolve disputes internally. It also points to the differing approaches hospitals or firms may take to using AI, and the agreements they may reach with AI manufacturers on liability in the event of a deficient product.
The information provided by the HSA, and the AI governance structures within hospitals, will inform investigations into liability in the event of a misdiagnosis. Given how much hospitals vary in the extent of their AI usage, it is likely that a complex, differentiated approach will be needed to apportion liability between manufacturers and hospitals.
Medical Negligence and AI
It is undeniable that the individual doctor or hospital should be held responsible to some degree in the event of a wrongful or negligent diagnosis. But when AI plays a significant role in that diagnosis, should health practitioners be held less accountable, or found less negligent?
In medical negligence cases, proving negligence involves the Bolam test[6]. The test requires doctors to show that they have considered all the relevant options and risks, and that their decision is defensible, in the sense that it can hold up against expert opinion.
AI complicates the Bolam test by introducing another opinion beyond the doctor's professional one. Hospitals will often set standardised protocols for the use of the CDS system, which would likely require doctors to take the AI's autonomous judgement into consideration before diagnosing the patient.
Output from CDS systems is based on medical literature, hospital records, and patient history. However, it may be difficult to obtain a fully articulated justification for a system's decisions when much of the trend-spotting and calculation is performed within its own programming. Even if an explanation were generated, it would rest on sensitive patient records or on previously contentious human judgement (as mentioned above).
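Even the crude explanations that can be extracted from such systems tend to be sensitivity readings rather than reasoned justifications. The hypothetical sketch below probes an opaque scoring function by removing one input at a time; the numbers it prints say how much each symptom moved the score, but not why.

```python
# A crude post-hoc probe of a black-box scoring function: remove one input
# at a time and watch the output move. This yields a rough sensitivity
# reading, not the articulated clinical justification a court might expect.
# suggest_score() and its weights are hypothetical stand-ins.
def suggest_score(symptoms):
    """Stand-in for an opaque CDS scoring function."""
    hidden_weights = {"fever": 0.5, "rash": 0.3, "fatigue": 0.1}
    return sum(hidden_weights.get(s, 0.0) for s in symptoms)

symptoms = {"fever", "rash", "fatigue"}
baseline = suggest_score(symptoms)
for s in sorted(symptoms):
    delta = baseline - suggest_score(symptoms - {s})
    print(f"removing {s!r} changes the score by {delta:+.2f}")
```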
An error by the AI is thus not the same as a machine's physical defect or a mechanical accident. The AI only provides information; it is not solely responsible for the diagnosis or treatment. It is doctors and hospitals who retain the autonomy to decide whether or not to follow the AI's recommendation. Each wrongful or negligent diagnosis involving an AI therefore raises more questions than a conventional one:
Can doctors be held responsible for a wrongful or negligent diagnosis if they recognised the merits of the AI's reasoning and agreed with its diagnosis? If they were simply following hospital protocol in deferring to the AI's decision, save in exceptional circumstances, can their hospital be blamed instead? Can doctors in turn blame the hospital for limiting or overriding their professional judgement in favour of the AI's?
As mentioned before, assigning full responsibility to a single party would stifle the progress of AI development in the healthcare sector. Hospitals and doctors would become reluctant to adopt AI into their regular practice, which would be detrimental to both progress and productivity. Given the difficulty of this balance, it is no wonder that no complete legislation or set of rules has been firmed up to guide decisions in such cases.
Who is Responsible?
It is clear that, from the manufacturing of the AI to its use in hospitals, each stakeholder in the process may share some responsibility for a misdiagnosis. Currently, the Singapore government's approach has been to reinforce the importance of AI governance frameworks within firms and hospitals to anticipate or mitigate such problems. There are also established regulatory bodies that will likely clarify and mandate safety standards for CDS systems.
If a lawsuit were to arise, patients should understand that there is no hard-and-fast rule for assigning liability among the parties involved. Their individual cases will likely be examined closely to determine whether the doctor can be absolved of any responsibility for the misdiagnosis. This would likely be the case if neither the AI nor its manufacturer can produce a justification for its decision, or if the hospital's AI governance framework is underdeveloped. In such cases, liability may be shared with, or shifted to, these other parties.
Medical liability in AI is certainly an area for everyone to take note of. As the Singaporean healthcare system expands to serve our ageing population, the adoption of AI may significantly improve the quality and efficiency of our hospitals. However, without a proper legal understanding of AI and its relationship to medical malpractice, we will continue to meet obstacles, and there is a long road to travel before the adoption of this new generation of diagnostic technology is perfected.
References
[1] https://www.dicardiology.com/article/advances-clinical-decision-support-software
[2] https://www.ibm.com/developerworks/library/os-ind-watson/index.html
[6] http://www.smj.org.sg/sites/default/files/4301/4301l1.pdf
Have a question or need legal advice?
If you have a legal question concerning medical negligence, you can request a quote with Pratap Kishan or other lawyers. With Quick Consult, you can check out in minutes, and for a transparent, flat fee from S$49, a lawyer will call you back within 1-2 days to answer your questions and give you legal advice.
This article is written by Pratap Kishan from Ho Wong Law Practice and edited by Justin Lim from Asia Law Network.
This article does not constitute legal advice or a legal opinion on any matter discussed and, accordingly, it should not be relied upon. It should not be regarded as a comprehensive statement of the law and practice in this area. If you require any advice or information, please speak to a practicing lawyer in your jurisdiction. No individual who is a member, partner, shareholder or consultant of, in or to any constituent part of Interstellar Group Pte. Ltd. accepts or assumes responsibility, or has any liability, to any person in respect of this article.