The Clinician and Dataset Shift in Artificial Intelligence

Are clinicians adequately prepared to identify circumstances in which AI systems fail to perform their intended function reliably?


NEJM
Letter to the Editor
July 15, 2021


Artificial intelligence (AI) systems are now regularly being used in medical settings,1 although regulatory oversight is inconsistent and undeveloped.2,3 


Safe deployment of clinical AI requires informed clinician-users, who are generally responsible for identifying and reporting emerging problems. 

Clinicians may also serve as administrators in governing the use of clinical AI. 

A natural question follows: are clinicians adequately prepared to identify circumstances in which AI systems fail to perform their intended function reliably?


A major driver of AI system malfunction is known as “dataset shift.”4,5

Most clinical AI systems today use machine learning: algorithms that apply statistical methods to learn key patterns from clinical data.

Dataset shift occurs when a machine-learning system underperforms because of a mismatch between the data set with which it was developed and the data on which it is deployed.4 

For example, the University of Michigan Hospital implemented the widely used sepsis-alerting model developed by Epic Systems; in April 2020, the model had to be deactivated because of spurious alerting owing to changes in patients’ demographic characteristics associated with the coronavirus disease 2019 pandemic. 

This was a case in which dataset shift fundamentally altered the relationship between fevers and bacterial sepsis, leading the hospital’s clinical AI governing committee (which one of the authors of this letter chairs) to decommission the model.

This is an extreme example; many causes of dataset shift are more subtle. 

In Table 1, we present common causes of dataset shift, which we group into changes in technology (e.g., software vendors), changes in population and setting (e.g., new demographics), and changes in behavior (e.g., new reimbursement incentives); the list is not meant to be exhaustive.



Successful recognition and mitigation of dataset shift require both vigilant clinicians and sound technical oversight through AI governance teams.4,5

When using an AI system, clinicians should note misalignment between the predictions of the model and their own clinical judgment, as in the sepsis example above. 

Clinicians who use AI systems must frequently consider whether relevant aspects of their own clinical practice are atypical or have recently changed. 

For their part, AI governance teams must make it easy for clinicians to report concerns about the function of AI systems, and must close the loop by ensuring that the reporting clinician knows the concern has been registered and, where appropriate, that mitigating action has been taken.

Teams must also establish AI monitoring and updating protocols that integrate technical solutions and clinical voices into an AI safety checklist, as shown in Table 1.
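The letter does not specify what such monitoring protocols contain; as a minimal illustrative sketch (not the authors' method, and far simpler than a production system), a monitoring protocol might flag dataset shift when a model input, such as body temperature, drifts from the distribution observed during model development:

```python
import statistics

def drift_alert(baseline, current, threshold=3.0):
    """Flag possible dataset shift when the mean of a model input in the
    deployment stream drifts more than `threshold` standard errors from
    the development-time baseline (a simple z-test on the mean)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    standard_error = sigma / len(current) ** 0.5
    z = abs(statistics.mean(current) - mu) / standard_error
    return z > threshold

# Hypothetical temperatures (deg C): development baseline vs. a deployment
# cohort with a sudden excess of fevers (as in the sepsis example above).
baseline = [37.0, 37.2, 36.9, 37.1, 37.0, 36.8, 37.3, 37.1]
feverish = [38.4, 38.9, 38.6, 39.1, 38.7, 38.8, 39.0, 38.5]

drift_alert(baseline, baseline[:4])  # stable input -> False
drift_alert(baseline, feverish)      # shifted cohort -> True
```

A real protocol would monitor many inputs and the model's outputs jointly, use more robust tests, and route alerts to the governance team rather than act automatically.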

Table 1.

Overview of Our Recommended Approach to Recognizing and Mitigating Dataset Shift.


About the authors

Samuel G. Finlayson, Ph.D.
Harvard Medical School, Boston, MA

Adarsh Subbaswamy, B.S.
Johns Hopkins University, Baltimore, MD

Karandeep Singh, M.D., M.M.Sc.
University of Michigan Medical School, Ann Arbor, MI

John Bowers, B.A.
Yale Law School, New Haven, CT

Annabel Kupke, B.A., B.S.
Boston University School of Law, Boston, MA

Jonathan Zittrain, J.D., M.P.A.
Harvard Law School, Cambridge, MA

Isaac S. Kohane, M.D., Ph.D.
Harvard Medical School, Boston, MA

Suchi Saria, Ph.D.
Bayesian Health, New York, NY
ssaria@bayesianhealth.com


Supported by grants from the Food and Drug Administration (5 U01 FD005942–05), the Sloan Foundation (FG-2018–10877), the National Science Foundation (1840088), and the National Institute of General Medical Sciences (T32GM007753).

Disclosure forms provided by the authors are available with the full text of this letter at NEJM.org.

Dr. Finlayson and Mr. Subbaswamy contributed equally to this letter.


Originally published at https://www.nejm.org.
