Preparing Clinicians for a Clinical World Influenced by Artificial Intelligence


JAMA Network
Cornelius A. James, MD1; Robert M. Wachter, MD2; James O. Woolliscroft, MD3
March 21, 2022.

Executive Summary


by Joaquim Cardoso, MSc
Digital Health and AI Institute
Founder, CEO and Chief Strategy Officer (CSO)

March 31, 2022


Introduction

  • Artificial intelligence (AI) and machine learning (ML) are poised to transform the way health care is delivered.

  • While AI has been critiqued as being in its “hype cycle” … it is likely that every medical specialty will be influenced by AI, and some will be transformed.

  • As AI takes on a larger role in clinical practice, it is clear that multiple levels of oversight are needed.

  • However, even with appropriate outside oversight, the importance of clinician review and trust of these technologies cannot be overstated.

Welcome Skepticism, Avoid Cynicism

  • As with all new technologies introduced into clinical practice, skepticism about AI is appropriate while awaiting rigorous evidence of consistent success and benefit in clinical settings.

  • While the unanticipated consequences are multifactorial in nature, one recurring theme has been a failure to carefully consider the effect of EHRs on end users. … it has been postulated that lack of early engagement of frontline clinicians adversely affected the implementation of EHRs.

  • AI represents entry into a new era of potentially practice-changing technology … Hundreds of AI-based start-ups have been founded, and the digital giants (such as Apple, Microsoft, Google, and Amazon) are investing heavily.

  • … equipping clinicians with the skills, resources, and support necessary to use AI-based technologies is now recognized as essential to successful deployment of AI in health care.

Transparency and Trust

  • While EHR adoption was driven by federal mandates and incentives, a similar scenario is unlikely for AI. Rather, its adoption is more likely to be governed by traditional return-on-investment considerations.

  • However, these considerations do not necessarily mean that government will not have a role …

  • Despite relatively weak evidence supporting the use of AI in routine clinical practice, AI models continue to be marketed and deployed. A recent example is the Epic Sepsis Model … that performed significantly worse in correctly identifying patients with early sepsis.

  • Studies like this highlight the need for rigorous reporting standards and review of AI products.

  • A robust federal approval process, such as the one used for pharmaceuticals, is necessary.

  • … standardized, reliable, evidence-based reporting guidelines for AI clinical trials and relevant studies used to evaluate the utility of AI technologies need to be developed.

Critical Appraisal Guidelines

  • Frontline clinicians must be comfortable appraising medical literature that relates to clinical AI.

  • An analogy is the evidence-based medicine (EBM) movement …

  • The development and promotion of appropriate user guides will help support this transformation.

Clinicians and Patients (Shared Decision-making)

  • As AI-based predictions and algorithms continue to inform medical decisions, patients and clinicians must rethink shared decision-making as decisions may well now involve a new member of the team — an AI-derived algorithm.

Conclusions

  • AI will soon become ubiquitous in health care.

  • Building on lessons learned as implementation strategies continue to be devised, it will be essential to consider the key role of clinicians as end users of AI-developed algorithms, processes, and risk predictors.

  • It is imperative that clinicians have the knowledge and skills to assess and determine the appropriate application of AI outputs, for their own clinical practice and for their patients.

  • Rather than replacing clinicians, these new technologies will create new roles and responsibilities for them.

ORIGINAL PUBLICATION (full version)

Preparing Clinicians for a Clinical World Influenced by Artificial Intelligence


JAMA Network
Cornelius A. James, MD1; Robert M. Wachter, MD2; James O. Woolliscroft, MD3
March 21, 2022.


Introduction


Artificial intelligence (AI) and machine learning (ML) are poised to transform the way health care is delivered.
 

AI is the use of computers to simulate intelligent tasks typically performed by humans. 

ML is a domain of AI that involves computers automatically learning from data without a priori programming. 
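As a deliberately simple illustration of that definition, the sketch below fits a small classifier from labeled examples rather than from hand-written rules; the features, toy data, and choice of library are illustrative assumptions, not anything drawn from the article.

```python
# Minimal sketch of the ML idea: the decision rule is learned from labeled data,
# not hand-coded as explicit "if temperature > X" logic. All values here are toy data.
from sklearn.linear_model import LogisticRegression

# Each row: [temperature_C, heart_rate]; label: 1 = condition present, 0 = absent (hypothetical)
X = [[36.8, 72], [38.9, 110], [37.0, 80], [39.4, 121], [36.5, 66], [38.6, 105]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression(max_iter=1000).fit(X, y)   # the "learning" step
print(model.predict([[38.2, 102]]))                    # class prediction for a new, unseen case
print(model.predict_proba([[38.2, 102]]))              # associated probability estimate
```

The point of the toy example is only that the mapping from inputs to outputs is estimated from data, which is what distinguishes ML from conventionally programmed clinical software.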

While AI has been critiqued as being in its “hype cycle” (throughout this article, AI will be used as shorthand for AI and ML), over time, it is likely that every medical specialty will be influenced by AI, and some will be transformed.1 


As AI takes on a larger role in clinical practice, it is clear that multiple levels of oversight are needed. 

However, even with appropriate outside oversight, the importance of clinician review and trust of these technologies cannot be overstated. 

This Viewpoint outlines steps that could allow clinicians to be engaged and invested participants in health care that includes AI. 


Welcome Skepticism, Avoid Cynicism


Much has been written about the potential promise and peril of AI in health care, with both media and academia contributing to the hype, including predicting that certain specialties will be replaced by machines and a developer of ML technology saying, “We should stop training radiologists.”2 

As with all new technologies introduced into clinical practice, skepticism about AI is appropriate while awaiting rigorous evidence of consistent success and benefit in clinical settings.


AI is not the first application of information technology that promised more efficient and effective health care. 

Some lessons can be drawn from the earlier introduction of electronic health records (EHRs). 

Partly driven by federal incentives, between 2008 and 2017, the US went from fewer than 10% of its hospitals having a functioning EHR to fewer than 10% not having EHR systems. 

EHRs have delivered on some promises (for example, improved patient safety, particularly around medication use), but concerns have been raised regarding the effects of EHRs on clinician well-being and professional satisfaction and the patient-physician relationship.3 

While the unanticipated consequences are multifactorial in nature, one recurring theme has been a failure to carefully consider the effect of EHRs on end users.

In addition, it has been postulated that lack of early engagement of frontline clinicians adversely affected the implementation of EHRs.


A decade after the transformation of health care from a pen-and-paper–based record system to an EHR-based enterprise, AI represents entry into a new era of potentially practice-changing technology.

Billions of dollars are being invested in health care AI and related research. 

Hundreds of AI-based start-ups have been founded, and the digital giants (such as Apple, Microsoft, Google, and Amazon) are investing heavily. 

Anticipating that AI is entering the health care mainstream, the National Academy of Medicine report Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril recommended engaging and educating the community regarding data science and AI.4


Importantly, equipping clinicians with the skills, resources, and support necessary to use AI-based technologies is now recognized as essential to successful deployment of AI in health care.

To do so, clinicians need to have a realistic understanding of the potential uses and limitations of medical AI applications. Overlooking this fact risks clinician cynicism and suboptimal patient outcomes.


Transparency and Trust


While EHR adoption was driven by federal mandates and incentives, a similar scenario is unlikely for AI. 

Rather, its adoption is more likely to be governed by traditional return-on-investment considerations. 

However, these considerations do not necessarily mean that government will not have a role: 

  • potential regulatory issues, 
  • legal liability, and 
  • social biases will all shape the adoption of AI, 

and policy makers are also likely to become involved with these matters.

For example, given the far-reaching potential implications and potential for widescale harm caused by AI algorithms, guidelines, policies, and laws at international, federal, and state levels are necessary.


Despite relatively weak evidence supporting the use of AI in routine clinical practice, AI models continue to be marketed and deployed.

A recent example is the Epic Sepsis Model. While this model was widely implemented in hundreds of US hospitals, a recent study showed that it performed significantly worse in correctly identifying patients with early sepsis and improving patient outcomes in a clinical setting compared with performance observed during development of the model.5
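To make that kind of local scrutiny concrete, the sketch below shows one way an implementation team might re-check a deployed risk model against its own chart-reviewed labels, comparing locally observed discrimination and alert burden with the figures reported during development. This is a minimal, hypothetical illustration; the data, alert threshold, and metric choices are assumptions and do not reproduce the Epic Sepsis Model or the methods of the cited study.

```python
# Minimal sketch: local (external) validation of a deployed risk model.
# Hypothetical data and threshold; not the Epic Sepsis Model's code or evaluation protocol.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                               # chart-reviewed labels (toy data)
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, 1000), 0, 1)   # model risk scores (toy data)

threshold = 0.6                                        # alert threshold (assumed)
y_pred = (y_score >= threshold).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                           # share of true cases that trigger an alert
ppv = tp / (tp + fp)                                   # share of alerts that are true cases
alerts_per_100 = 100 * (tp + fp) / len(y_true)         # alert burden placed on clinicians

print(f"AUROC: {roc_auc_score(y_true, y_score):.2f}")
print(f"Sensitivity: {sensitivity:.2f}  PPV: {ppv:.2f}  Alerts per 100 patients: {alerts_per_100:.1f}")
```

Whether locally measured sensitivity, positive predictive value, and alert volume match the performance reported by a model's developer is exactly the kind of question that rigorous reporting standards and ongoing institutional review are meant to answer.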


Studies like this5 highlight the need for rigorous reporting standards and review of AI products.

A robust federal approval process, such as the one used for pharmaceuticals, is necessary.

At the institutional level, a process analogous to the Clinical Laboratory Improvement Amendments certification for laboratory studies could help ensure that not only are the criteria for adoption and implementation of algorithms rigorous but also that ongoing assessments of their applicability and accuracy occur.


However, consensus as to the information required to make clinical implementation decisions is lacking. 

Therefore, standardized, reliable, evidence-based reporting guidelines for AI clinical trials and relevant studies used to evaluate the utility of AI technologies need to be developed. 

Absent such reporting standards, clinician trust in and appropriate use of AI-based technologies will be hindered.


Critical Appraisal Guidelines


Frontline clinicians must be comfortable appraising medical literature that relates to clinical AI. 

An analogy is the evidence-based medicine (EBM) movement, which created users’ guides equipping clinicians with the skills to read and appropriately apply research to patient care.


Building on the proven EBM framework, Liu and colleagues developed a users’ guide for evaluating articles that use ML.6 

Additional guides are needed to facilitate evaluation of studies that use AI to help answer questions related to prognosis, harm, therapy, and cost-effectiveness. 

As clinicians’ sophistication in evaluating the AI literature grows, it is likely there will be a concomitant increase in rigor and transparency of AI trial reporting. 

The development and promotion of appropriate user guides will help support this transformation.


Clinicians and Patients (Shared Decision-making)


At about the time that the EBM movement was being launched, a parallel movement began promoting shared decision-making between patients and clinicians. 

As AI-based predictions and algorithms continue to inform medical decisions, patients and clinicians must rethink shared decision-making as decisions may well now involve a new member of the team — an AI-derived algorithm. 

Ultimately, clinicians will bear much of the responsibility to successfully broker the triadic relationship between patients, the computer, and themselves.

Clinicians will need to explain the role that AI has in their reasoning and recommendations. 

Over time, this relationship is likely to change, with the possibility of some decisions being made directly by patients and families based on AI recommendations, bypassing the clinician. 

Navigating this transition — and finding the appropriate role for credentialed experts in it — will be a significant challenge in a health care system transformed by AI.


Conclusions


AI will soon become ubiquitous in health care. 

Building on lessons learned as implementation strategies continue to be devised, it will be essential to consider the key role of clinicians as end users of AI-developed algorithms, processes, and risk predictors. 

It is imperative that clinicians have the knowledge and skills to assess and determine the appropriate application of AI outputs, for their own clinical practice and for their patients. 

Rather than replacing clinicians, these new technologies will create new roles and responsibilities for them.


Table of Contents (TOC)

  • Introduction
  • Welcome Skepticism, Avoid Cynicism
  • Transparency and Trust
  • Critical Appraisal Guidelines
  • Clinicians and Patients (Shared Decision-making)
  • Conclusions

References

See the original publication


About the authors

Cornelius A. James, MD1; 
Robert M. Wachter, MD2; 
James O. Woolliscroft, MD3

1Departments of Internal Medicine and Pediatrics, 
University of Michigan, Ann Arbor

2Department of Internal Medicine, 
University of California, San Francisco

3Departments of Internal Medicine and Learning Health Sciences, 
University of Michigan, Ann Arbor

TAGS: AI Powered Health Care; Digital Health Systems; USA; EHR; AI Health Care Failure Cases; Sepsis; AI Guidelines

