What are the regulatory barriers for AI adoption in Health Care? (3/4)


This is an excerpt from the article "Why is AI adoption in health care lagging?", focusing on the topic above and preceded by an Executive Summary by the author of the blog.


Brookings
Avi Goldfarb and Florenta Teodoridis
March 9, 2022


Excerpt

by Joaquim Cardoso MSc.
The AI Health Care Unit 
@ The Digital Health Institute 
March 13, 2022


The opportunity for AI in Health Care

  • In 2019, 11% of American workers were employed in health care, and health care expenditures accounted for over 17% of gross domestic product.

  • If AI technologies have a similar impact on health care as on other industries, such as retail and financial services, then health care can become more effective and more efficient, improving the daily lives of millions of people.

  • However, despite the hype and potential, there has been little AI adoption in health care.

Four important barriers to adoption are:

  1. algorithmic limitations,
  2. data access limitations,
  3. regulatory barriers, and
  4. misaligned incentives.

The main regulatory barriers are described below.


Three types of regulations are particularly important.

  • First, privacy regulations can make it difficult to collect and pool health care data. With especially strong privacy concerns in health care, it may be too difficult to use real health data to train AI models as quickly or effectively as in other industries. 19

  • Second, the regulatory approval process for a new medical technology takes time, and the technology receives substantial scrutiny. Innovations can take years to navigate the approval process.

  • Third, liability concerns may also provide a barrier as health care providers may hesitate to adopt a new technology for fear of tort law implications. 20 Regulation in health care is, appropriately, more cautious than regulation in many other industries.

Complementary regulatory innovations could include changes to all three regulatory barriers: 

  • who owns and can use health care data, 
  • how AI medical devices and software are approved, and 
  • where the liability lies between medical providers and AI developers.

Policy implications


Before discussing potential policy solutions to each of these, it is important to acknowledge that slow adoption may not be due to a market failure.

The regulatory barriers have the most direct policy implications.

  • Innovation is needed in the approval process so that device makers and software developers have a well-established path to commercialization.

  • Innovation is needed to enable data sharing without threatening patient privacy.

  • Perhaps least controversially, clear rules on who is liable if something goes wrong would likely increase adoption. 22

Overall, relative to the level of hype, AI adoption has been slow in health care.

  • Policymakers can help generate useful adoption with some innovative approaches to privacy and the path to regulatory approval.

  • However, it might be the familiar tools that are most useful: clarify the rules, fund research, and enable competition.


ORIGINAL PUBLICATION (excerpt) 

3. Regulatory barriers


Some of the algorithmic and data issues derive from underlying regulatory barriers. 

Three types of regulations are particularly important. 

  • First, privacy regulations can make it difficult to collect and pool health care data. 
    With especially strong privacy concerns in health care, it may be too difficult to use real health data to train AI models as quickly or effectively as in other industries. 19
  • Second, the regulatory approval process for a new medical technology takes time, and the technology receives substantial scrutiny. Innovations can take years to navigate the approval process.
  • Third, liability concerns may also provide a barrier as health care providers may hesitate to adopt a new technology for fear of tort law implications. 20
    Regulation in health care is, appropriately, more cautious than regulation in many other industries. 

This suggests that reducing barriers to AI adoption in health care will require complementary innovation in regulation, ultimately allowing opportunities from AI to be realized without compromising patient rights or quality of care. 


Complementary regulatory innovations could include changes to all three regulatory barriers: who owns and can use health care data, how AI medical devices and software are approved, and where the liability lies between medical providers and AI developers.



Policy implications


AI has received a great deal of attention for its potential in health care.

At the same time, adoption has been slow compared to other industries, for reasons we have described: regulatory barriers, challenges in data collection, lack of trust in the algorithms, and a misalignment of incentives.

Before discussing potential policy solutions to each of these, it is important to acknowledge that slow adoption may not be due to a market failure.

AI adoption may be slow because it is not yet useful, or because it may not end up being as useful as we hope.

While our view is that AI has great potential in health care, it is still an open question.


The regulatory barriers have the most direct policy implications.


Innovation is needed in the approval process so that device makers and software developers have a well-established path to commercialization.

Innovation is needed to enable data sharing without threatening patient privacy.

Perhaps least controversially, clear rules on who is liable if something goes wrong would likely increase adoption. 22

If we believe AI adoption will improve health care productivity, then reducing these regulatory barriers will have value.



The policy implications related to challenges in data collection and the lack of trust in algorithms are more related to continued funding of research than new regulation.

Governments and nonprofits are already directing substantial research funds to these questions, particularly around lack of trust. In terms of misaligned incentives, complementary innovation in management processes is difficult to achieve through policy. Antitrust policy to ensure competition could help, as competition has been shown to improve management quality. Otherwise, there are few policy tools that could change these incentives. 23


Overall, relative to the level of hype, AI adoption has been slow in health care.

Policymakers can help generate useful adoption with some innovative approaches to privacy and the path to regulatory approval.

However, it might be the familiar tools that are most useful: clarify the rules, fund research, and enable competition.


About the authors


Avi Goldfarb
is a consultant with Goldfarb Analytics Corporation, which advises organizations on digital and AI strategy.

Florenta Teodoridis
is an Assistant Professor of Management and Organization at the USC Marshall School of Business.

The authors did not receive financial support from any firm or person for this article, or from any firm or person with a financial or political interest in this article. Other than the aforementioned, neither author is currently an officer, director, or board member of any organization with a financial or political interest in this article.


Originally published at https://www.brookings.edu on March 9, 2022.



References

See the original publication
