NEJM Catalyst
Christina Silcox, PhD
May 2, 2022
Summary
Researchers at the Duke-Margolis Center for Health Policy explored how health care systems choose which specific artificial intelligence tools they use for improving population and individual health.
Duke-Margolis’ Digital Health Policy Fellow describes best practices gleaned from this research that health systems should consider if they want to see successful implementation of AI tools at their organizations, along with return on investment.
Each individual health system is different from the next.
AI tools will most benefit systems that plan well for and have processes in place for choosing the right tool for their particular organization.
In the special artificial intelligence theme issue of NEJM Catalyst Innovations in Care Delivery, “How Health Systems Decide to Use Artificial Intelligence for Clinical Decision Support” explores how health systems decide which AI products to use. Speaking at the NEJM Catalyst “AI and Machine Learning for Health Care Delivery” event, senior author Christina Silcox, PhD, shares best practices for choosing health care AI tools.
The potential for AI in the health space is enormous, from population health to individual health, health system administration, and biomedical innovation.
Silcox and fellow researchers at the Duke-Margolis Center for Health Policy focused on how health systems choose which specific population and individual health tools to use.
In interviews with a range of health systems, they learned that these systems are still figuring it out.
“They want to be sure that these [AI] systems have a return on their investment,” Silcox says, whether that’s better health outcomes, lower costs, or improved efficiencies.
“Most systems are getting a lot of pitches from companies and don’t yet have standardized processes to evaluate tools and AI companies, and to select the best products for their priorities,” Silcox says.
This means many of the tools used come from existing vendor relationships, physician champions, products at conferences, or even media hype.
“This is starting to change, though.
About a third of the health systems we spoke with were developing processes for the evaluation and selection of AI. They are also hiring expert staff to help with this process.”
What factors did health systems consider as they built these processes, and what should others weigh for successful implementation?
- AI is a tool, not a solution itself
- Understand your workflows and resources to know if results will be actionable
- Consider how well the AI tool will work with your specific patient populations
- Know how your workforce will use the AI tool
- Calculate all costs and potential returns
1. AI is a tool, not a solution itself.
There has to be a priority problem for it to help solve.
Priorities will differ depending on who is selecting the tool.
For example, population health management tools are often a higher priority for overall health system management, whereas individual health tools are more likely to be chosen at the department level.
2. Understand your workflows and resources to know if results will be actionable.
“You can predict something will happen, but if you don’t have processes in place to prevent or mitigate the predicted event, it often won’t have much value,” says Silcox.
Action costs should be added to the cost of the tool when determining return on investment.
3. Consider how well the AI tool will work with your specific patient populations.
Patient populations vary, and a lot of off-the-shelf AI tools are trained on data from systems that may not have the same patient mix as yours, notes Silcox, or the same types of workflows, insurance processes, and so on.
“This has pushed some of the systems we spoke with into depending on homegrown solutions or partnering with commercial vendors to pilot and refine their systems on the hospital’s own data,” she says.
Bias is also a big concern. “Hospital systems have responsibility to know if these systems work as well for subpopulations within their patient base.”
4. Know how your workforce will use the AI tool.
Do tools that aid in decision-making work well for the smaller subset of patients where clinicians don’t already have the answer?
If the AI doesn’t perform well on that question or subset, clinicians will not trust the tool.
Consider which tools would actually make a difference and not just move the bottleneck.
5. Calculate all costs and potential returns.
That means not just the cost of the AI tool itself, but also integration with existing data systems, contract testing, user training, results tracking, workflow changes, and testing the system over time to prevent performance drift.
Some AI tools can be reimbursed directly, some AI processes can increase efficiencies such that hospitals can see more patients, and those that prevent or reduce future costs can also have significant returns.
And yet, Silcox cautions that some of those same tools could decrease revenue and therefore be seen as less attractive.
“However, it’s important to remember that overall improved patient care and outcomes have their own return, through customer and worker satisfaction, and also the reputation of the hospital system itself,” she says.
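The full-cost accounting in point 5 can be made concrete with a minimal sketch. The figures and the function below are hypothetical illustrations, not from the article; real inputs vary widely by health system, tool, and contract.

```python
# Illustrative ROI sketch for an AI tool, counting the full cost of
# acting on the tool's outputs, not just the tool itself.
# All figures are hypothetical assumptions for illustration only.

def ai_tool_net_return(license_cost, integration_cost, training_cost,
                       monitoring_cost, action_costs, annual_savings,
                       years=3):
    """Net return over a multi-year horizon.

    license_cost, monitoring_cost, action_costs, annual_savings are
    per-year; integration_cost and training_cost are one-time.
    """
    total_cost = (license_cost * years        # vendor licensing
                  + integration_cost          # data-system integration
                  + training_cost             # user training
                  + monitoring_cost * years   # tracking results / drift
                  + action_costs * years)     # acting on predictions
    total_return = annual_savings * years
    return total_return - total_cost

# Hypothetical example: a readmission-risk tool over 3 years
net = ai_tool_net_return(license_cost=100_000, integration_cost=150_000,
                         training_cost=30_000, monitoring_cost=20_000,
                         action_costs=60_000, annual_savings=250_000,
                         years=3)
print(net)  # → 30000
```

The point of the sketch is the cost structure, not the numbers: omitting the recurring action and monitoring costs would flip many marginal cases to look profitable when they are not.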
Originally published at https://catalyst.nejm.org on May 2, 2022.
From the NEJM Catalyst event AI and Machine Learning for Health Care Delivery, sponsored by Advisory Board, March 24, 2022.