Popular sepsis prediction tool less accurate than claimed

The algorithm is currently implemented at hundreds of U.S. hospitals.

University of Michigan — Lab Blog
Kelly Malcom, June 21, 2021

One in three patients who die in a hospital has sepsis, a severe inflammatory response to an infection marked by organ dysfunction, according to the Centers for Disease Control and Prevention. This heavy toll makes predicting which patients are at risk for developing the devastating condition a top priority for clinicians.

Additional motivation to identify and treat sepsis cases lies in the fact that sepsis serves as a system-level quality measure, with hospitals judged on their sepsis rates by both the federal Department of Health and Human Services and the CDC. Complicating efforts to reduce sepsis is how difficult it can be to diagnose, both accurately and quickly.

“Sepsis is something we can know occurs with certainty after the fact, but when it’s unfolding, it’s often unclear whether a patient has sepsis or not,” said Karandeep Singh, MD, MMSc, assistant professor of Learning Health Sciences and Internal Medicine at Michigan Medicine. “But the cornerstone of sepsis treatment is timely recognition and timely therapy.”

Singh and his colleagues recently evaluated a sepsis prediction model developed by Epic Systems, a healthcare software vendor whose systems are used by 56% of hospitals and health systems in the U.S. In a new paper published in JAMA Internal Medicine, they report that the prediction tool performs much worse than its information sheet indicates, correctly ranking patients by their risk of sepsis just 63% of the time.

The discrepancy lies in how the model was developed, Singh explained. The first problem, he said, is that the model incorporates data from all cases billed as sepsis, which is problematic because “people bill differently across services and hospitals and it’s been well recognized that trying to figure out who has sepsis based on billing codes alone is probably not accurate.” Second, in the model’s development, the onset of sepsis was defined as the time the clinician intervened — for example, by ordering antibiotics or lab work.

“In essence, they developed the model to predict sepsis that was recognized by clinicians at the time it was recognized by clinicians. However, we know that clinicians miss sepsis.”

To evaluate the model using a definition of sepsis more closely aligned with the one used by Medicare and the CDC, the research team examined close to 40,000 hospitalizations at Michigan Medicine from 2018–2019, disregarding scores the model generated after a clinician had already intervened. Doing so brought the tool’s area under the curve down from the 76–83% reported by Epic Systems to 63% in the validation study.
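To illustrate the evaluation described above, here is a minimal Python sketch of a hospitalization-level AUROC computation that excludes scores generated after a clinician has intervened. The column names (encounter_id, score, score_time, intervention_time, has_sepsis) are hypothetical stand-ins; this shows the general approach, not the study team’s actual code.

```python
# Minimal sketch of a hospitalization-level AUROC evaluation that ignores
# scores produced after a clinician has already intervened. All column
# names are hypothetical; this is not the study team's actual code.
import pandas as pd
from sklearn.metrics import roc_auc_score

def pre_intervention_auroc(scores: pd.DataFrame) -> float:
    # Keep only scores generated before any clinician intervention
    # (e.g., an antibiotic or lab order); NaT means no intervention occurred.
    pre = scores[
        scores["intervention_time"].isna()
        | (scores["score_time"] < scores["intervention_time"])
    ]
    # Summarize each hospitalization by its maximum pre-intervention score,
    # paired with an encounter-level sepsis label (CDC-style definition).
    per_encounter = pre.groupby("encounter_id").agg(
        max_score=("score", "max"),
        has_sepsis=("has_sepsis", "max"),
    )
    # AUROC: the probability that a randomly chosen sepsis hospitalization
    # is scored higher than a randomly chosen non-sepsis hospitalization.
    return roc_auc_score(per_encounter["has_sepsis"], per_encounter["max_score"])
```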

What’s more, the model sent out an alert on nearly 1 in 5 of all patients, and most of those patients did not actually have sepsis. “When it alerts, the chance that a patient actually has sepsis during the remainder of their hospital stay is 12%. What that essentially means is that even if you only evaluated people the first time the system alerted, you’d still need to evaluate 8 people to find one case of sepsis,” said Singh.
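The “8 people to find one case” figure follows directly from the 12% positive predictive value quoted above, as this back-of-the-envelope check shows:

```python
# Back-of-the-envelope arithmetic using the figures quoted in the article.
ppv = 0.12                       # chance an alerted patient develops sepsis
evaluations_per_case = 1 / ppv   # about 8.3 patients evaluated per true case
print(f"~{evaluations_per_case:.0f} evaluations per sepsis case found")
```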

Prediction tools come with a trade-off, noted Singh. “The tradeoff is basically between generating alerts on a patient who turned out not to have the predicted condition or not generating alerts on patients who do.” But in this instance, if a health system is using the Epic sepsis model to improve its quality measures, “it’s not really going to be able to do that.”
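To make that trade-off concrete, the sketch below sweeps the alert threshold on made-up data, showing how a lower cutoff catches more sepsis cases (higher sensitivity) at the cost of a lower hit rate per alert (lower PPV), and vice versa. The prevalence and score distributions are illustrative assumptions and stand in for no particular model.

```python
# Illustrative threshold sweep on made-up data: lowering the alert cutoff
# raises sensitivity (fewer missed cases) but lowers PPV (more false alarms).
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.random(10_000) < 0.07             # ~7% sepsis prevalence (assumed)
# Assumed score distributions: sepsis cases tend to score higher.
y_score = np.where(y_true, rng.beta(3, 5, 10_000), rng.beta(2, 8, 10_000))

for threshold in (0.2, 0.4, 0.6):
    alerts = y_score >= threshold
    tp = (alerts & y_true).sum()
    sensitivity = tp / max(y_true.sum(), 1)    # share of sepsis cases alerted on
    ppv = tp / max(alerts.sum(), 1)            # share of alerts that are correct
    print(f"threshold={threshold:.1f}  sensitivity={sensitivity:.2f}  PPV={ppv:.2f}")
```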

The results of the study point to a need for more regulatory oversight and governance of clinical software tools, said Singh, as well as a need for more open-source models that can be easily validated by outside groups and turned off if they turn out not to be useful.

He added that Epic isn’t wrong in its analysis. “We differ in our definition of the onset and timing of sepsis. In our view, their definition of sepsis based on billing codes alone is imprecise and not the one that is clinically meaningful to a health system or to patients.”

Additional authors include Andrew Wong, M.D.; Erkin Otles, MEng; John P. Donnelly, Ph.D.; Andrew Krumm, Ph.D.; Jeffrey McCullough, Ph.D.; Olivia DeTroyer-Cooley, B.S.E.; Justin Pestrue, MEcon; Marie Phillips, B.A.; Judy Konye, M.S.N., R.N.; Carleen Penoza, MHSA, R.N.; and Muhammad Ghous, MBBS.

Paper cited: “External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients,” JAMA Internal Medicine. DOI: 10.1001/jamainternmed.2021.2626

Originally published at https://labblog.uofmhealth.org/ under the title “Popular sepsis prediction tool less accurate than claimed.”
