Prospective Real-World Study Finds Epic Sepsis Model Speeds Antibiotics

The study found that patients whose physicians received artificial intelligence-based alerts also spent fewer days in the hospital.

A new report is bolstering the case for the use of an algorithm-based early warning system designed to help healthcare providers more quickly detect patients experiencing sepsis.

A study published in the journal Critical Care Medicine shows the algorithm led to faster administration of antibiotics among septic patients and ultimately shortened the number of days those patients were hospitalized.

The product at the center of the study is Epic Systems Corp.’s sepsis prediction model, which is built directly into patient health records. It uses insights from Epic’s data sets to monitor patient metrics and identify those patients who may be at risk of sepsis. Patients at high risk are flagged, giving human healthcare providers the nudge to investigate and, if warranted, intervene.

Principal investigator Yasir Tarabichi, MD, MSCR, the director of clinical informatics for research support at Cleveland-based MetroHealth, told Contagion that sepsis is a good candidate for this type of early warning system.

“The reason why it potentially lends itself to this solution is because it's complicated and hard to identify,” he said. “And on the therapeutic side, identifying it early could improve outcomes, primarily through the administration of antibiotics in a timely fashion.”

Still, Tarabichi said while the potential benefits were compelling, he and his colleagues wanted to see what benefits the system could produce in a real-world setting. To find out, they constructed a prospective randomized quality improvement study, in which patients who sought emergency care at a single healthcare facility were randomized into a standard care group or a group whose providers had access to Epic’s early warnings.

Of the 598 patients included in the final analysis, 285 patients were in the early warning system cohort. Those patients’ providers received alerts when the system predicted possible sepsis. Alerts for the 313 patients in the standard care group were silenced, though a timestamp of alerts was recorded. The trial ran from August to December of 2019.

The trial had two main endpoints: time to antibiotic administration, and days spent outside of the hospital over the following 28 days. In both categories, the sepsis early warning system outperformed standard care. Time to antibiotic treatment in the early warning group was a median of 2.3 hours (interquartile range, 1.4–4.7 hours) following emergency department arrival, while the standard group received antibiotics a median of 3.0 hours after arrival (interquartile range, 1.6–5.5 hours). The early warning group had a median of 24.1 days alive outside the hospital in the four weeks following presentation; the standard care group had a median of 22.5 days.

Tarabichi said while most patients in the study were getting antibiotics relatively quickly, data show that the sooner a patient with sepsis receives antibiotics, the greater the likelihood of survival.

“So the fact that you can increase [the speed of antibiotic administration] by a little under an hour, that's still important,” he said. “That’s moving the needle.”

Tarabichi said the group that most appreciated the early warning system was the health system’s pharmacists, who reported feeling more connected to the providers and the decision-making process when they had access to the early warning system.

In probing the sepsis model, Tarabichi and colleagues waded into an ongoing conversation about the benefits of artificial intelligence in general, and of the Epic sepsis model in particular. A study published earlier this year by the University of Michigan, which retrospectively examined sepsis cases at the university’s health system, found Epic’s system would have missed a significant number of patients while generating a high number of alerts and creating a risk of alert fatigue.

Tarabichi, however, said his study differs from the Michigan validation study because it does not rely on a retrospective analysis, but instead looked at how the system worked prospectively in the real world.

“The question that the validation study asks is, ‘How does the model do?,’ not, ‘How does a physician do with the model? Can a physician do better with the presence of a model?’,” he said.

Tarabichi said he hopes his study shows the importance of holding models like the sepsis product to high standards, which he said involves undertaking rigorous study to see how they perform in real-world settings.

“And you can't just do that with a validation study, you have to do that with a prospective study... and look at it in the real-world complex environment,” he said. “And ideally, if you can afford it, a randomized controlled study.”

MetroHealth continues to use the sepsis model in its emergency department, Tarabichi said.