Clinical Guidelines in the Face of Caring for Patients with Emerging Diseases


In the age of COVID-19, providers caring for patients with emerging diseases do not rely entirely on clinical guidelines, but also consult online resources that are updated more frequently. This necessary integration helps providers bring the best-available evidence into bedside care when guidance is lacking.


There is an old saying in medicine that you should be neither the first nor the last doctor to prescribe a medication. Waiting to follow the official guidelines produced by professional societies will ensure you are never the first, but adhering closely to those guidelines might make you the last.

The COVID-19 pandemic has revealed the inadequacy of a wait-and-see approach in the setting of rapid change under urgent circumstances. The recommendations of the US Food and Drug Administration, the Centers for Disease Control and Prevention, and the National Institutes of Health were sometimes too quick (think hydroxychloroquine, convalescent plasma), sometimes too slow (think rolling out rapid and freely available testing), and sometimes just about right (think vaccine authorization and advocacy). The fact that these agencies saw the need for rapid action before the acquisition of incontrovertible evidence points the way toward reform of the mechanism for developing guidelines—and interpreting appropriateness of care—in general. As we progress toward a learning healthcare system with rapid generation of knowledge, we simultaneously need to move toward thinking about clinical guidelines as living, dynamic documents and pivot toward an open source approach.

The Existing Guideline Paradigm

Traditionally, committees of experts are recruited to issue guidelines that follow the criteria of evidence-based medicine: they use methods for systematic review of the literature and judge levels of evidence, which the experts then turn into strong or conditional recommendations. The GRADE system and the PICO framework are often used to provide an objective veneer to what is, in the end, an exercise suffused with subjectivity, because many clinically relevant questions have not been addressed—and may never be addressed—scientifically. The intrusion of subjectivity, however, is preferable to the alternative: avoiding any question for which there is no definitive answer. The most well-reasoned guidelines do not confuse “absence of evidence” with “evidence of absence,” and only strongly recommend against a reasonably safe intervention if there is evidence against its effectiveness. Sometimes, a question with no answer supported by high-level evidence is critical enough that dismissing it with “there is no evidence to suggest” may be dangerous.

Among interventions for controlling COVID-19, the only one that would make the GRADE as resting on high-level evidence of high effectiveness is vaccination to prevent infection, especially severe infection. Other interventions could only be conditionally recommended, for various reasons, even some that receive near-universal approval, such as improving ventilation to reduce spread (supported only by experimental simulations, modeling, and observational studies) and dexamethasone for severe disease (only one large randomized trial). Different committees could legitimately include or exclude any of the following: hand sanitizer, remdesivir, antibody therapy, baricitinib, tocilizumab, colchicine, and fluvoxamine.

Guideline committees often make conditional recommendations on the basis of no evidence, especially if the intervention is viewed as relatively benign and the disease severe, and sometimes will rank interventions that have equivocal published evidence for no reason other than experience or bias. Proof of this assertion lies in the stark differences in the recommendations made by different groups of experts for the same disease.

Authoritative guidelines usually lag well behind the publication of evidence and, more recently, behind the dissemination of evidence, the pace of which has increased with the rise of preprint servers and social media. Assembling a committee of unpaid and unbiased experts to review hundreds or thousands of articles and summarize the findings takes time. Organizing the experts’ recommendations through several rounds of review and comment takes even more time. For common conditions with a strong and stable evidence base, this delay may be acceptable, but a lot can happen in 5 to 10 years, especially in emerging diseases and in rare diseases for which pivotal trials are infrequent.

For example, official recommendations from the European Alliance of Associations for Rheumatology (EULAR) for the treatment of Wegener’s granulomatosis were first issued in 2009. Pivotal clinical trials that changed the standard of care were published in 2010, with FDA approval within a year. Revised EULAR recommendations were published in 2015. Parenthetically, even the name of the disease was changed in the interim, to granulomatosis with polyangiitis. This is not a criticism of EULAR in particular: the American College of Rheumatology did not publish its first guidelines for treatment of this disease until 2021.*

In part because of these delays, physicians caring for patients with rare or emerging diseases do not rely entirely on guidelines of professional societies, but instead consult a handful of online resources that are updated more frequently, or review articles, or individual research studies—or ask a colleague who keeps closer track. All these approaches introduce personal bias that the process of producing official guidelines seeks to avoid, but they attempt to integrate the best-available evidence into bedside care when other guidance is lacking.

Issuing authoritative guidelines on the timeline of a national census (roughly once a decade) could also introduce problems that did not exist previously. Guidelines may be used by insurers to avoid paying for new but expensive therapies. They may be used to evaluate the quality of physicians’ care—with associated financial implications—by standards that are out of date or have a very limited scientific basis. And their existence could delay efforts to translate new, cutting-edge findings that truly revolutionize care, if those who specialize in implementation science wait until an intervention is included in the guidelines before pursuing strategies to increase its uptake into clinical care. Whether and how much the process of waiting for guideline updates contributes to the well-known 17-year lag between evidence generation and adoption into clinical care is a question worth pursuing, with major implications for how we think about the process of active implementation.

If attention is not paid to areas of uncertainty and disagreement, a document endorsed by a monolithic professional society or other guideline-issuing body runs the risk of being dogmatic and inflexible, and of suggesting an issue is settled when the evidence base is still evolving. Imagine someone designing an autonomous vehicle that is incapable of running a red light under any circumstance, even to clear the way for an ambulance or to avoid a collision. The fact that it is quite easy to imagine someone doing that is troubling and underscores the point, and a consensus committee might not correct the error.

Possible Solutions

To begin with, an acknowledgment of areas of uncertainty is key. Rather than the standard GRADE levels of evidence alone, guidelines could include a vote tally, so that any reader can see where there was broad agreement about an intervention but also see areas where the experts did not truly reach consensus—areas of legitimate scientific debate and genuine uncertainty about the best course of action. With a “vote count” of sorts, readers could know whether something falls in the realm of “absolutely should do” or “some would suggest doing”—in other words, it would add shades of grey to a process that is currently presented as black and white.
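To make the idea concrete, here is a minimal sketch in Python of how a panel’s raw tally might be recorded and translated into graded language. The class name, the agreement thresholds, and the example numbers are all hypothetical illustrations, not features of any existing guideline system.

```python
from dataclasses import dataclass

@dataclass
class VoteTally:
    """A panel's raw vote on a single intervention (all fields hypothetical)."""
    intervention: str
    votes_for: int
    votes_against: int
    abstentions: int = 0

    @property
    def total(self) -> int:
        return self.votes_for + self.votes_against + self.abstentions

    def agreement(self) -> float:
        """Fraction of the panel voting in favor."""
        return self.votes_for / self.total if self.total else 0.0

    def shade(self) -> str:
        """Translate the tally into a shade-of-grey phrase (thresholds are arbitrary)."""
        a = self.agreement()
        if a >= 0.9:
            return "absolutely should do"
        if a >= 0.6:
            return "some would suggest doing"
        if a >= 0.4:
            return "genuine uncertainty; legitimate debate"
        return "most would suggest not doing"

# Example: 7 of 12 panelists favor the intervention.
tally = VoteTally("dexamethasone for severe disease",
                  votes_for=7, votes_against=4, abstentions=1)
print(f"{tally.intervention}: {tally.votes_for}/{tally.total} in favor -> {tally.shade()}")
```

The point is not the particular thresholds but that the raw tally travels with the recommendation, so readers can judge the degree of consensus for themselves.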

Although generally applied to research data sources, an Open Source Science Framework could be applied to the concept of guideline creation and dissemination. A systematic review, once completed, could serve as the bedrock upon which all new evidence is added, quickly, just as agencies attempted to do in the case of COVID-19. Literature searches and notes from the panel discussions underpinning the guidelines could be made publicly available, with non-members allowed to make comments, evaluate the evidence, and suggest changes.
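Again purely as a sketch, and assuming nothing about any real platform, a living recommendation could be modeled the way an open-source repository models a file: an append-only revision history plus a public comment thread, so the evidence trail behind every change stays visible.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Revision:
    when: date
    strength: str      # e.g. "strong", "conditional", "against"
    rationale: str     # note or link citing the evidence behind the change

@dataclass
class LivingRecommendation:
    question: str                                       # the clinical question addressed
    history: list = field(default_factory=list)         # append-only revision trail
    public_comments: list = field(default_factory=list)

    def current(self) -> Revision:
        """The recommendation as it stands today (latest revision)."""
        return self.history[-1]

    def update(self, strength: str, rationale: str) -> None:
        """Append a new revision rather than overwriting; the trail stays public."""
        self.history.append(Revision(date.today(), strength, rationale))

# Hypothetical usage: record a conditional recommendation and a public comment.
rec = LivingRecommendation("Dexamethasone for severe COVID-19?")
rec.update("conditional", "One large randomized trial")
rec.public_comments.append("Suggest revisiting once replication data are published.")
print(rec.current().strength, "|", rec.current().rationale)
```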

Changes or updates could be made when a critical mass of evidence suggests change is needed, either in the strength or direction of a recommendation, and also at scheduled intervals. As long as guideline updates remain slow and intermittent, there is a risk that they will stay anchored to older, lower-quality data and that the translation of science into real-world policy and bedside clinical care will be delayed.

Guidelines from authoritative sources, if they are to remain relevant, should do so in a 21st-century construct: living documents and reviews of scientific evidence that are updated in close to real time when new data support a change. Official recommendations should include more information about the degree of uncertainty and disagreement surrounding specific recommendations, along with a degree of humility about the possibility that recommendations may change, perhaps even quickly. Generation and dissemination of evidence continue to accelerate, while the basic construct of clinical guidelines remains static in a dynamic system. That is why clinical guidelines should always be “pending an update.”

*One of us was a reviewer of the paper that provided these guidelines, chosen because he was one of a small number of experts in the US who had declined the invitation to be involved in creating them. Ironically, had he used his position as a peer reviewer in a nefarious fashion without intervention by an editor, he could have had more influence on the guidelines than any of the committee members who actually participated. As it turned out, the guidelines were not changed on the basis of his very sound criticisms, probably because they had been written by a committee following an algorithm, which raises the question of whether such documents should be regarded as peer-reviewed, and the extent to which peer review improves the reporting of research studies rather than merely pushing them toward the biases of 2 or 3 anonymous reviewers. A subject for another time.
