Steven Woloshin, MD, MS: Challenges of False Negative COVID-19 Tests


Segment Description: Steven Woloshin, MD, MS, co-director of the Center for Medicine and Media at The Dartmouth Institute, discusses a recently authored article in the New England Journal of Medicine describing the challenges and implications of false negative tests for SARS-CoV-2 infection.

Interview transcript (modified slightly for readability):

Contagion®: Hi, I’m Allie Ward, editorial director of Contagion® and joining me today is Dr. Steven Woloshin, co-director of the Center for Medicine and Media at The Dartmouth Institute and founder of the Lisa Schwartz Program for Truth in Medicine.

Dr. Woloshin and colleagues recently authored a Perspective article in the New England Journal of Medicine describing the challenges and implications of false negative tests for SARS-CoV-2 infection.

Let’s dive right in. What prompted the NEJM Perspective article on the accuracy of SARS-CoV-2 diagnostic tests?

Steven Woloshin, MD, MS: Sure. I missed my mother and wanted to visit her. She would be considered at higher risk because of her age, and she lives in New Jersey, and I just wondered whether I should get tested or not. In my research, I’m often looking at US Food and Drug Administration (FDA) documents, mostly about drug approvals, and so naturally I went to read about the tests that have been authorized under Emergency Use Authorization (EUA). I started looking at them, and the more that I read, the less convinced I was [and] the more concerned I was about the quality of the validation studies. I figured it would be good to communicate what I learned to my colleagues and others, and [in] talking to people there was also some confusion about how to interpret test results in general. So the article also included a bit on how to calculate probabilities using the sensitivity and specificity of the test.
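For readers who want the mechanics of that calculation, here is a minimal sketch in Python (not code from the article; the function name and example numbers are illustrative) of how a test's sensitivity and specificity update a pretest probability of infection via Bayes' rule:

```python
def post_test_probability(pretest: float, sensitivity: float,
                          specificity: float, test_positive: bool) -> float:
    """Update a pretest probability of infection given a test result,
    using Bayes' rule with the test's sensitivity and specificity."""
    if test_positive:
        true_pos = pretest * sensitivity                # infected and detected
        false_pos = (1 - pretest) * (1 - specificity)   # uninfected, test positive
        return true_pos / (true_pos + false_pos)
    false_neg = pretest * (1 - sensitivity)             # infected but missed
    true_neg = (1 - pretest) * specificity              # uninfected, test negative
    return false_neg / (false_neg + true_neg)

# Illustrative numbers only: 70% sensitivity (a figure quoted later in this
# interview), an assumed 95% specificity, and a 30% pretest probability.
print(post_test_probability(0.30, 0.70, 0.95, test_positive=False))  # ~0.12
```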

Contagion®: Do we have any estimates about just how common false negatives are for COVID diagnostic tests?

Woloshin: We do, but I think there's a lot that we don't know, and the evidence is kind of spotty. What we know is, if you look at the EUAs, the companies are supposed to report sensitivity and specificity; it's part of the clinical performance of the test. The problem is, often what they're presenting is the percent agreement between a new test and some previously authorized test. And those numbers usually come out very high: positive agreement in the 95% range, and I think the reported sensitivities are often at the 100% level. The problem is that that's not what clinical sensitivity and clinical specificity mean.

Because when I want to know how well a test works, I certainly want to know that if I have a specimen with the virus in it, the test will identify it, that it will be positive. And I want to know that if I have a specimen that doesn't have the virus in it, or virus particles, whatever it is you're looking for, and I test it, it will be negative. That's really critically important.

But as a clinician, I want to know if I have a patient who has COVID, and I test them, will the test be positive? And that’s really different. And the reason it's different is because, in the first case, with known positive and known negative specimens, I've taken out a lot of the uncertainty that exists in clinical practice. Because when I see a patient, I have to take the sample, so it might be done wrong, it might be an inadequate sample. It has to be processed. It has to be transported to the lab. All that sort of stuff might cause problems. Once it gets to the lab, the test itself might not function. And so with all these things, there are always opportunities for error.

The way the EUA percent agreement approach works, you miss out on all of that pre-analytic phase, all the stuff that happens before the actual test is done. And that means that the sensitivity of the test is overestimated. Because swabs might not be done right, they might not pick up the virus, or they might be done at the wrong time, early on in a latent infection when the person isn’t producing a lot of virus particles, or the sample may be processed incorrectly, handled incorrectly, whatever. You need to account for all of that.
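To see why skipping the pre-analytic phase inflates sensitivity, here is a back-of-the-envelope sketch (the probabilities are assumptions invented for illustration, not values from any EUA): clinical sensitivity is the product of every step that has to go right, so it can sit well below the analytic sensitivity measured on contrived specimens.

```python
# All values are illustrative assumptions, not measured figures.
analytic_sensitivity = 0.98  # test detects virus when the specimen contains it
p_adequate_swab = 0.85       # swab actually captures virus from an infected patient
p_intact_specimen = 0.97     # specimen survives transport and processing

# Clinical sensitivity compounds every step that must succeed.
clinical_sensitivity = p_adequate_swab * p_intact_specimen * analytic_sensitivity
print(f"clinical sensitivity ~ {clinical_sensitivity:.2f}")  # ~0.81, not 0.98
```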

None of the EUAs that we looked at provide that kind of information. [In] published studies, some preprints and some actually published, there’s been an attempt to get at this clinical sensitivity. In our article, we cite a meta-analysis of 5 studies, and they gave a range of false negative rates from 2% to about 30%. And most recently, just 2 weeks ago, there was an article in the Annals of Internal Medicine, which looked at false negative rates by time from exposure, over the whole time course of the infection. They looked at 7 studies and did some modeling. Sensitivity varies according to time: at the time that you’re initially exposed, there is very little virus in your body, so the false negative rate is really high. Over time, over maybe 5 days or so until symptoms start to appear, the false negative rate drops. And then by about 3 to 4 days after symptoms appear, the false negative rate gets to its lowest point, but it's still about 20%. It matters when the sample is drawn.
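As a rough sketch of that time dependence (the day-by-day values below are invented for illustration, shaped only to match the pattern described above, with symptoms assumed to appear around day 5; they are not the estimates from the Annals paper):

```python
# Invented, illustrative false negative rates by day since exposure:
# very high at exposure, falling as symptom onset approaches (~day 5),
# reaching a minimum of roughly 20% a few days after symptoms, then rising.
fnr_by_day = {0: 1.00, 2: 0.95, 4: 0.70, 5: 0.40, 8: 0.20, 12: 0.30, 16: 0.50}

for day, fnr in sorted(fnr_by_day.items()):
    print(f"day {day:2d} after exposure: ~{fnr:.0%} of infected people test negative")
```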

The bottom line is that the evidence is mixed, and it's maybe not the best evidence. The best guess is that, on average, the sensitivity of the PCR test is probably in the 70% range. That's the figure that you often see, but this is one of the things where it would really be great if we had better evidence, if the tests were subjected to better validation studies that recorded the real clinical sensitivity and specificity in the EUA documents.

Contagion®: In what ways can SARS-CoV-2 diagnostic tests produce inaccurate results?

Woloshin: The problem is, if the tests were perfect, then if someone tested positive you would know the person is infected, and if they tested negative, you would know they’re not infected. That would be great. But no test is perfect, and you have to account for that in how you interpret the results. Because of the false negative rate, when you have a negative test, you may not have ruled out infection. And that matters, especially if the test has a really low sensitivity, because if you tell someone they’re negative, they may think that they're in the clear, that they can go out and do what they want and they're not infecting other people, and that's not the case. Because if there are people who have false negative results, and they go out and don't practice social distancing or wear a mask, or they go into high-risk environments like a nursing home, they may be able to transmit the disease even if they feel well. It's a really important issue. That's why it's so important to get a handle on what the false negative rate really is in order to intelligently interpret negative results.
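To put rough numbers on that point, here is a short illustrative sketch (assuming the roughly 70% sensitivity quoted earlier and an invented 95% specificity): the probability that someone is still infected after a negative test rises steeply with how likely infection was before testing.

```python
# Illustrative assumptions: ~70% sensitivity (quoted above), 95% specificity.
SENS, SPEC = 0.70, 0.95

for pretest in (0.05, 0.20, 0.50, 0.80):
    missed = pretest * (1 - SENS)        # infected, but the test missed it
    cleared = (1 - pretest) * SPEC       # uninfected and correctly negative
    p_infected = missed / (missed + cleared)
    print(f"pretest {pretest:.0%} -> {p_infected:.0%} chance of infection "
          f"despite a negative test")
```

With these assumed numbers, a negative test leaves about a 2% chance of infection when the pretest probability is 5%, but still about a 24% chance when the pretest probability is 50%, which is why a negative result alone cannot clear someone in a high-risk situation.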

Contagion®: Complicating matters is the speed with which manufacturers are creating these tests and the rate of the FDA’s Emergency Use Authorizations. How do we balance the need to quickly get these diagnostic products to market in a pandemic with the quality control aspect of making sure the tests are producing accurate results? I imagine there must be some level of sacrifice on quality control for the sake of speed.

Woloshin: That's a great question. Everyone always criticizes the FDA for being both too fast and too slow, and I have a lot of sympathy for them because I think they're trying to do the best they can. In the beginning of the pandemic, there was a desperate need to get testing out there, so it makes sense that the EUA process allowed the companies to expedite things. Now, when you have a whole bunch of tests out there, I think the last time I looked there were maybe 100 tests, not all PCR tests but quite a few…Now, I think maybe it makes sense to start raising the bar, and even to go back to the already authorized tests and ask for more clinically relevant evidence.

Contagion®: You also point out that we need a reference standard for measuring the sensitivity of SARS-CoV-2 tests in asymptomatic patients. Why is that an urgent issue?

Woloshin: The first part of the question is easy; the second part is hard. It matters because there’s this idea that testing is a ticket out of this situation: by testing, we're going to identify people who are infected, quarantine them, and keep them from spreading the virus, particularly to people who are at high risk. But because we know that there is transmission from people who have no symptoms, either because they're asymptomatic (they never develop symptoms through the course of their infection) or because they’re pre-symptomatic (they're tested before they have symptoms but they’re infectious), it’s crucial to know how well the tests work in people without symptoms. And none of the EUAs we looked at evaluated tests in that population, so we just don't know. People without symptoms may be different from people with symptoms: the viral loads may be different, the transmission may be different, the infectivity may be different. We don’t know, and that's crucial information. There’s a lot to learn about this virus, and this is one thing we need to get a handle on in order to make intelligent decisions about how to approach testing.
