New Technologies for Detecting Antimicrobial Susceptibility and Resistance

Contagion, February 2017, Volume 2, Issue 1

During the American Society for Microbiology Microbe 2016 conference held in Boston, Massachusetts, from June 16, 2016 to June 20, 2016, Contagion™ interviewed Romney Humphries, PhD, D(ABMM), section chief of Clinical Microbiology at the University of California, Los Angeles, regarding antimicrobial susceptibility and resistance and new technologies in the field.

Can You Tell Us a Little About the New Technologies That We Should Expect to See on the Market Soon?

“Today, we have a lot of new technologies in development to detect antimicrobial resistance. A lot of focus has been placed on detecting resistance, but, really, the ability to detect susceptibility is equally important. The traditional methods that we use today were developed in the 1970s and 1980s, and so it’s well past time for newer technology.

Ideally, a technology would be rapid so that we can detect resistance and susceptibility in our patients as soon as possible, preferably at the bedside. In this way, you could tailor therapy to the patient’s specific infection, right off the bat, rather than starting on very broad-spectrum therapy and having to de-escalate or escalate as susceptibility or resistance is detected.

This being said, using a phenotypic method is really important because it can accurately detect susceptibility. In this way, you observe how the bacteria behave in the presence of the antimicrobial.

There are several new technologies in development, which is very exciting, that do this much more rapidly than the traditional methods we use today, which can take up to 5 days from collection of a patient’s specimen to results.

The most recent [technology] is [developed by] Accelerate Diagnostics. They have a system called the Pheno system, and it does a susceptibility test directly from a positive blood culture. This method monitors the performance of the antimicrobial against the organism in real time through microscopy. You’re actually imaging the cells and watching how they behave. They can do a susceptibility test within 6.5 to 7 hours.

Another method that is further along in development is through a company called LifeScale. They actually weigh the bacteria through the use of a microfluidic channel with a cantilever, and as the organisms enter this channel one by one, they cause a change in the cantilever’s vibrations, so [lab technicians] can count the number of organisms. Of course, if the antimicrobial is killing the bacteria, there will be fewer organisms, and if it’s not, there will be lots of organisms as they continue to grow.

We expect to see both of those technologies coming to clinical labs in the next year or two.

There are several other technologies that have been developed more to identify bacteria rapidly, but could be adapted to do a susceptibility test. Two [of these] include the BacterioScan method, which is an optical density measurement that counts bacteria. Again, you could count [the bacteria] in the presence of an antimicrobial, after a given time, to see if there is a reduction or an increase in the number of organisms, meaning they are either susceptible or resistant to the antimicrobial.

Another technology has recently been acquired by Roche that evaluates the viability of an organism, or whether it’s alive or dead, based on its ability to take up a gene delivered through a bioparticle that they engineer. When [the bacteria] acquire this gene, it gives them the ability to produce light. You look for bacteria being alive [by their] ability to generate light, and as they die, they are no longer able to generate the light, and so the lights go off as the organism is killed by the antimicrobial. These bioparticles can be very specific to a given species of bacteria, or they can be much more broad and look at a whole family of bacteria, like the Enterobacteriaceae, for example.”

What Are Some Challenges for Gaining FDA Approval for Antimicrobial Susceptibility Tests?

“One of the challenges as we look at these new technologies is the fact that in the United States, the gold standard for susceptibility testing is still an old method called broth microdilution, which was developed in the 1970s. You can imagine that as technology evolves, we can become much more accurate at detecting resistance in a phenotypic way, but you need to compare yourself to a much older and perhaps less well-performing system to get FDA approval to market these tests in the United States; this is a big concern.

Another concern is the fact that the FDA has become much more stringent on which organisms a lab or a diagnostic manufacturer can test against certain antimicrobials. So, unless a specific bug-drug combination is listed in the clinical indications of a drug label, the FDA does not grant approval for a test for that combination.

We know that people use antimicrobials off-label all the time and in some cases, very frequently; a good example is the use of meropenem to treat Acinetobacter baumannii infections. That’s not actually a clinical indication in the meropenem drug label and so if you’re a new company trying to develop a new method, you would not be able to test that combination that’s being used every day.”

Can You Discuss the Different Genotypic and Phenotypic Approaches to Detect Antimicrobial Susceptibility?

“To date, new technology to detect antimicrobial susceptibility has taken one of two approaches: we can look for a gene in an organism that’s thought to predict its resistance to a given antimicrobial, so that’s the genotypic approach, or we can do a phenotypic method which answers the question, 'If I incubate [these] bacteria with this drug, does it prevent it from growing or is the organism resistant and can grow even in the presence of the drug?'

Both have advantages and disadvantages. To date, most technologies have focused on genotypic approaches, and so there are some challenges with that, particular[ly] for some organisms that we have real problems with today, like the gram-negative bacteria. By looking for just a single gene, you don’t get the entire story of what’s happening in that bacterium. We know in gram-negative infections resistance is usually multi-factorial [with] a combination of [the] presence of a gene, its expression, as well as other factors, like porins or efflux; and so, by asking 'is that gene there?' you don’t know what else is going on. When you detect a gene, you must assume resistance, but we know that often the gene will be present and there will be no resistance; and so, this is one big challenge. You’re not able to detect susceptibility using a genetic approach, you’re only able to assume resistance.

The other challenge with it is there’s a disconnect between what we use, called a clinical breakpoint, versus a genotypic method. When you’re [using] a genotypic method, you’re looking to see if the organism is wild-type, natural state, or if it's acquired a resistance mechanism. [However], that doesn’t necessarily mean that it will have an MIC that’s above the clinical breakpoint and so you still may be able to treat some of these infections even though they have a resistance gene. We’ve seen that numerous times in our lab. When we started to do things like whole genome sequencing of bacteria, [we found] a lot of resistance genes but either they’re not being expressed or on their own do not contribute to a resistance phenotype.

For these reasons, I think that having a phenotypic approach is a lot more desirable. However, you need an awful lot of bacteria to [test using] a phenotypic approach. It also needs more time than we want to spend waiting for a result, because you need to give the organisms time to respond to the antimicrobial. So, those two things combined make it very difficult to make a very rapid, direct-from-patient specimen phenotypic test. Nonetheless, [with] a phenotypic test [it] does not matter which mechanism is causing the resistance phenotype; as we have new antimicrobials come to market, and we don’t know what those resistance mechanisms will be, a phenotypic approach should be able to detect resistance or susceptibility in those cases.”

How Do Clinical Trial Data Compare to Real-World Data?

“One of the challenges with all of these new technologies is they are very costly to implement, and so we’re talking about going from our traditional tests that labs perform today, which are in the order of $5 to $10 per test, to going to a test that costs in the order of $150 to $250 per organism. This is a big jump in price. And so, what we really need are good clinical outcome data to support the use of these new tests; however, to date, we’ve really been remiss and [have] not done these studies.

There’s only one randomized controlled trial done to date that looks at the impact of having a rapid susceptibility test. This was a study done at the Mayo Clinic by Ritu Banerjee and colleagues. In that study, they were able to show several benefits, including reduced time on antimicrobials, less use of broad-spectrum antimicrobials, as well as preventing the treatment of contaminants, because [there is] a much more rapid answer [to] 'is this a pathogen or is this a contaminant?'

Outside of that study, we really don’t have good outcome studies that show things like reduction in length of stay, reduction in ancillary testing while the patient’s in the hospital, [and] all these things that are really going to be important for labs to be able to justify the worth of bringing on these tests. Not to mention, we’ve yet to have a trial that’s powered enough to look at clinical outcome based on these results, so does it actually have an impact on mortality for these patients?”

How Can We Work to Minimize the Gap Between Trial Data and Real-World Outcomes?

“I think that as diagnostic manufacturers are developing these technologies, it’s important for them to keep in mind that they need to have a clinical outcome portion to their trials.

Currently, trial design for in vitro diagnostic devices in the United States predominantly compares method A to method B in the lab, but it rarely, if ever, steps outside the lab to look at what could potentially happen with the patient with this test. I think, through either clinical trials or follow-up studies, we need to have these real-world studies that randomize patients to standard of care versus rapid diagnostics and evaluate how that impacts the overall outcomes and cost associated with the care of these patients.”

How Can Stewardship Recommendations Drive Change in Patient Management?

“We’ve done some work at UCLA looking at novel technologies, but to be honest, a lot of it has been done in the research lab because the business case that needs to be put together to bring these into the clinical lab and justify the cost for testing is pretty big.

We’ve done some work looking at, ironically, genotypic methods for organisms that we don’t grow routinely, including Neisseria gonorrhoeae. Through this, what we’ve learned is [that] while reporting the results of a rapid test in the electronic medical record, which is what all labs do to report their results, is good, it doesn’t really make much impact on how the physician manages that patient because they already have an idea when they ordered the test [of] how they’re going to be managing that patient.

Follow-up with the physician to let them know of the results and perhaps [to] let them know of institutional or stewardship recommendations on how to respond to the results is really what’s going to drive change in the management of these patients.”