Data Collection Gaps Are Undermining the Ebola Outbreak Response
Inconsistent data collection and siloed efforts are hurting the outbreak response.
Late last week, 2 Ebola virus disease (EVD) cases were confirmed in the South Kivu region of the Democratic Republic of the Congo (DRC), some 400 miles from where the outbreak began. The cases were reported in a woman, who had been vaccinated, and her child, both of whom had traveled from Beni. The government is currently working to vaccinate and monitor 120 contacts of these 2 individuals.
In the face of this expanding outbreak, which has surpassed 2700 confirmed cases, much attention has gone to the ongoing drug and vaccine trials. Unfortunately, amid the excitement surrounding the promise of treatment, few have paid attention to the quality of the data being made available. Pierre Rollin, MD, a veteran Ebola fighter, recently drew attention to some deeply concerning issues in the outbreak response in an article in The Lancet Infectious Diseases.
Rollin underscored that although there was initial confidence in the response to the outbreak, owing largely to available therapeutics and experienced personnel, leadership and coordination failures, a conflict zone, and community mistrust all helped the outbreak spiral. One component of Rollin’s review is deeply concerning—the “ineffectiveness of the collection, analysis, and diffusion of epidemiological data, the centerpiece of any response, is predictive of the situation worsening.” As many on the ground felt during the 2014-2016 West African Ebola outbreak, maintaining separate databases across agencies and groups made situational awareness and response that much more challenging.
What is so worrisome about these database variances and gaps is that case counts are inaccurate and, as a result, monitoring is compromised. Perhaps one of the most startling findings was that most probable cases are not being recorded. A lack of data on probable cases means not only that we lack an accurate understanding of the outbreak, but also that intervention for those cases is likely not happening. Moreover, Rollin notes that the research efforts (vaccine, treatment, etc.) have their own data collection practices, focused on scientific outcomes and publication agendas rather than public health. In addition to these critical data variances, “Laboratory diagnosis is done at multiple sites by the Institut National de Recherches Biologiques. This achievement is tempered by overconfidence in the capabilities of the laboratories, incomplete sharing of results with patient providers, low quality-control procedures, and unjustified fear of losing control.”
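To see how siloed databases distort case counts, consider a minimal sketch with entirely hypothetical records and field names (none of these schemas or identifiers come from the actual response): a surveillance team logs confirmed and probable cases, while a laboratory team only logs samples it has tested. Reconciling the two by lab results alone silently drops every probable case.

```python
# Hypothetical example: two teams record overlapping data under different schemas.
surveillance_db = [
    {"case_id": "CASE-001", "status": "confirmed"},
    {"case_id": "CASE-002", "status": "probable"},
    {"case_id": "CASE-003", "status": "probable"},
]

lab_db = [
    {"sample_id": "CASE-001", "pcr_positive": True},
]

# Naive reconciliation: count only cases backed by a laboratory record.
lab_ids = {record["sample_id"] for record in lab_db}
lab_backed_count = sum(1 for c in surveillance_db if c["case_id"] in lab_ids)

# Public-health reconciliation: keep probable cases visible even without
# a lab result, so monitoring and contact tracing can still reach them.
total_count = len(surveillance_db)
unrecorded_probables = [
    c for c in surveillance_db
    if c["status"] == "probable" and c["case_id"] not in lab_ids
]

print(lab_backed_count)          # 1 case visible to the lab-driven count
print(total_count)               # 3 cases the response actually needs to track
print(len(unrecorded_probables)) # 2 probable cases invisible to the merge
```

The gap between the first and second counts is exactly the kind of discrepancy Rollin describes: the missing probable cases never trigger follow-up.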
Now, consider that the data collected and generated by the vaccination and contact tracing teams isn’t being shared. The findings in Rollin’s research are deeply worrisome and shed light on the broken response efforts that are only fueling the outbreak. How can we effectively perform contact tracing without the relevant vaccination data? Or vice versa? How can we possibly understand the magnitude of the outbreak if most probable cases aren’t being reported? Although this problem isn’t entirely novel in outbreaks involving multiple agencies, organizations, and even companies, Rollin’s point should not be ignored in the face of perceived therapeutic breakthroughs. Response efforts must include not only resolving these data collection issues, but also working with local providers to re-establish community trust, reducing health care worker transmission, and ensuring efforts are cohesive and collaborative.