A new study argues artificial intelligence will be a key tool in helping to evaluate the barrage of new scientific research.
As investigators around the world race to develop therapies and vaccines in response to the coronavirus disease 2019 (COVID-19) epidemic, scientists at Northwestern University said artificial intelligence can play a key role in helping to sort through the thousands of studies being reported.
In a new paper published in the Proceedings of the National Academy of Sciences, Brian Uzzi, PhD, and colleagues said existing methods of locating studies that are most likely to be replicable will not work in an emerging pandemic.
They noted the current system for evaluating studies, the Defense Advanced Research Projects Agency’s Systematizing Confidence in Open Research and Evidence (DARPA SCORE) program, relies on human expertise and tends to take nearly a year.
“The standard process is too expensive, both financially and in terms of opportunity costs,” Uzzi said in a statement.
In addition to slowing down the research being evaluated, the process also takes up the precious time of the scientists reviewing the studies, who could otherwise be conducting their own research.
Instead, Uzzi and colleagues proposed a new AI tool, which uses an algorithm to determine which studies are most likely to be replicable and thus worthy of further study. They said the system performs better than the base rate of human reviewers, and comparably to prediction markets, which they say are the current best-available method for predicting replicability.
The model had accuracy levels of 0.65 to 0.78 in out-of-sample tests on manually replicated papers, investigators reported.
Notably, the study goes beyond examining the data to also consider the narratives of the studies it evaluates. The investigators found that evaluating the narrative gave the model a higher accuracy rate than looking at statistics alone.
Uzzi said authors’ explanations of their research include important signals as to the quality of the findings.
“The words they use reveal their own confidence in their findings, but it is hard for the average human to detect that,” he said.
The authors added that because their algorithm can process vast amounts of data quickly, it can recognize word-choice patterns that have meaningful correlations with replicability. However, the investigators said their system does not need to fully replace human input. Rather, it can be a supplemental tool.
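The article does not describe how the model actually works. Purely as a loose illustration of the idea that word choice can carry a signal about an author's confidence, here is a toy Python sketch; the word lists and scoring rule are invented for illustration and are not taken from the study, which learns its patterns from data at scale:

```python
import re
from collections import Counter

# Hypothetical word lists -- illustrative only, not the study's actual features.
HEDGE_WORDS = {"may", "might", "could", "possibly", "appears", "suggests"}
CONFIDENT_WORDS = {"demonstrates", "shows", "confirms", "establishes", "proves"}

def narrative_signal(text: str) -> float:
    """Toy score in [0, 1]: the fraction of signal words expressing confidence.

    A real system would learn weights for many word-choice patterns from a
    corpus of papers with known replication outcomes, rather than using
    hand-picked word lists.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    hedges = sum(counts[w] for w in HEDGE_WORDS)
    confident = sum(counts[w] for w in CONFIDENT_WORDS)
    total = hedges + confident
    return confident / total if total else 0.5  # neutral when no signal words

print(narrative_signal("Our analysis demonstrates and confirms a strong effect."))
print(narrative_signal("The effect may possibly hold; the data suggests caution."))
```

The contrast between the two example sentences is the point: confident phrasing scores high, hedged phrasing scores low, and a learned model could exploit many such patterns that a human reviewer would not consciously notice.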
For instance, it can help prioritize which studies are worth the time of human reviewers. It can also be used to provide an evaluation to investigators before they submit their work to journals.
“This tool is particularly useful in this crisis situation where we can’t act fast enough,” Uzzi said. “It can give us an accurate estimate of what’s going to work and not work very quickly. We’re behind the ball, and this can help us catch up.”
Uzzi hopes his team’s AI tool can shorten the time it takes for promising COVID-19 therapies and vaccines to reach the market. He added that the AI system can also be a powerful way to identify research that is invalid and might lead to harmful misconceptions.
“This is important not only to save lives, but also to quickly tamp down the misinformation that results from poorly conducted research,” he said.