AI tool speeds up search for COVID-19 treatments and vaccines

Northwestern University researchers are using artificial intelligence (AI) to speed up the search for COVID-19 treatments and vaccines. The AI-powered tool makes it possible to prioritize resources for the most promising studies—and ignore research that is unlikely to yield benefits.

In the midst of the pandemic, scientific research is being conducted at an unprecedented rate. The Food and Drug Administration and the U.S. Department of Health and Human Services announced plans to accelerate clinical trials, and hundreds of scientists are investigating possible treatments and vaccines.

But the question remains: Which research has the most potential to produce real, much-needed solutions?

The scientific community has been predicting the answer to such questions for decades using the Defense Advanced Research Projects Agency’s Systematizing Confidence in Open Research and Evidence (DARPA SCORE) program. The program relies on scientific experts to review and rate submitted research studies based on how likely they are to be replicable. On average, this process takes about 314 days—a long wait in the midst of a global pandemic.

The machine model is just as accurate as the human scoring system at making such predictions, researchers said, and it can scale up to review a larger number of papers in a fraction of the time—minutes instead of months.

“The standard process is too expensive, both financially and in terms of opportunity costs,” said Northwestern’s Brian Uzzi, who led the study. “First, it takes too long to move on to the second phase of testing and second, when experts are spending their time reviewing other people’s work, it means they are not in the lab conducting their own research.”

With their new AI tool, Uzzi and his team at the Kellogg School of Management bypass the human-scoring method, allowing the research community and policymakers to make faster decisions about how to prioritize time and funding toward the studies that are most likely to succeed.

Uzzi is the corresponding author on the paper, titled “Estimating the ‘Deep-Replicability’ of Scientific Findings Using Human and Machine Intelligence,” which will be published the week of May 4 in PNAS.

“In the midst of a public health crisis, it is essential that we focus our efforts on the most promising research,” said Uzzi, the Richard L. Thomas Professor of Leadership at Kellogg and co-director of the Northwestern Institute on Complex Systems. “This is important not only to save lives, but also to quickly tamp down the misinformation that results from poorly conducted research.”

How it works

The team of Northwestern researchers developed an algorithm to predict which studies’ results are most likely to be replicable. Replication, which means that the results of the study can be produced a second time with a new test population, is a key signal that study conclusions are valid.

The machine model’s prediction of the likelihood of replicability may actually be more accurate than the traditional human-scoring prediction, researchers said, because it considers more of the narrative of the study, while expert reviewers tend to focus on the strength of the relational statistics in a paper.

“There is a lot of valuable information in how study authors explain their results,” Uzzi said. “The words they use reveal their own confidence in their findings, but it is hard for the average human to detect that.”

Because the algorithm examines the words of thousands of papers, it recognizes word-choice patterns that might be hidden from human consciousness. It has a much bigger schema to draw upon for its predictions, which makes it an extraordinary partner for human reviewers, Uzzi said.
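To make the idea concrete, the sketch below shows one common way to learn word-choice patterns from paper text: turn each paper's narrative into word-frequency features and fit a classifier on replication outcomes. This is only an illustrative assumption, not the Northwestern team's actual model, and the example papers and labels are hypothetical.

```python
# Minimal sketch of a text-based replicability predictor (illustrative only;
# not the model described in the PNAS paper). It learns word-choice patterns
# from paper narratives labeled by whether their findings later replicated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: paper narratives and replication labels
# (1 = replicated, 0 = did not replicate). Real training would use thousands of papers.
papers = [
    "We observe a robust, consistent effect across all three preregistered samples.",
    "Results suggest a possible trend, although the effect may depend on context.",
    "The intervention produced large, stable improvements in every replication cohort.",
    "Findings were mixed and might reflect sampling noise rather than a true effect.",
]
replicated = [1, 0, 1, 0]

# TF-IDF turns each narrative into word and phrase features; logistic regression
# then scores how strongly those word-choice patterns predict replication.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(papers, replicated)

# Score a new (hypothetical) study narrative: estimated probability of replication.
new_paper = ["The treatment effect appears promising but may not generalize."]
print(model.predict_proba(new_paper)[0, 1])
```

In practice such a model would be trained on a large labeled corpus and validated against held-out replication outcomes; the toy data above exists only to make the pipeline runnable.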

The researchers’ model can be used immediately to analyze COVID-related research papers and quickly determine which show the most promise.

“This tool is particularly useful in this crisis situation where we can’t act fast enough,” Uzzi said. “It can give us an accurate estimate of what’s going to work and not work very quickly. We’re behind the ball, and this can help us catch up.”

Used on its own, the model has comparable accuracy to the DARPA SCORE method. Paired together, the combination human-machine approach predicts which findings will be replicable with even greater accuracy than either method on its own, the researchers found.
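One simple way to picture the paired approach is to blend the machine's probability with a normalized expert rating. The paper reports that the combination outperforms either method alone; the specific blending rule below is an assumption for illustration, not the authors' published procedure.

```python
# Illustrative human-machine combination (the weighting scheme is an assumption).
def combined_replication_score(machine_prob: float, expert_score: float,
                               machine_weight: float = 0.5) -> float:
    """Blend a model probability and a normalized expert rating, both in [0, 1]."""
    return machine_weight * machine_prob + (1 - machine_weight) * expert_score

# Example: model estimates 0.72, expert panel rates 0.60 -> blended estimate 0.66.
print(combined_replication_score(0.72, 0.60))
```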

“This tool will help us conduct the business of science with greater accuracy and efficiency,” Uzzi said. “Now more than ever, it’s essential for the research community to operate lean, focusing only on those studies which hold real promise.”

Northwestern University