Crowdsourcing citation screening in a mixed-studies systematic review: a feasibility study

High-quality systematic reviews are a powerful tool for locating and synthesising existing evidence. They are also immensely time-consuming: it can take a small research team months to comb through literature search results and decide which publications to include in a review. One way to speed up this screening stage is "crowdsourcing": asking interested volunteers to examine titles and abstracts and flag publications that may be relevant. Cochrane, one of the best-known producers of systematic reviews of randomised controlled trials, has been a pioneer in this field: since its launch in May 2016, its Cochrane Crowd citizen science platform has enabled over 18,000 people from 158 countries to classify over 4.5 million records.

To date, much of this crowdsourcing work has focused on supporting traditional systematic reviews, which aim to bring together evidence of effectiveness from trials. In healthcare improvement studies, the reviews we want to conduct are often much more complex, addressing research questions that require us to find and synthesise a wide range of evidence. In this feasibility study, we investigated whether a crowd could help with citation screening for a complex systematic review that included multiple study designs.

In this review, 9,546 records of titles and abstracts needed to be screened. Whilst the review team screened these in their usual way, we also asked a crowd of non-specialists registered with the Cochrane Crowd platform to screen the same records and decide which ones were potentially relevant. The crowd correctly identified 84% of the studies the review team included in the final review, and correctly identified 99% of the studies the team excluded. It did so in 33 hours, compared with the 410 hours it took the review team, although crowd contributors did, on average, take longer than a review team member to screen an individual record. We then made a few adjustments to the crowd screening process and repeated the experiment: this time, the crowd's ability to identify studies for inclusion rose to 96%.
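To make the headline figures concrete: crowd screening performance of this kind is usually summarised as sensitivity (the proportion of truly included studies the crowd flagged as relevant) and specificity (the proportion of truly excluded studies the crowd rejected). The short Python sketch below illustrates that calculation against a review team's final decisions; the record IDs and counts are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: summarising crowd screening performance as
# sensitivity and specificity. All data below are hypothetical.

def screening_performance(crowd_decisions, reference_decisions):
    """Compare crowd include/exclude decisions against the review team's
    final decisions; return (sensitivity, specificity)."""
    tp = fn = tn = fp = 0
    for record_id, reference in reference_decisions.items():
        crowd = crowd_decisions[record_id]
        if reference == "include":
            if crowd == "include":
                tp += 1  # crowd correctly flagged an included study
            else:
                fn += 1  # crowd missed an included study
        else:
            if crowd == "exclude":
                tn += 1  # crowd correctly rejected an excluded study
            else:
                fp += 1  # crowd flagged a study the team excluded
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity


if __name__ == "__main__":
    # Hypothetical example: the crowd catches 3 of 4 included studies
    # and correctly rejects all 5 excluded ones.
    reference = {1: "include", 2: "include", 3: "include", 4: "include",
                 5: "exclude", 6: "exclude", 7: "exclude", 8: "exclude", 9: "exclude"}
    crowd = {1: "include", 2: "include", 3: "include", 4: "exclude",
             5: "exclude", 6: "exclude", 7: "exclude", 8: "exclude", 9: "exclude"}
    sens, spec = screening_performance(crowd, reference)
    print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```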

We were encouraged by the insights from this feasibility study, but many questions remain. Quicker screening raises the potential for time and cost savings, but those savings do not account for the time needed to design, build and pilot the training and instructions for the review. We need to look in more detail at the trade-off between the speed of crowd screening and the resources required to enable it. We also need to question the traditional approach to screening for inclusion, which is something we are now working on.

