Crowdsourcing citation screening in a mixed-studies systematic review: a feasibility study

Citation:

Noel-Storr, A.H., Redmond, P., Lamé, G. et al. Crowdsourcing citation-screening in a mixed-studies systematic review: a feasibility study. BMC Med Res Methodol 21, 88 (2021). https://doi.org/10.1186/s12874-021-01271-4

High-quality systematic reviews are a powerful tool for locating and synthesising existing evidence. They are also immensely time-consuming: it can take months for a small research team to comb through literature search results and decide which publications to include in a review. One approach to speeding up this screening process is “crowdsourcing”: asking interested volunteers to examine titles and abstracts and flag publications that may be relevant. Cochrane, one of the best-known producers of systematic reviews of randomised controlled trials, has been a pioneer in this field: since its launch in May 2016, its Cochrane Crowd citizen science platform has enabled over 18,000 people from 158 countries to classify over 4.5 million records.

To date, most of these crowdsourcing efforts have focused on supporting traditional systematic reviews, which bring together evidence of effectiveness from trials. In healthcare improvement research, the reviews we want to conduct are often much more complex, addressing research questions that require us to find and synthesise a wide range of evidence. In this feasibility study, we investigated whether a crowd could help undertake citation screening for a complex systematic review that included multiple study designs.

In this review, 9,546 records of titles and abstracts needed to be screened. Whilst the review team screened these in their usual way, we also asked a crowd of non-specialists registered with the Cochrane Crowd platform to screen the same records and decide which ones were potentially relevant. The crowd correctly identified 84% of the studies the review team included in the final review, and correctly identified 99% of the excluded records. The crowd completed this screening in 33 hours, compared with the 410 hours it took the review team, although crowd contributors did, on average, take longer than a review team member to screen an individual record. We then made a few adjustments to the crowd screening process and repeated the experiment: this time, the crowd correctly identified 96% of the included studies.
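
For readers curious what those percentages represent, they are, in effect, the sensitivity and specificity of crowd screening measured against the review team’s final decisions. The sketch below is purely illustrative: the function and the record counts are hypothetical (chosen only so the percentages match those reported above), not figures taken from the study.

```python
# Illustrative sketch only: how crowd screening accuracy is derived by comparing
# crowd decisions with the review team's final include/exclude decisions.
# All counts below are hypothetical, not taken from the study.

def screening_accuracy(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) of crowd screening versus the review team."""
    sensitivity = tp / (tp + fn)  # share of truly included studies the crowd flagged
    specificity = tn / (tn + fp)  # share of truly excluded records the crowd rejected
    return sensitivity, specificity

# Hypothetical split of the 9,546 records: 50 included by the review team, 9,496 excluded.
sens, spec = screening_accuracy(tp=42, fn=8, tn=9_400, fp=96)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # sensitivity = 84%, specificity = 99%
```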

We were encouraged by the insights generated from this feasibility study, but of course many questions remain. Quicker screening raises the potential for time and cost savings, but does not account for the time taken to design, build and pilot the training and instructions for the review. We need to look in more detail at the trade-off between the speed of crowd screening and the resources required to enable it. We also need to question the traditional approach to screening for inclusion, which is something we are now working on.

