Citation screening using crowdsourcing and machine learning produced accurate results: Evaluation of Cochrane's modified Screen4Me service

A. Noel-Storr*, G. Dooley, L. Affengruber, G. Gartlehner

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Objectives: To assess the feasibility of a modified workflow that uses machine learning and crowdsourcing to identify studies for potential inclusion in a systematic review.

Study Design and Setting: This was a substudy of a larger randomized study; the main study assessed the performance of single screening of search results versus dual screening. This substudy assessed how well a modified version of Cochrane's Screen4Me workflow, which uses crowdsourcing and machine learning, identified relevant randomized controlled trials (RCTs) for a published Cochrane review. We included participants who had signed up for the main study but were not eligible for randomization to its two main arms. The records were put through the modified workflow, in which a machine learning classifier divided the data set into "Not RCTs" and "Possible RCTs." The records deemed "Possible RCTs" were then loaded into a task created on the Cochrane Crowd platform, and participants classified those records as either "Potentially relevant" or "Not relevant" to the review. Using a prespecified agreement algorithm, we calculated the crowd's performance in correctly identifying the studies that were included in the review (sensitivity) and in correctly rejecting those that were not included (specificity).

Results: The RCT machine learning classifier did not reject any of the included studies. In total, 112 participants were included in this substudy; of these, 81 completed the training module and went on to screen records in the live task. Applying the Cochrane Crowd agreement algorithm, the crowd achieved 100% sensitivity and 80.71% specificity.

Conclusions: Using a crowd to screen search results for systematic reviews can be an accurate method, provided the agreement algorithm in place is robust.

Trial registration: Open Science Framework: https://osf.io/3jyqt.

© 2020 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
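To make the screening logic concrete, the sketch below illustrates the second (crowd) stage and the sensitivity/specificity calculation described above. It is illustrative only: the `Record` structure, the consecutive-vote rule (`AGREEMENT_THRESHOLD`), and the convention that unresolved records are retained for an expert resolver are all assumptions for this example; the study's actual prespecified Cochrane Crowd agreement algorithm is not detailed in the abstract.

```python
from dataclasses import dataclass


@dataclass
class Record:
    """One citation the ML classifier has already labelled a 'Possible RCT'."""
    record_id: str
    crowd_votes: list[str]       # each vote: "potentially_relevant" or "not_relevant"
    included_in_review: bool     # gold standard: included in the Cochrane review?


# Hypothetical rule for illustration: a record is settled once this many
# consecutive identical crowd votes are seen; otherwise it goes to a resolver.
AGREEMENT_THRESHOLD = 3


def crowd_decision(votes: list[str]) -> str:
    """Return the crowd's label, or 'resolver' if the votes never agree."""
    streak, current = 0, None
    for vote in votes:
        if vote == current:
            streak += 1
        else:
            current, streak = vote, 1
        if streak >= AGREEMENT_THRESHOLD:
            return current
    return "resolver"


def sensitivity_specificity(records: list[Record]) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = fn = tn = fp = 0
    for r in records:
        # Assumption: records escalated to a resolver are retained, so only
        # an explicit "not_relevant" decision counts as a rejection.
        kept = crowd_decision(r.crowd_votes) != "not_relevant"
        if r.included_in_review:
            tp += kept
            fn += not kept
        else:
            fp += kept
            tn += not kept
    return tp / (tp + fn), tn / (tn + fp)


# Toy usage: two included and two excluded records.
records = [
    Record("A", ["potentially_relevant"] * 3, included_in_review=True),
    Record("B", ["not_relevant"] + ["potentially_relevant"] * 3,
           included_in_review=True),
    Record("C", ["not_relevant"] * 3, included_in_review=False),
    Record("D", ["not_relevant", "not_relevant", "potentially_relevant"],
           included_in_review=False),  # unresolved -> resolver -> kept
]
sens, spec = sensitivity_specificity(records)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

Under this toy rule, a record can only be rejected by unanimous consecutive "not relevant" votes, which is one way a robust agreement algorithm can trade some specificity (record D is kept) for very high sensitivity, mirroring the pattern reported in the results.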
Original language: English
Pages (from-to): 23-31
Number of pages: 9
Journal: Journal of Clinical Epidemiology
Volume: 130
Publication status: Published - 1 Feb 2021

Keywords

  • accuracy
  • agreement algorithm
  • crowdsourcing
  • human computation
  • literature screening
  • machine learning
  • systematic reviews
