ACRyLIQ: Leveraging DBpedia for Adaptive Crowdsourcing in Linked Data Quality Assessment

Umair Ul Hassan*, Edward Curry, Amrapali Zaveri, Edgard Marx, Jens Lehmann

*Corresponding author for this work

Research output: Contribution to conference › Paper › Academic

Abstract

Crowdsourcing has emerged as a powerful paradigm for the quality assessment and improvement of Linked Data. A major challenge of employing crowdsourcing for quality assessment in Linked Data is the cold-start problem: how can the reliability of crowd workers be estimated, and the most reliable workers assigned to tasks? We address this challenge by proposing a novel approach for generating test questions from DBpedia based on the topics associated with quality assessment tasks. These test questions are used to estimate the reliability of new workers. Subsequently, tasks are dynamically assigned to reliable workers to improve the accuracy of the collected responses. Our proposed approach, ACRyLIQ, is evaluated on two real-world Linked Data datasets using workers hired from Amazon Mechanical Turk. We validate the proposed approach in terms of accuracy and compare it against a baseline that estimates reliability using gold-standard tasks. The results demonstrate that our approach achieves high accuracy without using gold-standard tasks.
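The abstract outlines a three-step pipeline: generate topic-specific test questions from DBpedia, score new workers on those questions, and route tasks to the most reliable workers. Below is a minimal Python sketch of that idea using the SPARQLWrapper library against the public DBpedia endpoint. The function names, the true/false question heuristic, and the greedy assignment rule are illustrative assumptions for exposition, not the paper's actual ACRyLIQ implementation.

```python
# Minimal sketch of the cold-start workflow described in the abstract:
# (1) pull facts about a task's topic from DBpedia to build test questions,
# (2) score new workers on those questions, (3) route real tasks to the
# most reliable workers. All names here are illustrative assumptions.
import random
from SPARQLWrapper import SPARQLWrapper, JSON

DBPEDIA = SPARQLWrapper("https://dbpedia.org/sparql")

def fetch_topic_facts(topic_uri, limit=20):
    """Fetch English literal facts about a topic from the public DBpedia endpoint."""
    DBPEDIA.setQuery(f"""
        SELECT ?p ?o WHERE {{
            <{topic_uri}> ?p ?o .
            FILTER(isLiteral(?o) && lang(?o) = "en")
        }} LIMIT {limit}
    """)
    DBPEDIA.setReturnFormat(JSON)
    rows = DBPEDIA.query().convert()["results"]["bindings"]
    return [(topic_uri, r["p"]["value"], r["o"]["value"]) for r in rows]

def make_test_questions(facts):
    """Turn each triple into a true/false statement; roughly half are
    corrupted by swapping in the object of another fact, so the correct
    answer to "is this statement true?" is known in advance."""
    questions = []
    for s, p, o in facts:
        other_objects = [f[2] for f in facts if f[2] != o]
        if other_objects and random.random() < 0.5:
            questions.append(((s, p, random.choice(other_objects)), False))
        else:
            questions.append(((s, p, o), True))
    return questions

def estimate_reliability(worker_answers, questions):
    """Reliability estimate: fraction of test questions answered correctly."""
    correct = sum(a == truth for a, (_, truth) in zip(worker_answers, questions))
    return correct / len(questions)

def assign_tasks(tasks, reliability, k=2):
    """Greedily route every quality-assessment task to the k most reliable workers."""
    ranked = sorted(reliability, key=reliability.get, reverse=True)
    return {task: ranked[:k] for task in tasks}

# Hypothetical usage: test a new worker on a topic, then assign tasks.
facts = fetch_topic_facts("http://dbpedia.org/resource/Bologna")
tests = make_test_questions(facts)
# ...show `tests` to new workers, collect their boolean answers...
reliability = {"workerA": 0.9, "workerB": 0.6}  # from estimate_reliability
print(assign_tasks(["task1", "task2"], reliability, k=1))
```

One design note: generating test questions from DBpedia topics that match the tasks, rather than from hand-built gold standards, is what lets the sketch (and the paper's approach) sidestep the cost of curating gold-standard tasks for each new dataset.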
Original language: English
Publication status: Published - 2016
Externally published: Yes
Event: 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW) - Bologna, Italy
Duration: 19 Nov 2016 - 23 Nov 2016

Conference

Conference: 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW)
Country/Territory: Italy
City: Bologna
Period: 19/11/16 - 23/11/16

Keywords

  • MOLE group_aksw lehmann marx mole simba zaveri
