TY - JOUR
T1 - Evaluating FAIR maturity through a scalable, automated, community-governed framework
AU - Wilkinson, Mark D.
AU - Dumontier, Michel
AU - Sansone, Susanna-Assunta
AU - Santos, Luiz Olavo Bonino Da Silva
AU - Prieto, Mario
AU - Batista, Dominique
AU - McQuilton, Peter
AU - Kuhn, Tobias
AU - Rocca-Serra, Philippe
AU - Crosas, Merce
AU - Schultes, Erik
N1 - Funding Information:
M.D.W. is funded by the Isaac Peral/Marie Curie cofund with the Universidad Politécnica de Madrid, Ministerio de Economía y Competitividad grant number TIN2014-55993-RM, and the European Joint Programme on Rare Diseases (H2020-EU 825575). Throughout phases 1 and 2 of the work, S.A.-S., P.M., D.M. and P.R.-S. have been funded by grants awarded to S.A.-S. from the UK BBSRC and Research Councils (BB/L024101/1; BB/L005069/1), EU (H2020-EU 634107; H2020-EU 654241; H2020-EU 676559; H2020-EU 824087), IMI (116060; 802750), NIH (U54 AI117925; 1U24AI117966-01; 1OT3OD025459-01; 1OT3OD025467-01; 1OT3OD025462-01), and from the Wellcome Trust (212930/Z/18/Z; 208381/A/17/Z). M.D. is supported by grants from NWO (400.17.605; 628.011.011), NIH (3OT3TR002027-01S1; 1OT3OD025467-01; 1OT3OD025464-01), and ELIXIR, the research infrastructure for life-science data. M.P. was supported by the UPM Isaac Peral/Marie Curie cofund, and funding from the Dutch TechCenter for Life Sciences DP. L.O.B.S. and E.S. are supported by the Dutch Ministry of Education, Culture and Science (Ministerie van Onderwijs, Cultuur en Wetenschap), the Netherlands Organisation for Scientific Research (Nederlandse Organisatie voor Wetenschappelijk Onderzoek), and the Dutch TechCenter for Life Sciences. We thank the NBDC/DBCLS BioHackathon series, where many of these MIs and their tests were designed, and we particularly wish to acknowledge the participation of the Dataverse team, especially Julian Gautier and Derek Murphy, at IQSS, Harvard, in addition to Todd Vision from Data Dryad.
Publisher Copyright:
© 2019, The Author(s).
PY - 2019/9/20
Y1 - 2019/9/20
N2 - Transparent evaluations of FAIRness are increasingly required by a wide range of stakeholders, from scientists to publishers, funding agencies and policy makers. We propose a scalable, automatable framework to evaluate digital resources that encompasses measurable indicators, open-source tools, and participation guidelines, which come together to accommodate domain-relevant, community-defined FAIR assessments. The components of the framework are: (1) Maturity Indicators - community-authored specifications that delimit a specific automatically measurable FAIR behavior; (2) Compliance Tests - small Web apps that test digital resources against individual Maturity Indicators; and (3) the Evaluator, a Web application that registers, assembles, and applies community-relevant sets of Compliance Tests against a digital resource, and provides a detailed report about what a machine "sees" when it visits that resource. We discuss the technical and social considerations of FAIR assessments, and how these translate to our community-driven infrastructure. We then illustrate how the output of the Evaluator tool can serve as a roadmap to assist data stewards to incrementally and realistically improve the FAIRness of their resources.
U2 - 10.1038/s41597-019-0184-5
DO - 10.1038/s41597-019-0184-5
M3 - Article
C2 - 31541130
SN - 2052-4463
VL - 6
JO - Scientific data
JF - Scientific data
IS - 1
M1 - 174
ER -