Predictive analytics in health care: how can we know it works?

Ben Van Calster*, Laure Wynants, Dirk Timmerman, Ewout W. Steyerberg, Gary S. Collins

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

41 Citations (Web of Science)

Abstract

There is increasing awareness that the methodology and findings of research should be transparent. This includes studies using artificial intelligence to develop predictive algorithms that make individualized diagnostic or prognostic risk predictions. We argue that it is paramount to make the algorithm behind any prediction publicly available. This allows independent external validation, assessment of performance heterogeneity across settings and over time, and algorithm refinement or updating. Online calculators and apps may aid uptake if accompanied by sufficient information. For algorithms based on "black box" machine learning methods, software for algorithm implementation is a must. Hiding algorithms for commercial exploitation is unethical, because there is no possibility to assess whether algorithms work as advertised or to monitor when and how algorithms are updated. Journals and funders should demand maximal transparency for publications on predictive algorithms, and clinical guidelines should only recommend publicly available algorithms.
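The abstract's case for transparency rests on independent external validation: only when the full model equation is published can others check its performance on their own data. A minimal sketch of what that looks like, assuming a hypothetical published logistic regression risk model (the coefficients and validation sample below are illustrative, not from the article):

```python
# Sketch of external validation of a fully published risk model.
# The model (intercept, coefficient) and patient data are hypothetical.
import math

def predict_risk(x, intercept=-2.0, coef=0.8):
    """A transparently published logistic model: risk = expit(b0 + b1 * x)."""
    return 1.0 / (1.0 + math.exp(-(intercept + coef * x)))

def c_statistic(risks, outcomes):
    """Discrimination: probability a random event is ranked above a random non-event."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    wins = [(e > n) + 0.5 * (e == n) for e in events for n in nonevents]
    return sum(wins) / len(wins)

def calibration_in_the_large(risks, outcomes):
    """Calibration: observed event rate minus mean predicted risk (0 is ideal)."""
    return sum(outcomes) / len(outcomes) - sum(risks) / len(risks)

# Illustrative external-validation sample: (predictor value, observed outcome)
data = [(0.5, 0), (1.2, 0), (2.0, 1), (3.1, 1), (0.8, 0), (2.7, 1), (1.5, 0), (2.2, 0)]
risks = [predict_risk(x) for x, _ in data]
outcomes = [y for _, y in data]

print(round(c_statistic(risks, outcomes), 3))            # discrimination in the new setting
print(round(calibration_in_the_large(risks, outcomes), 3))  # calibration in the new setting
```

With only a hidden "black box" score, neither check is possible; with the equation (or runnable software) available, any group can repeat this on local data and detect performance heterogeneity across settings or over time.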

Original language: English
Pages (from-to): 1651-1654
Number of pages: 4
Journal: Journal of the American Medical Informatics Association
Volume: 26
Issue number: 12
DOIs
Publication status: Published - Dec 2019

Keywords

  • artificial intelligence
  • external validation
  • machine learning
  • model performance
  • predictive analytics
  • risk prediction
  • cancer
  • models
