Machine learning algorithms for outcome prediction in (chemo)radiotherapy: An empirical comparison of classifiers

Timo M. Deist*, Frank J. W. M. Dankers, Gilmer Valdes, Robin Wijsman, I-Chow Hsu, Cary Oberije, Tim Lustberg, Johan van Soest, Frank Hoebers, Arthur Jochems, Issam El Naqa, Leonard Wee, Olivier Morin, David R. Raleigh, Wouter Bots, Johannes H. Kaanders, Jose Belderbos, Margriet Kwint, Timothy Solberg, Rene Monshouwer, Johan Bussink, Andre Dekker, Philippe Lambin

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer review


Purpose: Machine learning classification algorithms (classifiers) for prediction of treatment response are becoming more popular in the radiotherapy literature. The general machine learning literature provides evidence favoring some classifier families (random forest, support vector machine, gradient boosting) in terms of classification performance. The purpose of this study is to compare such classifiers specifically on (chemo)radiotherapy datasets and to estimate their average discriminative performance for radiation treatment outcome prediction.
Methods: We collected 12 datasets (3496 patients) from prior studies on post-(chemo)radiotherapy toxicity, survival, or tumor control, with clinical, dosimetric, or blood biomarker features, from multiple institutions and for different tumor sites, that is, (non-)small-cell lung cancer, head and neck cancer, and meningioma. Six common classification algorithms with built-in feature selection (decision tree, random forest, neural network, support vector machine, elastic net logistic regression, LogitBoost) were applied to each dataset using the popular open-source R package caret. The R code and documentation for the analysis are available online (). All classifiers were run on each dataset in a 100-times-repeated nested fivefold cross-validation with hyperparameter tuning. Performance metrics (AUC, calibration slope and intercept, accuracy, Cohen's kappa, and Brier score) were computed. We ranked classifiers by AUC to determine which classifier is likely to also perform well in future studies. We simulated the benefit for potential investigators of selecting a classifier for a new dataset based on our study (preselection based on other datasets) or of estimating the best classifier for the dataset at hand (set-specific selection based on information from the new dataset), compared with uninformed classifier selection (random selection).
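The nested cross-validation described in the Methods can be sketched as follows. This is a minimal Python/scikit-learn analogue of the paper's R caret workflow (the study itself used caret, not scikit-learn); the synthetic data, classifier choice, and hyperparameter grid are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Synthetic binary-outcome dataset standing in for a (chemo)radiotherapy cohort.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Inner loop: hyperparameters are tuned on the training folds only,
# so tuning never sees the outer test fold.
inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
tuned_rf = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_features": [2, 4, 6]},  # illustrative grid
    scoring="roc_auc",
    cv=inner_cv,
)

# Outer loop: fivefold cross-validation estimates the discriminative
# performance (AUC) of the tuned classifier on held-out data.
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)
auc_scores = cross_val_score(tuned_rf, X, y, scoring="roc_auc", cv=outer_cv)
print(len(auc_scores), float(auc_scores.mean()))
```

In the study this whole procedure is additionally repeated 100 times with reshuffled folds, which is omitted here for brevity.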
Results: Random forest (best in 6/12 datasets) and elastic net logistic regression (best in 4/12 datasets) showed the best overall discrimination, but there was no single best classifier across all datasets. Both classifiers had a median AUC rank of 2. Preselection and set-specific selection each yielded a significant average AUC improvement of 0.02 over random selection, with average AUC rank improvements of 0.42 and 0.66, respectively.
Conclusion: Random forest and elastic net logistic regression yield higher discriminative performance in (chemo)radiotherapy outcome and toxicity prediction than the other studied classifiers. Thus, one of these two classifiers should be the first choice for investigators when building classification models, or serve as a benchmark against which to compare one's own modeling results. Our results also show that an informed preselection of classifiers based on existing datasets can improve discrimination over random selection.
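The AUC-rank comparison underlying the Results can be illustrated with a small, self-contained sketch. The classifier names match the study, but the per-dataset AUC values below are synthetic placeholders, not the paper's results; the ranking logic (rank 1 = best AUC within a dataset, then take the median rank across datasets) is the only point being shown.

```python
import statistics

classifiers = ["random_forest", "elastic_net", "svm"]

# One dict per dataset: classifier -> cross-validated AUC (synthetic numbers).
aucs = [
    {"random_forest": 0.71, "elastic_net": 0.69, "svm": 0.66},
    {"random_forest": 0.64, "elastic_net": 0.67, "svm": 0.61},
    {"random_forest": 0.75, "elastic_net": 0.72, "svm": 0.70},
]

def ranks_per_dataset(row):
    # Rank classifiers within one dataset; rank 1 is the highest AUC.
    ordered = sorted(classifiers, key=lambda c: row[c], reverse=True)
    return {c: ordered.index(c) + 1 for c in classifiers}

# Median AUC rank of each classifier across all datasets.
median_rank = {
    c: statistics.median(ranks_per_dataset(row)[c] for row in aucs)
    for c in classifiers
}
print(median_rank)  # → {'random_forest': 1, 'elastic_net': 2, 'svm': 3}
```

With real data, "preselection" amounts to picking the classifier with the best median rank on other datasets, while "set-specific selection" estimates the best classifier from the new dataset itself.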
Original language: English
Pages (from-to): 3449-3459
Number of pages: 11
Journal: Medical Physics
Issue number: 7
Publication status: Published - 1 Jul 2018


  • classification
  • machine learning
  • outcome prediction
  • predictive modeling
  • radiotherapy
  • Chemoradiotherapy/adverse effects
  • Prognosis
  • Area Under Curve
  • Humans
  • Logistic Models
  • Machine Learning
  • Neoplasms/diagnosis
  • Software
  • Decision Trees
  • Neural Networks (Computer)


