Quantitative Comparison of Monte-Carlo Dropout Uncertainty Measures for Multi-class Segmentation

Robin Camarasa, Daniel Bos, Jeroen Hendrikse, Paul Nederkoorn, Eline Kooi, Aad Van Der Lugt, Marleen De Bruijne

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


Abstract

Over the past decade, deep learning has become the gold standard for automatic medical image segmentation. Every segmentation task carries an underlying uncertainty due to factors such as image resolution and annotation protocol. Therefore, a number of methods and metrics have been proposed to quantify the uncertainty of neural networks, mostly based on Bayesian deep learning, ensemble learning, or output probability calibration. The aim of our research is to assess how reliable the uncertainty metrics found in the literature are. We propose a quantitative and statistical comparison of uncertainty measures based on the relevance of the uncertainty map for predicting misclassification. Four uncertainty metrics were compared over a set of 144 models. The application studied is the segmentation of the lumen and vessel wall of carotid arteries based on multiple sequences of magnetic resonance (MR) images in multi-center data.
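To illustrate the kind of voxel-wise uncertainty map compared in the paper, the following is a minimal sketch of one common Monte-Carlo dropout measure, predictive entropy, computed from T stochastic forward passes. The random softmax maps stand in for the outputs of a dropout-enabled segmentation network; the shapes, the number of samples T, and the use of entropy as the example metric are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for T stochastic forward passes of an MC-dropout network:
# per-voxel softmax maps of shape (T, C, H, W) for C classes.
T, C, H, W = 20, 3, 4, 4
logits = rng.normal(size=(T, C, H, W))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Mean predicted probability over the T dropout samples, per voxel.
mean_probs = probs.mean(axis=0)  # shape (C, H, W)

# Predictive entropy: high where the averaged prediction is ambiguous,
# yielding a per-voxel uncertainty map of shape (H, W).
entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=0)

print(entropy.shape)  # (4, 4)
```

In practice such a map is then evaluated, as in the paper, by how well high-uncertainty voxels coincide with misclassified voxels.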
Original language: English
Title of host publication: Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis. UNSURE 2020, GRAIL 2020
Editors: C.H. Sudre, Hamid Fehri, Tal Arbel, Christian F. Baumgartner, Adrian Dalca, Ryutaro Tanno, Koen van Leemput, William M. Wells, Aristeidis Sotiras, Bartlomiej Papiez, Enzo Ferrante, Sarah Parisot
Publisher: Springer, Cham
Pages: 32-41
Number of pages: 10
ISBN (Electronic): 978-3-030-60365-6
ISBN (Print): 978-3-030-60364-9
DOIs
Publication status: Published - 5 Oct 2020

Publication series

Series: Lecture Notes in Computer Science
Volume: 12443
ISSN: 0302-9743
