Deep learning for the fully automated segmentation of the inner ear on MRI

A. Vaidyanathan*, M.F.J.A. van der Lubbe, R.T.H. Leijenaar, M. van Hoof, F. Zerka, B. Miraglio, S. Primakov, A.A. Postma, T.D. Bruintjes, M.A.L. Bilderbeek, H. Sebastiaan, P.F.M. Dammeijer, V. van Rompaey, H.C. Woodruff, W. Vos, S. Walsh, R. van de Berg, P. Lambin

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › Peer-reviewed


Segmentation of anatomical structures is valuable in a variety of tasks, including 3D visualization, surgical planning, and quantitative image analysis. Manual segmentation, however, is time-consuming and suffers from intra- and inter-observer variability. To develop a deep-learning approach for the fully automated segmentation of the inner ear on MRI, a 3D U-Net was trained on 944 MRI scans with manually segmented inner ears as the reference standard. The model was validated on an independent, multicentric dataset of 177 MRI scans from three different centers, and further evaluated on a clinical validation set of eight MRI scans with severe changes in the morphology of the labyrinth. Across images from the three centers, the 3D U-Net achieved a high mean Dice Similarity Coefficient (DSC) of 0.8790, a high True Positive Rate (91.5%), and low False Discovery and False Negative Rates (14.8% and 8.49%, respectively). The model also performed well on the clinical validation dataset, with a DSC of 0.8768. The proposed auto-segmentation model is equivalent to human readers and provides a reliable, consistent, and efficient method for inner ear segmentation, which can be used in a variety of clinical applications such as surgical planning and quantitative image analysis.
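For reference, the overlap metrics reported in the abstract (DSC, True Positive Rate, False Discovery Rate, False Negative Rate) can all be derived from the voxel-wise confusion counts between a predicted and a reference binary mask. A minimal NumPy sketch (the function name and interface are illustrative, not taken from the paper):

```python
import numpy as np

def segmentation_metrics(pred, ref):
    """Compute overlap metrics between a predicted and a reference
    binary segmentation mask (arrays of equal shape, nonzero = foreground)."""
    pred = np.asarray(pred).astype(bool)
    ref = np.asarray(ref).astype(bool)
    tp = np.logical_and(pred, ref).sum()    # true positive voxels
    fp = np.logical_and(pred, ~ref).sum()   # false positive voxels
    fn = np.logical_and(~pred, ref).sum()   # false negative voxels
    dsc = 2 * tp / (2 * tp + fp + fn)  # Dice Similarity Coefficient
    tpr = tp / (tp + fn)               # True Positive Rate (sensitivity)
    fdr = fp / (tp + fp)               # False Discovery Rate
    fnr = fn / (tp + fn)               # False Negative Rate
    return dsc, tpr, fdr, fnr
```

For 3D MRI data the masks would simply be boolean volumes of shape (depth, height, width); the formulas are unchanged.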
Original language: English
Article number: 2885
Number of pages: 14
Journal: Scientific Reports
Issue number: 1
Publication status: Published - 3 Feb 2021

