Abstract
This thesis explores the use of artificial intelligence (AI) in medical image analysis to address challenges in clinical diagnostics, treatment prediction, and disease prognosis. The work emphasizes the importance of explainability and uncertainty estimation in AI models to ensure transparency and reliability in medical applications. It introduces reliable segmentation and detection tools for several medical conditions, including head and neck cancer, carotid artery disease, and renal cysts, and develops diagnostic and predictive tools for idiopathic pulmonary fibrosis, head and neck cancer survival, and post-hepatectomy liver failure. Novel uncertainty estimation methods are integrated into deep neural networks, improving post-processing, performance, and quality control. The work also explores explainability approaches in both handcrafted radiomics and deep learning, introducing new methods such as counterfactual explanations. Finally, the thesis proposes a framework for the methodological evaluation of explanations produced by AI tools in medical image analysis, and a new standard for benchmarking radiomics research to improve its clinical translation.
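The specific uncertainty estimation methods are developed in the thesis chapters themselves; purely as an illustration of the general idea of equipping a segmentation network with per-pixel uncertainty, the sketch below shows one widely used baseline, Monte Carlo dropout, in PyTorch. The network `TinySegNet`, the helper `mc_dropout_predict`, the layer sizes, and the sample count are all illustrative assumptions, not the models or methods used in the thesis.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy segmentation head with a dropout layer, so that
    stochastic forward passes yield different predictions."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1),
            nn.ReLU(),
            nn.Dropout2d(p=0.5),  # kept active at inference for MC dropout
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.body(x)

@torch.no_grad()
def mc_dropout_predict(model, image, n_samples=20):
    """Run several stochastic forward passes; return the mean
    softmax map (prediction) and its variance (uncertainty)."""
    model.train()  # keep dropout layers active during inference
    probs = torch.stack(
        [torch.softmax(model(image), dim=1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.var(dim=0)

# Example: per-pixel prediction and uncertainty for one 64x64 image
model = TinySegNet()
mean_map, var_map = mc_dropout_predict(model, torch.randn(1, 1, 64, 64))
print(mean_map.shape, var_map.shape)  # torch.Size([1, 2, 64, 64]) twice
```

High-variance pixels in `var_map` flag regions where the model is unreliable, which is the kind of signal the thesis exploits for post-processing and quality control.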
Original language | English |
---|---|
Qualification | Doctor of Philosophy |
Awarding Institution | |
Supervisors/Advisors | |
Award date | 9 Sept 2024 |
Place of Publication | Maastricht |
Publisher | |
Print ISBNs | 9789465100982 |
DOIs | |
Publication status | Published - 2024 |
Keywords
- Radiomics
- Explainability
- Uncertainty Estimation
- Clinical Utility of Artificial Intelligence