Abstract
The rising adoption of AI-driven solutions in medical imaging brings an emerging need for strategies that introduce explainability as a key aspect of the trustworthiness of AI models. This chapter addresses the explainability techniques most commonly used in medical image analysis, namely methods generating visual, example-based, textual, and concept-based explanations. To obtain visual explanations, we explore backpropagation- and perturbation-based methods. To yield example-based explanations, we focus on prototype-, distance-, and retrieval-based techniques, as well as counterfactual explanations. Finally, to produce textual and concept-based explanations, we delve into image captioning and testing with concept activation vectors, respectively. The chapter aims to provide an understanding of the conceptual underpinnings, advantages, and limitations of each method, and to interpret the explanations they generate in the context of medical image analysis.
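As a concrete illustration of the backpropagation-based visual explanations mentioned above, the minimal sketch below computes a vanilla gradient saliency map for a classifier's predicted class. The choice of model (a torchvision ResNet-18), the input file name, and the preprocessing are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch of a backpropagation-based (vanilla gradient) saliency map.
# Model, preprocessing, and "scan.png" are hypothetical placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("scan.png").convert("RGB")        # hypothetical input image
x = preprocess(image).unsqueeze(0).requires_grad_(True)

# Backpropagate the score of the predicted class to the input pixels.
scores = model(x)
scores[0, scores.argmax()].backward()

# Saliency: maximum absolute gradient across colour channels, per pixel.
saliency = x.grad.abs().max(dim=1)[0].squeeze()      # shape (224, 224)
```

Perturbation-based alternatives instead measure how the class score changes when image regions are occluded, trading the single backward pass above for many forward passes.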
Original language | English |
---|---|
Title of host publication | Trustworthy AI in Medical Imaging |
Editors | Marco Lorenzi, Maria A. Zuluaga |
Publisher | Elsevier |
Chapter | 16 |
Pages | 347-366 |
Number of pages | 20 |
ISBN (Electronic) | 9780443237614 |
ISBN (Print) | 9780443237607 |
DOIs | |
Publication status | Published - 2025 |
Keywords
- Concept-based explanations
- Example-based explanations
- Explainable artificial intelligence
- In-model explainability
- Post-hoc explainability
- Textual explanations
- Visual explanations