Explainable AI for medical image analysis

Carolina Brás, Helena Montenegro, Leon Y. Cai, Valentina Corbetta, Yuankai Huo, Wilson Silva, Jaime S. Cardoso, Bennett A. Landman, Ivana Išgum

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic

Abstract

The rising adoption of AI-driven solutions in medical imaging brings an emerging need for strategies that introduce explainability as a key aspect of the trustworthiness of AI models. This chapter addresses the most commonly used explainability techniques in medical image analysis, namely methods generating visual, example-based, textual, and concept-based explanations. To obtain visual explanations, we explore backpropagation- and perturbation-based methods. To yield example-based explanations, we focus on prototype-, distance-, and retrieval-based techniques, as well as counterfactual explanations. Finally, to produce textual and concept-based explanations, we delve into image captioning and testing with concept activation vectors, respectively. The chapter aims to provide an understanding of the conceptual underpinnings, advantages, and limitations of each method, and to interpret the explanations they generate in the context of medical image analysis.
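As a concrete illustration of the first family of methods mentioned in the abstract (backpropagation-based visual explanations), the minimal sketch below computes a vanilla gradient saliency map with PyTorch. It is not taken from the chapter: the pretrained ResNet-18, the file name scan.png, and the preprocessing pipeline are assumptions for demonstration only; a medical imaging model would replace the classifier.

```python
# Illustrative sketch (not from the chapter) of a backpropagation-based
# visual explanation: a vanilla gradient saliency map. The model, image
# path, and preprocessing below are assumptions for demonstration only.
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained natural-image classifier as a stand-in for a medical imaging model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "scan.png" is a hypothetical input file.
image = Image.open("scan.png").convert("RGB")
x = preprocess(image).unsqueeze(0)
x.requires_grad_(True)

# Forward pass, then backpropagate the predicted class score to the input pixels.
scores = model(x)
predicted_class = scores.argmax(dim=1).item()
scores[0, predicted_class].backward()

# The saliency map is the per-pixel magnitude of the input gradient,
# highlighting regions whose change most affects the predicted score.
saliency = x.grad.abs().max(dim=1)[0].squeeze(0)  # shape: (224, 224)
print(saliency.shape)
```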
Original language: English
Title of host publication: Trustworthy AI in Medical Imaging
Editors: Marco Lorenzi, Maria A. Zuluaga
Publisher: Elsevier
Chapter: 16
Pages: 347-366
Number of pages: 20
ISBN (Electronic): 9780443237614
ISBN (Print): 9780443237607
DOIs
Publication status: Published - 2025

Keywords

  • Concept-based explanations
  • Example-based explanations
  • Explainable artificial intelligence
  • In-model explainability
  • Post-hoc explainability
  • Textual explanations
  • Visual explanations
