Unsupervised Interpretable Basis Extraction for Concept-Based Visual Explanations

Alexandros Doumanoglou, Stylianos Asteriadis, Dimitrios Zarpalas

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

An important line of research attempts to explain CNN image-classifier predictions and intermediate-layer representations in terms of human-understandable concepts. In this work, we expand on previous works in the literature that use annotated concept datasets to extract interpretable feature-space directions, and we propose an unsupervised post-hoc method that extracts a disentangling interpretable basis by searching for the rotation of the feature space that explains sparse, one-hot, thresholded transformed representations of pixel activations. We experiment with existing popular CNNs and demonstrate the effectiveness of our method in extracting an interpretable basis across network architectures and training datasets. We extend the basis interpretability metrics found in the literature and show that intermediate-layer representations become more interpretable when transformed to the bases extracted with our method. Finally, using these interpretability metrics, we compare the bases extracted with our method against bases derived with a supervised approach and find that, in one aspect, the proposed unsupervised approach has a strength that constitutes a limitation of the supervised one; we also give potential directions for future research.
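The abstract describes the core idea only at a high level: search for a rotation of the feature space under which thresholded, transformed pixel activations become sparse and close to one-hot. The minimal sketch below illustrates that idea under stated assumptions; the orthogonal parameterization, the entropy and sparsity loss terms, and the threshold `tau` are illustrative choices, not the authors' actual objective.

```python
import torch

# Hypothetical pixel-activation matrix: N activation vectors of dimension D,
# e.g. flattened spatial positions of an intermediate CNN feature map.
N, D = 4096, 64
X = torch.randn(N, D)

# Parameterize a rotation R = exp(A - A^T) via the matrix exponential of a
# skew-symmetric matrix, so R stays orthogonal throughout optimization.
A = torch.zeros(D, D, requires_grad=True)
opt = torch.optim.Adam([A], lr=1e-2)

tau = 0.0  # activation threshold (assumption: simple ReLU-style cut-off)

for step in range(200):
    R = torch.matrix_exp(A - A.T)        # orthogonal by construction
    Z = torch.relu(X @ R.T - tau)        # thresholded transformed activations
    p = Z / (Z.sum(dim=1, keepdim=True) + 1e-8)
    # Entropy penalty pushes each pixel's transformed activation towards a
    # one-hot (single dominant direction) pattern; the second term encourages
    # overall sparsity of the thresholded activations.
    entropy = -(p * (p + 1e-8).log()).sum(dim=1).mean()
    sparsity = Z.mean()
    loss = entropy + 0.1 * sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

# Rows of the learned R serve as candidate interpretable basis directions.
R = torch.matrix_exp(A - A.T).detach()
```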
Original language: English
Journal: IEEE Transactions on Artificial Intelligence
Issue number: 4
DOIs
Publication status: E-pub ahead of print - 1 Jan 2023

Keywords

  • Annotations
  • Artificial intelligence
  • Detectors
  • Explainable Artificial Intelligence (XAI)
  • Feature extraction
  • Interpretable Artificial Intelligence (IAI)
  • Interpretable Basis
  • Measurement
  • Semantics
  • Training
  • Unsupervised Learning
