Abstract
An important line of research attempts to explain convolutional neural network (CNN) image classifier predictions and intermediate layer representations in terms of human-understandable concepts. In this work, we expand on previous works in the literature that use annotated concept datasets to extract interpretable feature space directions, and we propose an unsupervised post-hoc method that extracts a disentangling interpretable basis by looking for the rotation of the feature space that explains sparse one-hot thresholded transformed representations of pixel activations. We experiment with existing popular CNNs and demonstrate the effectiveness of our method in extracting an interpretable basis across network architectures and training datasets. We extend the basis interpretability metrics found in the literature and show that intermediate layer representations become more interpretable when transformed to the bases extracted with our method. Finally, using these metrics, we compare the bases extracted with our method to bases derived with a supervised approach and find that, in one respect, the proposed unsupervised approach has a strength that constitutes a limitation of the supervised one; we also give potential directions for future research.
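To make the idea concrete, the following is a minimal PyTorch sketch of the kind of optimization the abstract describes: an orthogonally constrained (rotation) transform of a layer's per-pixel activations, trained against a sparsity surrogate so that thresholded transformed activations approach a one-hot pattern. The `RotationBasis` module, `sparsity_loss`, the threshold `tau`, and the L1/L-infinity ratio are illustrative assumptions for this sketch, not the paper's actual objective.

```python
# Illustrative sketch (not the paper's exact method): learn an orthogonal
# rotation of a CNN layer's feature space so that softly thresholded
# per-pixel transformed activations become sparse / close to one-hot.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class RotationBasis(nn.Module):
    def __init__(self, num_channels: int):
        super().__init__()
        # Bias-free linear map constrained to be orthogonal, i.e. a rotation
        # (or reflection) of the feature space.
        self.rotation = orthogonal(nn.Linear(num_channels, num_channels, bias=False))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) intermediate CNN activations from a hooked layer.
        b, c, h, w = feats.shape
        pixels = feats.permute(0, 2, 3, 1).reshape(-1, c)  # one row per pixel
        return self.rotation(pixels)                       # rotated per-pixel activations

def sparsity_loss(z: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # Hypothetical surrogate: threshold the transformed activations and
    # penalize how far each pixel's vector is from one-hot (L1 / L-inf ratio,
    # which equals 1 exactly when at most one entry survives the threshold).
    a = torch.relu(z - tau)
    l1 = a.sum(dim=1)
    linf = a.max(dim=1).values
    return ((l1 + 1e-8) / (linf + 1e-8)).mean()

# Usage: collect activations from a chosen layer (e.g. via a forward hook)
# of a frozen, pretrained CNN, then optimize only the rotation.
basis = RotationBasis(num_channels=512)
opt = torch.optim.Adam(basis.parameters(), lr=1e-3)
feats = torch.randn(4, 512, 7, 7)  # stand-in for hooked activations
opt.zero_grad()
loss = sparsity_loss(basis(feats))
loss.backward()
opt.step()
```

The orthogonality constraint is what keeps the learned transform a basis change rather than an arbitrary linear probe; in this sketch it is enforced via PyTorch's built-in orthogonal parametrization.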
| Original language | English |
|---|---|
| Pages (from-to) | 1496-1510 |
| Journal | IEEE Transactions on Artificial Intelligence |
| Volume | 5 |
| Issue number | 4 |
| Early online date | 1 Jan 2023 |
| DOIs | |
| Publication status | Published - 1 Apr 2024 |
Keywords
- Annotations
- Artificial intelligence
- Detectors
- Explainable Artificial Intelligence (XAI)
- Feature extraction
- Interpretable Artificial Intelligence (IAI)
- Interpretable Basis
- Measurement
- Semantics
- Training
- Unsupervised Learning