Abstract
Graph Neural Networks (GNNs) are powerful tools for graph-related tasks, excelling at processing graph-structured data while maintaining permutation invariance. However, the node representations they learn are opaque, which hinders interpretability. This paper introduces a framework that addresses this limitation by explaining GNN predictions: given any GNN prediction, it returns a concise subgraph as an explanation. Using Saliency Maps, a gradient-based attribution technique, we enhance interpretability by assigning importance scores to entities within the knowledge graph via backpropagation. Evaluated on the Drug Repurposing Knowledge Graph, the Graph Attention Network achieved a Hits@5 score of 0.451 and a Hits@10 score of 0.672, while GraphSAGE achieved the highest recall, at 0.992. Our framework underscores the efficacy and interpretability of GNNs, which is crucial in complex scenarios such as drug repurposing. Illustrated through an Alzheimer’s disease case study, our approach provides meaningful and comprehensible explanations for GNN predictions. This work contributes to advancing the transparency and utility of GNNs in real-world applications.
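The core idea of the abstract, scoring entity importance by backpropagating a prediction score to the inputs, can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the paper's implementation: the `Encoder`, the toy graph, and the dot-product link scorer are hypothetical stand-ins, and the setup assumes PyTorch and PyTorch Geometric.

```python
# Minimal sketch of gradient-based saliency for a GNN link prediction.
# Assumptions: PyTorch + PyTorch Geometric; the encoder, toy graph, and
# dot-product scorer are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv


class Encoder(torch.nn.Module):
    """Two-layer GraphSAGE encoder producing node embeddings."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid_dim)
        self.conv2 = SAGEConv(hid_dim, hid_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


# Toy knowledge graph: 4 nodes with 8-dim features, a few directed edges.
# requires_grad=True lets gradients flow back to the input features.
x = torch.randn(4, 8, requires_grad=True)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])

model = Encoder(in_dim=8, hid_dim=16)
z = model(x, edge_index)

# Score one candidate link (e.g. drug -> disease) with a dot product.
head, tail = 0, 2
score = (z[head] * z[tail]).sum()

# Saliency: gradient of the prediction score w.r.t. the input features;
# the gradient magnitude per node serves as its importance score.
score.backward()
node_importance = x.grad.abs().sum(dim=1)
print(node_importance)
```

Ranking nodes by `node_importance` and keeping the top-scoring ones around the predicted link yields the kind of concise explanatory subgraph the abstract describes.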
| Original language | English |
| --- | --- |
| Pages (from-to) | 46-55 |
| Number of pages | 10 |
| Journal | CEUR Workshop Proceedings |
| Volume | 3890 |
| Publication status | Published - 2024 |
| Event | 15th International Conference on Semantic Web Applications and Tools for Health Care and Life Sciences, SWAT4HCLS 2024 - Hybrid, Leiden, Netherlands |
| Event duration | 26 Feb 2024 → 29 Feb 2024 |
| Event URL | https://www.swat4ls.org/workshops/leiden2024/ |
Keywords
- Alzheimer’s Disease
- Drug Repurposing
- Explainable AI (XAI)
- Graph Neural Networks (GNNs)
- Knowledge Graphs (KGs)
- Saliency Maps