Considerations for applying logical reasoning to explain neural network outputs

Federico Maria Cau, Lucio Davide Spano, Nava Tintarev

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

Abstract

We discuss the impact of presenting explanations to people for Artificial Intelligence (AI) decisions powered by Neural Networks, according to three types of logical reasoning (inductive, deductive, and abductive). We start from examples in the existing literature on explaining artificial neural networks and observe that abductive reasoning is (unintentionally) the most common default in user studies comparing the quality of explanation techniques. We discuss whether this may be because this reasoning type balances the technical challenge of generating explanations against the effectiveness of the explanations for users. Finally, by illustrating how an original (abductive) explanation can be converted into the other two reasoning types, we identify the considerations needed to support these kinds of transformations.
Original language: English
Title of host publication: Proceedings of the Italian Workshop on Explainable Artificial Intelligence
Pages: 96-103
Number of pages: 8
Volume: 2742
Publication status: Published - 1 Jan 2020
Event: 2020 Italian Workshop on Explainable Artificial Intelligence - Online, Torino, Italy
Duration: 25 Nov 2020 - 26 Nov 2020

Publication series

Series: CEUR Workshop Proceedings
ISSN: 1613-0073

Workshop

Workshop: 2020 Italian Workshop on Explainable Artificial Intelligence
Abbreviated title: XAI.it 2020
Country/Territory: Italy
City: Torino
Period: 25/11/20 - 26/11/20

Keywords

  • Explainable User Interfaces
  • Reasoning
  • XAI