Effects of AI and Logic-Style Explanations on Users’ Decisions Under Different Levels of Uncertainty

Federico Maria Cau*, Hanna Hauptmann, Lucio Davide Spano, Nava Tintarev

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Existing eXplainable Artificial Intelligence (XAI) techniques support people in interpreting AI advice. However, although previous work evaluates users' understanding of explanations, the factors influencing decision support are largely overlooked in the literature. This article addresses this gap by studying the impact of user uncertainty, AI correctness, and the interaction between AI uncertainty and explanation logic-styles for classification tasks. We conducted two separate studies: one asking participants to recognize handwritten digits and one to classify the sentiment of reviews. To assess decision making, we analyzed task performance, agreement with the AI suggestion, and the user's reliance on the XAI interface elements. Participants made their decisions relying on three pieces of information in the XAI interface (image or text instance, AI prediction, and explanation). Each participant was shown one explanation style (between-participants design) drawn from three styles of logical reasoning (inductive, deductive, and abductive). This allowed us to study how different levels of AI uncertainty influence the effectiveness of each explanation style. The results show that user uncertainty and AI correctness significantly affected users' classification decisions across the analyzed metrics. In both domains (images and text), users relied mainly on the instance to decide. Users were usually overconfident in their choices, and this effect was more pronounced for text. Furthermore, inductive-style explanations led to overreliance on the AI advice in both domains; they were the most persuasive, even when the AI was incorrect. The abductive and deductive styles had more complex effects, depending on the domain and the level of AI uncertainty.
Original language: English
Article number: 22
Number of pages: 42
Journal: Transactions on Interactive Intelligent Systems
Volume: 13
Issue number: 4
DOIs
Publication status: Published - 8 Dec 2023

Keywords

  • AI correctness
  • AI uncertainty
  • CNNs
  • Explainable AI
  • explanations
  • intelligent user interfaces
  • logical reasoning
  • MNIST
  • neural networks
  • user uncertainty
  • Yelp Reviews
