Supporting High-Uncertainty Decisions through AI and Logic-Style Explanations

Federico Maria Cau, Hanna Hauptmann, Lucio Davide Spano, Nava Tintarev

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


Abstract

A common criterion for Explainable AI (XAI) is to support users in establishing appropriate trust in the AI: rejecting advice when it is incorrect and accepting advice when it is correct. Previous findings suggest that explanations can cause over-reliance on the AI (overly accepting advice). Evoking appropriate trust through explanations is even more challenging for decision-making tasks that are difficult for both humans and AI. For this reason, we study decision-making by non-experts in the high-uncertainty domain of stock trading. We compare the effectiveness of three explanation styles (influenced by inductive, abductive, and deductive reasoning) and the role of AI confidence in terms of a) the users' reliance on the XAI interface elements (charts with indicators, AI prediction, explanation), b) the correctness of the decision (task performance), and c) the agreement with the AI's prediction. In contrast to previous work, we examine interactions between different aspects of decision-making, including AI correctness, and the combined effects of AI confidence and explanation styles. Our results show that specific explanation styles (abductive and deductive) improve users' task performance compared to inductive explanations when AI confidence is high. In other words, these explanation styles were able to evoke correct decisions (both positive and negative) when the system was certain. Under this condition, the agreement between the user's decision and the AI's prediction confirms this finding, showing a significant increase in agreement when the AI is correct. This suggests that both explanation styles are suitable for evoking appropriate trust in a confident AI. Our findings further indicate a need to consider AI confidence as a criterion for including or excluding explanations from AI interfaces. In addition, this paper highlights the importance of carefully selecting an explanation style according to the characteristics of the task and data.
Original language: English
Title of host publication: IUI 2023 - Proceedings of the 28th International Conference on Intelligent User Interfaces
Publisher: Association for Computing Machinery
Pages: 251-263
Number of pages: 13
ISBN (Electronic): 9798400701061
Publication status: Published - 27 Mar 2023
Event: 28th Annual Conference on Intelligent User Interfaces - University of Technology, Sydney, Australia
Duration: 27 Mar 2023 - 31 Mar 2023
https://iui.acm.org/2023/

Conference

Conference: 28th Annual Conference on Intelligent User Interfaces
Abbreviated title: IUI 2023
Country/Territory: Australia
City: Sydney
Period: 27/03/23 - 31/03/23
Internet address: https://iui.acm.org/2023/

Keywords

  • Abductive
  • AI confidence
  • Deductive
  • Inductive
  • Logical reasoning
  • Random forest
  • Stock market prediction
  • XAI
