Accelerating Implementation of Artificial Intelligence in Radiotherapy through Explainability

Luca Heising*

*Corresponding author for this work

Research output: Contribution to journal › Conference article in journal › Academic

Abstract

To enhance the radiotherapy workflow, many artificial intelligence (AI) applications have been proposed. To date, only a limited number of these applications have been implemented into clinical practice. A lack of trust, stemming from the inherent black-box characteristics of AI, is often cited as the limiting factor. Explainable AI (xAI) methods are being introduced as a tool to alleviate this lack of trust in non-transparent systems. To study the effect that xAI has on clinicians' trust, a survey was developed and distributed. Preliminary findings suggest that clinicians do not necessarily mistrust AI; however, they do consider transparency important. xAI could serve as a shared mental model (SMM) between the clinician and AI to maximize human-AI collaboration. Future work will examine the role that xAI plays in SMMs and how xAI must be designed to fully exploit AI for radiotherapy whilst remaining safe and ethical.
Original language: English
Pages (from-to): 217-224
Number of pages: 8
Journal: CEUR Workshop Proceedings
Volume: 3554
Publication status: Published - 1 Jan 2023
Event: Joint 1st World Conference on eXplainable Artificial Intelligence: Late-Breaking Work, Demos and Doctoral Consortium, xAI-2023: LB-D-DC - Lisbon, Portugal
Duration: 26 Jul 2023 - 28 Jul 2023
https://xaiworldconference.com/2023/

Keywords

  • Healthcare
  • Implementation
  • Radiotherapy
  • Shared Mental Models
  • Trust
  • xAI design

