Abstract
To enhance the radiotherapy workflow, many artificial intelligence (AI) applications have been proposed. To date, however, only a limited number of these applications have been implemented in clinical practice. A lack of trust, stemming from the inherent black-box characteristics of AI, is often cited as the limiting factor. Explainable AI (xAI) methods are being introduced as a tool to alleviate this lack of trust in otherwise non-transparent systems. To study the effect that xAI has on clinicians' trust, a survey was developed and distributed. Preliminary findings indicate that clinicians do not necessarily mistrust AI; nevertheless, they do appear to find transparency important. xAI could serve as a shared mental model (SMM) between the clinician and the AI to maximize human-AI collaboration. Future work will examine the role that xAI plays in SMMs and how xAI must be designed to fully exploit AI for radiotherapy whilst remaining safe and ethical.
| Original language | English |
|---|---|
| Pages (from-to) | 217-224 |
| Number of pages | 8 |
| Journal | CEUR Workshop Proceedings |
| Volume | 3554 |
| Publication status | Published - 1 Jan 2023 |
| Event | Joint 1st World Conference on eXplainable Artificial Intelligence: Late-Breaking Work, Demos and Doctoral Consortium (xAI-2023: LB-D-DC), Lisbon, Portugal |
| Event duration | 26 Jul 2023 → 28 Jul 2023 |
| Event website | https://xaiworldconference.com/2023/ |
Keywords
- Healthcare
- Implementation
- Radiotherapy
- Shared Mental Models
- Trust
- xAI design