TY - JOUR
T1 - The Epistemic Cost of Opacity
T2 - How the Use of Artificial Intelligence Undermines the Knowledge of Medical Doctors in High-Stakes Contexts
AU - Schmidt, Eva
AU - Putora, Paul Martin
AU - Fijten, Rianne
N1 - Funding Information:
This paper was supported by the Volkswagen Foundation, as part of the project Explainable Intelligent Systems (EIS), project number AZ 98510, and by the Federal Ministry of Education and Research of Germany and the state of North Rhine-Westphalia as part of the Lamarr Institute for Machine Learning and Artificial Intelligence, Dortmund, Germany.
Publisher Copyright:
© The Author(s) 2024.
PY - 2025/1/13
Y1 - 2025/1/13
N2 - Artificially intelligent (AI) systems used in medicine are often very reliable and accurate, but at the price of being increasingly opaque. This raises the question of whether a system’s opacity undermines the ability of medical doctors to acquire knowledge on the basis of its outputs. We investigate this question by focusing on a case in which a patient’s risk of breast cancer recurrence is predicted by an opaque AI system. We argue that, given the system’s opacity, the possibility of malfunctioning AI systems, practitioners’ inability to check the correctness of the systems’ outputs, and the high stakes of such cases, the knowledge of medical practitioners is indeed undermined. They are lucky to form true beliefs based on the AI systems’ outputs, and knowledge is incompatible with luck. We supplement this claim with a specific version of the safety condition on knowledge, Safety*. We argue that, relative to the perspective of the medical doctor in our example case, his relevant beliefs could easily be false, despite his evidence that the AI system functions reliably. Assuming that Safety* is necessary for knowledge, the practitioner therefore doesn’t know. We address three objections to our proposal before turning to practical suggestions for improving the epistemic situation of medical doctors.
AB - Artificially intelligent (AI) systems used in medicine are often very reliable and accurate, but at the price of being increasingly opaque. This raises the question of whether a system’s opacity undermines the ability of medical doctors to acquire knowledge on the basis of its outputs. We investigate this question by focusing on a case in which a patient’s risk of breast cancer recurrence is predicted by an opaque AI system. We argue that, given the system’s opacity, the possibility of malfunctioning AI systems, practitioners’ inability to check the correctness of the systems’ outputs, and the high stakes of such cases, the knowledge of medical practitioners is indeed undermined. They are lucky to form true beliefs based on the AI systems’ outputs, and knowledge is incompatible with luck. We supplement this claim with a specific version of the safety condition on knowledge, Safety*. We argue that, relative to the perspective of the medical doctor in our example case, his relevant beliefs could easily be false, despite his evidence that the AI system functions reliably. Assuming that Safety* is necessary for knowledge, the practitioner therefore doesn’t know. We address three objections to our proposal before turning to practical suggestions for improving the epistemic situation of medical doctors.
KW - Artificial Intelligence
KW - Black-box AI
KW - Explainable AI
KW - Healthcare
KW - Medical AI
KW - Safety Condition
U2 - 10.1007/s13347-024-00834-9
DO - 10.1007/s13347-024-00834-9
M3 - Article
SN - 2210-5433
VL - 38
JO - Philosophy & Technology
JF - Philosophy & Technology
IS - 1
M1 - 5
ER -