Decoding US Tort Liability in Healthcare’s Black-Box Era: Lessons From the EU

Mindy Duffourc, Sara Gerke

Research output: Contribution to journal › Article › Academic

Abstract

The rapid development of sophisticated artificial intelligence (AI) tools in healthcare presents new possibilities for improving medical treatment and general health. Currently, such AI tools can perform a wide range of health-related tasks, from specialized autonomous systems that diagnose diabetic retinopathy to general-use generative models like ChatGPT that answer users’ health-related questions. At the same time, significant liability concerns arise as medical professionals and consumers increasingly turn to AI for health information. This is particularly true for black-box AI: while these systems may enhance the AI’s capability and accuracy, they also operate without transparency, making it difficult or even impossible to understand how they arrive at a particular result.

The current liability framework is not fully equipped to address the unique challenges posed by black-box AI’s lack of transparency, leaving patients, consumers, healthcare providers, AI manufacturers, and policymakers unsure about who will be responsible for AI-caused medical injuries. Of course, the United States (US) is not the only jurisdiction faced with a liability framework that is out of tune with the current realities of black-box AI technology in the health domain. The European Union (EU) has also been grappling with the challenges that black-box AI poses to traditional liability frameworks and recently proposed new liability Directives to overcome some of these challenges.

As the first article to analyze the liability frameworks governing medical injuries caused by black-box AI in both the US and EU, we demystify the structure and relevance of foreign law in this area to provide practical guidance to courts, litigators, and other stakeholders seeking to understand the application and limitations of current and newly proposed liability law in this domain. We reveal that remarkably similar principles will operate to govern liability for medical injuries caused by black-box AI and that, as a result, both jurisdictions face similar liability challenges. These similarities offer an opportunity for the US to learn from the EU’s newly developed approach to governing liability for AI-caused injuries. In particular, we identify four valuable lessons from the EU’s approach: (1) a broad approach to AI liability fails to provide solutions to some challenges posed by black-box AI in healthcare; (2) traditional concepts of human fault pose significant challenges in cases involving black-box AI; (3) product liability frameworks must consider the unique features of black-box AI; and (4) evidentiary rules should address the difficulties that claimants will face in cases involving medical injuries caused by black-box AI.
Original language: English
Article number: 1
Pages (from-to): 1-70
Number of pages: 70
Journal: Stanford Technology Law Review
Volume: 27
Issue number: 1
Publication status: Published - 6 Feb 2024