Malpractice by the Autonomous AI Physician

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

AI is currently capable of making autonomous medical decisions, like diagnosis and prognosis, without the input of humans. Liability for this “practice of medicine” by an Autonomous AI Physician currently falls in a tort law gap when it cannot be sufficiently connected to humans involved with the AI because neither human-centric nor product-centric causes of action provide a mechanism for recovery. To fill this liability gap, this Article proposes a framework that governs liability under existing tort law by focusing on control of the AI’s injury-causing output to assign liability to creators, organizations, individual providers, and the Autonomous AI Physician with limited legal personhood. Other scholars have suggested bridging this gap by either assigning all tort liability to humans or circumventing tort law altogether. These approaches either subvert tort law’s primary goals, require significant structural change, or offer only a piecemeal solution to the problem. The control framework laid out in this Article provides a functional and comprehensive solution for governing injuries caused by the Autonomous AI Physician that both balances the benefits and risks of technological innovation in healthcare and advances tort law’s compensation and deterrence goals.
Original language: English
Journal: The Illinois Journal of Law, Technology & Policy
Publication status: Published - 2023
Externally published: Yes