On the legal responsibility of artificially intelligent agents: Addressing three misconceptions

Research output: Contribution to journal › Article › Academic › Peer-reviewed

Abstract

This paper tackles three misconceptions that recur in discussions of the legal responsibility of artificially intelligent entities, namely that they
(a) cannot be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act.
(b) should not be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act.
(c) should not be held legally responsible for their actions, because to do so would allow other (human or corporate) agents to ‘hide’ behind the AI and escape responsibility that way, while they are the ones who should be held responsible.

(a) is a misconception not only because (positive) law is a social construct, but also because there is no such thing as ‘real’ agency. The latter point is also why (b) is misconceived. The arguments against misconceptions (a) and (b) imply that legal responsibility can be constructed in different ways, including in ways that hold both artificially intelligent and other (human or corporate) agents responsible, contrary to misconception (c). Accordingly, this paper concludes that there is more flexibility in constructing the responsibility of artificially intelligent entities than is sometimes assumed. This offers more freedom to law- and policymakers, but it also requires openness, creativity, and a clear normative vision of the aims they want to achieve.
Original language: English
Pages (from-to): 35-43
Number of pages: 9
Journal: Technology and Regulation
Volume: 2021
Publication status: Published - 12 Jul 2021
