Criminal behavior and accountability of artificial intelligence systems

Research output: Thesis › Doctoral Thesis (Internal)


Abstract

AI systems have the capacity to act in ways that society would generally consider 'criminal'. Yet it can be argued that they lack (criminal) agency and any sense of it. In the future, however, humans could develop expectations of norm-conforming behaviour from machines. Criminal law might not be the right answer to AI-related harm, even though holding AI systems directly liable could be useful to a certain extent. This thesis explores the issue of criminal responsibility of AI systems by asking whether such a legal framework would be both needed and feasible. It aims to understand how to deal with the (apparent) conflict between AI and the most classical notions of criminal law. The emergence of AI is not the first time that criminal law theory has had to confront new scientific developments. Nevertheless, the debate on the criminal liability of AI systems is somewhat different: it is deeply introspective. In other words, discussing the liability of new artificial agents brings about pioneering perspectives on the liability of human agents. As such, the thesis poses questions that find their answers in one's own beliefs on what is human and what is not, and, ultimately, on what is right and what is wrong.
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution:
  • Maastricht University
  • University of Florence
Supervisors/Advisors:
  • Klip, André, Supervisor
  • Papa, M., Supervisor, External person
Award date: 24 Nov 2023
Place of Publication: Maastricht
Print ISBNs: 9789047301721
Electronic ISBNs: 9789400113381
Publication status: Published - 2023

Keywords

  • Artificial intelligence
  • criminal liability
  • new technologies and law
