This article argues that it may sometimes be useful to hold autonomous agents, and not only their users, responsible for their acts. Autonomous agents are here understood as computer programs that interact with the outside world without human interference, including ‘intelligent’ weapons and self-driving cars. The argument rests on an analogy between human beings and autonomous agents: if humans can be held responsible, then in principle so can autonomous agents. This argument is convincing only if the relevant similarities between human beings and autonomous agents outweigh the relevant differences, and an important part of the argument is therefore devoted to showing precisely this. The main point is that the argument does not claim that autonomous agents are actually like human beings, but rather that human beings are actually like autonomous agents. The analogy can lead to the conclusion that autonomous agents can be held responsible only if it is assumed that human beings can be held responsible even if they are, as the argument assumes, like autonomous agents. The article argues that this is indeed the case, which shifts the discussion from the question whether human beings and autonomous agents can be held responsible and liable to the question whether it is desirable to do so. The answer to this last question is guardedly affirmative: it depends on the circumstances, but yes, it is sometimes desirable to hold human beings and autonomous agents responsible and liable for what they did, and therefore it sometimes makes sense to do so.
Title of host publication: Waves in contract and liability law in three decades of ius commune
Editors: A. Keirse, M. Loos
Place of publication: Cambridge-Antwerp-Portland
Number of pages: 24
Publication status: Published - Dec 2017
Series: Ius Commune Europaeum