Liability Rules for AI-Related Harm: Law and Economics Lessons for a European Approach

Shu Li*, M.G. Faure, Katri Havu

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

The potential of artificial intelligence (AI) has grown exponentially in recent years, which not only generates value but also creates risks. AI systems are characterised by their complexity, opacity and autonomy in operation. Now and in the foreseeable future, AI systems will be operating in a manner that is not fully autonomous. This signifies that providing appropriate incentives to the human parties involved is still of great importance in reducing AI-related harm. Therefore, liability rules should be adapted in such a way as to provide the relevant parties with incentives to efficiently reduce the social costs of potential accidents. Relying on a law and economics approach, we address the theoretical question of what kind of liability rules should be applied to different parties along the value chain related to AI. In addition, we critically analyse the ongoing policy debates in the European Union, discussing the risk that European policymakers will fail to determine efficient liability rules with regard to different stakeholders.
Original language: English
Article number: PII S1867299X22000265
Pages (from-to): 618-634
Number of pages: 17
Journal: European Journal of Risk Regulation
Volume: 13
Issue number: 4
Early online date: 16 Sept 2022
Publication status: Published - Dec 2022

Keywords

  • AI-related harm
  • artificial intelligence
  • deterrence
  • developers
  • law and economics
  • liability rules
  • operators
  • risk-bearing
