Can large language models apply the law?

Henrique Marcos*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

This paper asks whether large language models (LLMs) can apply the law. It does not ask whether LLMs should apply the law. Instead, it distinguishes two interpretations of the ‘can’ question: first, can LLMs apply the law like ordinary individuals? Second, can LLMs apply the law in the same manner as judges? The study examines D’Almeida’s theory of law application, which distinguishes inferential from pragmatic law application. It argues that his account of pragmatic law application can be improved, as it does not fully consider that law application (and rule-following) is a shared, public practice collectively realized by members of a linguistic community. The study concludes that LLMs cannot apply the law. They cannot apply the law in the inferential sense, as they have merely syntactic (not semantic) interaction with the law. They cannot apply the law in the pragmatic sense, as pragmatic law application does not depend on a single agent, whether that agent is a judge, an ordinary citizen, or a non-human entity.
Original language: English
Pages (from-to): 3605-3614
Number of pages: 10
Journal: AI and Society
Volume: 40
Issue number: 5
Publication status: Published - 28 Oct 2024

Keywords

  • artificial intelligence
  • large language models
  • law application
  • rule application
  • legal interpretation
  • linguistic communities
