Abstract
This paper asks whether large language models (LLMs) can apply the law; it does not ask whether they should. It distinguishes between two interpretations of the ‘can’ question: first, can LLMs apply the law as ordinary individuals do? Second, can LLMs apply the law in the same manner as judges? The study examines D’Almeida’s theory of law application, which distinguishes between inferential and pragmatic law application. It argues that his account of pragmatic law application can be improved, since it does not fully recognize that law application (and rule-following) is a shared, public practice collectively realized by the members of a linguistic community. The study concludes that LLMs cannot apply the law. They cannot apply the law in the inferential sense, because their interaction with the law is merely syntactic, not semantic. Nor can they apply it in the pragmatic sense, because pragmatic law application does not depend on a single agent, whether that agent is a judge, an ordinary citizen, or a non-human entity.
Original language | English |
---|---|
Pages (from-to) | 3605-3614 |
Number of pages | 10 |
Journal | AI and Society |
Volume | 40 |
Issue number | 5 |
DOIs | |
Publication status | Published - 28 Oct 2024 |
Keywords
- artificial intelligence
- large language models
- law application
- rule application
- legal interpretation
- linguistic communities