Human-inspired computational fairness

Steven de Jong*, Karl Tuyls

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

In many common tasks for multi-agent systems, assuming individually rational agents leads to inferior solutions. Numerous researchers have found that fairness needs to be considered in addition to individual reward, and have proposed valuable computational models of fairness. In this paper, we argue that there are two opportunities for improvement. First, existing models are not specifically tailored to a class of tasks known as social dilemmas, even though such tasks are quite common in the context of multi-agent systems. Second, the models generally rely on the assumption that all agents can and will adhere to them, which is not always the case. We therefore present a novel computational model: human-inspired computational fairness. When confronted with social dilemmas, humans apply a number of fully decentralized sanctioning mechanisms to ensure that optimal, fair solutions emerge, even when some participants decide purely on the basis of individual reward. In this paper, we show how these human mechanisms may be computationally modelled, such that fair, optimal solutions emerge when agents are confronted with social dilemmas.
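The abstract refers to fully decentralized sanctioning mechanisms without detailing them here. As a rough sketch of the general idea only, and not of the paper's actual model, the following simulates a repeated public goods game (a canonical social dilemma) in which fairness-driven agents pay a cost to fine free-riders; the strategy labels and all numeric parameters are assumptions chosen for illustration.

```python
# Illustrative only: a repeated public goods game in which "punisher"
# agents pay a cost to fine free-riders. All strategy labels and
# parameter values are assumptions for this sketch, not from the paper.

N = 8          # group size (assumed)
R = 1.6        # public-good multiplier; R < N, so free-riding pays individually
FINE = 3.0     # fine each punisher imposes on each free-rider (assumed)
COST = 1.0     # cost a punisher pays per sanction (assumed)
ROUNDS = 200

def play_round(strategies):
    """One contribution round followed by decentralized punishment."""
    contrib = [1.0 if s == "punisher" else 0.0 for s in strategies]
    share = sum(contrib) * R / len(strategies)     # everyone gets an equal share
    payoff = [share - c for c in contrib]          # contributing costs 1
    defectors = [i for i, c in enumerate(contrib) if c == 0.0]
    punishers = [i for i, s in enumerate(strategies) if s == "punisher"]
    for p in punishers:
        for d in defectors:
            payoff[d] -= FINE   # sanction the free-rider
            payoff[p] -= COST   # sanctioning is costly, hence altruistic
    return payoff

if __name__ == "__main__":
    # Half fairness-driven punishers, half individually rational free-riders.
    strategies = ["punisher"] * 4 + ["free_rider"] * 4
    totals = [0.0] * N
    for _ in range(ROUNDS):
        for i, p in enumerate(play_round(strategies)):
            totals[i] += p
    print("avg punisher payoff per round:   %.2f" % (sum(totals[:4]) / 4 / ROUNDS))
    print("avg free-rider payoff per round: %.2f" % (sum(totals[4:]) / 4 / ROUNDS))
```

Under these assumed parameters, a free-rider ends up far worse off than a contributing punisher once sanctions are applied, so an individually rational agent is driven toward the fair, contributing outcome; that is the qualitative effect the abstract describes.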
Original language: English
Pages (from-to): 103-126
Number of pages: 24
Journal: Autonomous Agents and Multi-agent Systems
Volume: 22
Issue number: 1
Publication status: Published - 2011