Human-inspired computational fairness

Steven de Jong, Karl Tuyls

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

In many common tasks for multi-agent systems, assuming individually rational agents leads to inferior solutions. Numerous researchers have found that fairness needs to be considered in addition to individual reward, and have proposed valuable computational models of fairness. In this paper, we argue that there are two opportunities for improvement. First, existing models are not specifically tailored to a class of tasks known as social dilemmas, even though such tasks are quite common in multi-agent systems. Second, the models generally rely on the assumption that all agents can and will adhere to them, which is not always the case. We therefore present a novel computational model, human-inspired computational fairness. When confronted with social dilemmas, humans may apply a number of fully decentralized sanctioning mechanisms to ensure that optimal, fair solutions emerge, even though some participants may decide purely on the basis of individual reward. In this paper, we show how these human mechanisms can be computationally modelled, such that fair, optimal solutions emerge when agents are confronted with social dilemmas.
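The abstract describes the sanctioning mechanisms only informally. As a rough illustration of the general idea (a sketch of ours, not the authors' actual model), the Python snippet below simulates a public goods game, a canonical social dilemma, with purely self-interested learning agents, once without sanctions and once with a simple decentralized peer-punishment rule. The fairness norm, all parameter values, the learning rule, and all function names here are illustrative assumptions; consult the paper itself for the actual mechanisms.

import random

N_AGENTS = 8
MULTIPLIER = 3.0   # pooled contributions are multiplied by this and shared;
                   # MULTIPLIER / N_AGENTS < 1, so free-riding is individually rational
NORM = 0.8         # assumed fairness norm: the contribution level peers expect
PUNISH_COST = 1.0  # cost to the punisher, per unit of a peer's shortfall
PUNISH_FINE = 3.0  # fine to the punished agent, per unit of shortfall
ROUNDS = 300

def payoffs(levels, sanctions):
    """Payoffs for one public goods round; contribution levels lie in [0, 1]."""
    share = sum(levels) * MULTIPLIER / len(levels)
    out = [1.0 - c + share for c in levels]  # endowment normalised to 1
    if sanctions:
        # Fully decentralized sanctioning: every agent individually fines
        # every peer contributing below the norm. Punishing is costly, but
        # the fine exceeds the cost (altruistic punishment).
        for j, cj in enumerate(levels):
            if cj < NORM:
                shortfall = NORM - cj
                for i in range(len(levels)):
                    if i != j:
                        out[i] -= PUNISH_COST * shortfall
                        out[j] -= PUNISH_FINE * shortfall
    return out

def simulate(sanctions, seed=0):
    rng = random.Random(seed)
    levels = [rng.random() for _ in range(N_AGENTS)]
    for _ in range(ROUNDS):
        base = payoffs(levels, sanctions)
        for i in range(N_AGENTS):
            # Purely self-interested update: adopt a perturbed contribution
            # level only if it would have earned agent i strictly more.
            trial = min(1.0, max(0.0, levels[i] + rng.uniform(-0.1, 0.1)))
            alt = levels[:]
            alt[i] = trial
            if payoffs(alt, sanctions)[i] > base[i]:
                levels[i] = trial
    return sum(levels) / N_AGENTS

print("mean contribution, no sanctions:   %.2f" % simulate(False))
print("mean contribution, with sanctions: %.2f" % simulate(True))

Run as-is, the two printed means should differ sharply: without sanctions each agent's marginal return on a contributed unit is MULTIPLIER / N_AGENTS - 1 < 0, so contributions collapse toward zero, whereas with peer punishment the expected fines outweigh the private gain from withholding and contributions stabilise near the norm, mirroring the paper's claim that fair, optimal outcomes can emerge among reward-driven agents under decentralized sanctioning.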
Original language: English
Pages (from-to): 103-126
Number of pages: 24
Journal: Autonomous Agents and Multi-agent Systems
Volume: 22
Issue number: 1
DOI: 10.1007/s10458-010-9122-9
Publication status: Published - 2011

Cite this

Jong, Steven de; Tuyls, Karl. / Human-inspired computational fairness. In: Autonomous Agents and Multi-agent Systems. 2011; Vol. 22, No. 1. pp. 103-126.
@article{e2037168795e4716b2f03b2f9b31439a,
title = "Human-inspired computational fairness",
author = "Jong, {Steven de} and Karl Tuyls",
year = "2011",
doi = "10.1007/s10458-010-9122-9",
language = "English",
volume = "22",
pages = "103--126",
journal = "Autonomous Agents and Multi-agent Systems",
issn = "1387-2532",
publisher = "Springer",
number = "1",
}


TY - JOUR
T1 - Human-inspired computational fairness
AU - Jong, Steven de
AU - Tuyls, Karl
PY - 2011
Y1 - 2011
DO - 10.1007/s10458-010-9122-9
M3 - Article
VL - 22
SP - 103
EP - 126
JO - Autonomous Agents and Multi-agent Systems
JF - Autonomous Agents and Multi-agent Systems
SN - 1387-2532
IS - 1
ER -