An Evolutionary Dynamical Analysis of Multi-Agent Learning in Iterated Games

K.P. Tuyls, P.J. 't Hoen, B. Vanschoenwinkel

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

In this paper, we investigate reinforcement learning (RL) in multi-agent systems (MAS) from an evolutionary dynamical perspective. Typical of a MAS is that the environment is not stationary and the Markov property does not hold, which requires the agents to be adaptive. RL is a natural approach to model the learning of the individual agents. These learning algorithms are, however, known to be sensitive to the correct choice of parameter settings even in single-agent systems. This issue is more prevalent in the MAS case due to the changing interactions among the agents. It is largely an open question for a developer of a MAS how to design the individual agents such that, through learning, the agents as a collective arrive at good solutions. We show that modeling RL in a MAS from an evolutionary game theoretic point of view is a new and potentially successful way to guide learning agents to the solution most suitable for their task at hand. We show how evolutionary dynamics (ED) from evolutionary game theory can help the developer of a MAS make good choices of parameter settings for the RL algorithms used. The ED essentially predict the equilibrium outcomes of the MAS in which the agents use individual RL algorithms. More specifically, we show how the ED predict the learning trajectories of Q-learners in iterated games. Moreover, we apply our results to (an extension of) the COllective INtelligence framework (COIN). COIN is a proven engineering approach for learning cooperative tasks in MASs: the utilities of the agents are re-engineered to contribute to the global utility. We show how the improved results for MAS RL in COIN, and in a developed extension, are predicted by the ED.
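As a rough illustration of the idea (not taken from the paper), the following self-contained Python sketch lets two independent Boltzmann Q-learners play an iterated 2x2 game and compares their policy trajectory with the two-population replicator dynamics, which stand in here for the evolutionary dynamics the abstract refers to. The Prisoner's Dilemma payoffs, learning rate, temperature, and step counts are illustrative assumptions.

# Minimal sketch: Q-learning trajectories vs. a replicator-dynamics prediction
# on an iterated 2x2 game. All parameters below are illustrative assumptions.
import numpy as np

# Row player's payoff matrix for a Prisoner's Dilemma (actions: 0 = cooperate, 1 = defect).
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])
B = A.T  # symmetric game: column player's payoffs

def boltzmann(q, tau):
    """Softmax action probabilities for Q-values q at temperature tau."""
    z = np.exp(q / tau)
    return z / z.sum()

def q_learning_trajectory(steps=20000, alpha=0.005, tau=1.0, seed=0):
    """Policy trajectory of two independent stateless Q-learners on the repeated game."""
    rng = np.random.default_rng(seed)
    q1, q2 = np.zeros(2), np.zeros(2)
    traj = []
    for _ in range(steps):
        p1, p2 = boltzmann(q1, tau), boltzmann(q2, tau)
        a1, a2 = rng.choice(2, p=p1), rng.choice(2, p=p2)
        # Q-update with the received payoff as reward (no next state in a repeated game).
        q1[a1] += alpha * (A[a1, a2] - q1[a1])
        q2[a2] += alpha * (B[a1, a2] - q2[a2])
        traj.append((p1[0], p2[0]))  # probability of cooperating for each agent
    return np.array(traj)

def replicator_trajectory(steps=20000, dt=0.005, x0=0.5, y0=0.5):
    """Two-population replicator dynamics on the same game (deterministic prediction)."""
    x, y = x0, y0  # probability of cooperating for the row / column player
    traj = []
    for _ in range(steps):
        px, py = np.array([x, 1 - x]), np.array([y, 1 - y])
        fx, fy = A @ py, B.T @ px           # expected payoff per action for each player
        x += dt * x * (fx[0] - px @ fx)     # growth proportional to payoff above average
        y += dt * y * (fy[0] - py @ fy)
        traj.append((x, y))
    return np.array(traj)

if __name__ == "__main__":
    ql = q_learning_trajectory()
    rd = replicator_trajectory()
    print("Q-learning  final P(cooperate):", ql[-1])
    print("Replicator  final P(cooperate):", rd[-1])

Under these settings both the learned policies and the replicator prediction drift toward mutual defection, the game's unique equilibrium; the paper's evolutionary game theoretic analysis is what makes this kind of prediction precise for Q-learning in iterated games.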
Original language: English
Pages (from-to): 115-153
Journal: Autonomous Agents and Multi-Agent Systems
Volume: 12(1)
Publication status: Published - 1 Jan 2006
