Learning with Whom to Communicate Using Relational Reinforcement Learning

Marc J. V. Ponsen, Tom Croonenborghs, Karl Tuyls, Jan Ramon, Kurt Driessens, H. Jaap van den Herik, Eric O. Postma

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic

Abstract

Relational reinforcement learning is a promising direction within reinforcement learning research. It upgrades reinforcement learning techniques by using relational representations for states, actions, and learned value functions or policies, allowing natural representations and abstractions of complex tasks. Multi-agent systems are characterized by their relational structure and present a good example of such a complex task. In this article, we show how relational reinforcement learning can be a useful tool for learning in multi-agent systems. We study this approach in more detail for one important aspect of multi-agent systems, namely learning a communication policy for cooperative systems (e.g., resource distribution). Communication between agents in realistic multi-agent systems can be assumed to be costly, limited, and unreliable. We perform a number of experiments that highlight the conditions under which relational representations can be beneficial when these constraints are taken into account.

Keywords: optimal policy, reinforcement learning, action space, multi-agent system, Markov decision process
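To make the idea of relational abstraction in a "whom to communicate with" task concrete, the sketch below applies plain tabular Q-learning over a lifted relational feature instead of over concrete agent identities. The agent names, the has/2 predicate, the lifted feature, the message cost, and all helper functions are illustrative assumptions made for this sketch; they are not the representation or algorithm used in the chapter.

```python
# Minimal sketch (not the authors' system): Q-learning over a relational
# abstraction of "whom to send a costly message to" in a toy resource task.
import random
from collections import defaultdict

AGENTS = ["a1", "a2", "a3"]

def sample_state():
    """Ground relational state: one fact saying which agent holds the resource."""
    return {("has", random.choice(AGENTS), "res")}

def lifted_feature(state, target):
    """Abstract away agent identities: does the chosen target hold the resource?"""
    return ("target_has_resource", ("has", target, "res") in state)

def reward(state, target, msg_cost=0.2):
    """+1 for messaging the resource holder, minus a fixed communication cost."""
    return (1.0 if ("has", target, "res") in state else 0.0) - msg_cost

Q = defaultdict(float)          # one value per abstract feature, not per agent
alpha, epsilon = 0.1, 0.2

for episode in range(2000):
    state = sample_state()
    # epsilon-greedy choice of the message target
    if random.random() < epsilon:
        target = random.choice(AGENTS)
    else:
        target = max(AGENTS, key=lambda a: Q[lifted_feature(state, a)])
    feat = lifted_feature(state, target)
    # one-step update; no bootstrapping is needed in this single-decision toy task
    Q[feat] += alpha * (reward(state, target) - Q[feat])

print(dict(Q))  # two abstract entries instead of one per concrete agent/state pair
```

The point of the sketch is that the learned table contains one entry per abstract feature value rather than one per concrete agent and state combination, which is what makes relational representations attractive when communication is costly and the number of agents grows.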
Original language: English
Title of host publication: Interactive Collaborative Information Systems
Editors: Robert Babuska, Frans C. A. Groen
Publisher: Springer
Pages: 45-63
Number of pages: 19
Publication status: Published - 2010

Publication series

Series: Studies in Computational Intelligence
