Reinforcement Learning Transfer Using a Sparse Coded Inter-Task Mapping

Haitham Bou Ammar, Matthew E. Taylor, Karl Tuyls, Gerhard Weiss

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

Abstract

Reinforcement learning agents can successfully learn in a variety of difficult tasks. A fundamental problem is that they may learn slowly in complex environments, inspiring the development of speedup methods such as transfer learning. Transfer improves learning by reusing learned behaviors in similar tasks, usually via an inter-task mapping, which defines how two tasks are related. This paper proposes a novel transfer learning technique that autonomously constructs an inter-task mapping using a novel combination of sparse coding, sparse projection learning, and sparse pseudo-input Gaussian processes. Experiments show successful transfer of information between two very different domains: the mountain car task and the pole swing-up task. The paper empirically shows that the learned inter-task mapping can be used to (1) improve the performance of a learned policy given a fixed number of samples, (2) reduce the time the learning algorithms need to converge to a policy given a fixed number of samples, and (3) converge faster to a near-optimal policy given a large number of samples.

Keywords: reinforcement learning; action space; convergence time; sparse coding; reward function
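To make the approach concrete, the following is a minimal Python sketch (not the authors' implementation) of the pipeline the abstract describes: sparse-code states sampled from the source task, then regress paired target-task states onto those codes to obtain an inter-task mapping. scikit-learn's DictionaryLearning and GaussianProcessRegressor are off-the-shelf stand-ins for the paper's sparse projection learning and sparse pseudo-input Gaussian processes, and the sampled data, the pairing of samples, and all hyperparameters are illustrative assumptions.

# Sketch of a sparse-coded inter-task mapping, under the assumptions above.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical state samples: mountain car states are (position, velocity);
# pole swing-up states are (angle, angular velocity). The paper's method for
# pairing samples across tasks is not reproduced here; we assume row i of
# source_states corresponds to row i of target_states.
source_states = rng.uniform(-1.0, 1.0, size=(200, 2))  # mountain car
target_states = rng.uniform(-1.0, 1.0, size=(200, 2))  # pole swing-up

# Step 1: learn a sparse code (overcomplete dictionary) for source states.
dictionary = DictionaryLearning(n_components=8, alpha=0.5, random_state=0)
source_codes = dictionary.fit_transform(source_states)

# Step 2: regress target states onto the sparse codes. A plain GP regressor
# substitutes for the paper's sparse pseudo-input Gaussian processes.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gp.fit(source_codes, target_states)

# The learned inter-task mapping: encode a new source state, then map it
# into the target task's state space.
new_source_state = rng.uniform(-1.0, 1.0, size=(1, 2))
mapped_target_state = gp.predict(dictionary.transform(new_source_state))
print(mapped_target_state)

In the method the abstract describes, such a mapping would then be used to translate behavior learned in the source task into the target task, giving the target-task learner a better starting point than learning from scratch.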
Original language: English
Title of host publication: Multi-Agent Systems
Subtitle of host publication: EUMAS 2011
Editors: M. Cossentino, M. Kaisers, K. Tuyls, G. Weiss
Publisher: Springer, Berlin, Heidelberg
Pages: 1-16
ISBN (Electronic): 978-3-642-34799-3
ISBN (Print): 978-3-642-34798-6
DOIs
Publication status: Published - 2012

Publication series

Series: Lecture Notes in Computer Science
Volume: 7541
ISSN: 0302-9743
