Improved reinforcement learning with curriculum

Joseph West*, Frederic Maire, Cameron Browne, Simon Denman

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Humans tend to learn complex abstract concepts faster if examples are presented in a structured manner. For instance, when learning how to play a board game, one of the first concepts usually learned is how the game ends, i.e. the actions that lead to a terminal state (win, lose or draw). The advantage of learning endgames first is that once the actions leading to a terminal state are understood, it becomes possible to incrementally learn the consequences of actions that are further away from a terminal state; we call this an end-game-first curriculum. The state-of-the-art machine learning player for general board games, AlphaZero by Google DeepMind, does not employ a structured training curriculum. Whilst DeepMind's approach is effective, their method of generating experiences by self-play is resource intensive, costing millions of dollars in computational resources. We have developed a new method, the end-game-first training curriculum, which, when applied to the self-play/experience-generation loop, reduces the computational resources required to achieve the same level of learning. Our approach improves performance by not generating experiences that are expected to be of low training value. The end-game-first curriculum enables significant savings in processing resources and is potentially applicable to other problems that can be framed in terms of a game.
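Taken at face value, the abstract describes a self-play loop in which, early in training, only experiences close to a terminal state are retained, with the retention window gradually widening as training proceeds. The Python sketch below is one possible reading of that idea, not the authors' implementation; every name and parameter in it (play_self_play_game, endgame_first_filter, horizon, growth) is invented purely for illustration.

import random


def play_self_play_game(policy, max_moves=60):
    # Stand-in for self-play: returns a list of (state, outcome) pairs
    # ordered from the first move to the terminal position.
    length = random.randint(10, max_moves)
    outcome = random.choice([-1, 0, 1])  # win / draw / loss from player 1's view
    return [(f"state_{i}", outcome) for i in range(length)]


def endgame_first_filter(trajectory, horizon):
    # Keep only the last `horizon` positions of the game, i.e. those whose
    # distance to the terminal state is at most the current curriculum window.
    return trajectory[-horizon:]


def train(policy, iterations=100, games_per_iter=32, initial_horizon=4, growth=2):
    horizon = initial_horizon
    for _ in range(iterations):
        batch = []
        for _ in range(games_per_iter):
            trajectory = play_self_play_game(policy)
            batch.extend(endgame_first_filter(trajectory, horizon))
        # policy.update(batch) would go here in a real learner.
        horizon += growth  # widen the window so later training covers earlier moves
    return policy


if __name__ == "__main__":
    train(policy=None)

The intended effect is that early iterations spend compute only on positions whose value can be learned quickly (those adjacent to terminal states), deferring the generation of experiences far from the endgame until the policy can evaluate them usefully.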

Original language: English
Article number: 113515
Number of pages: 15
Journal: Expert Systems with Applications
Volume: 158
Publication status: Published - 15 Nov 2020

Keywords

  • Curriculum learning
  • Reinforcement learning
  • Monte Carlo tree search
  • General game playing
  • NEURAL-NETWORKS
  • GAME
  • GO
