Comparison of Rapid Action Value Estimation Variants for General Game Playing

Chiara F. Sironi*, Mark H. M. Winands

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

Abstract

General Game Playing (GGP) aims at creating computer programs able to play any arbitrary game at an expert level given only its rules. The lack of game-specific knowledge and the necessity of learning a strategy online have made Monte-Carlo Tree Search (MCTS) a suitable method to tackle the challenges of GGP. An efficient search-control mechanism can substantially increase the performance of MCTS. The RAVE strategy and its more recent variant, GRAVE, have been proposed for this reason. In this paper we further investigate the use of GRAVE for GGP and compare its performance with the more established RAVE strategy and with a new variant, called HRAVE, that uses more global information. Experiments show that for some games GRAVE and HRAVE perform better than RAVE, with GRAVE being the most promising one overall.
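The abstract above compares variants of the Rapid Action Value Estimation (RAVE) search-control strategy for MCTS. As a minimal illustrative sketch (not taken from the paper), the core idea of a RAVE-style estimator is to blend a node's Monte-Carlo value with its AMAF (all-moves-as-first) value, weighted by a parameter beta that decays as the node accumulates visits; the `bias` parameter and the exact beta schedule below follow the commonly used formulation by Gelly and Silver and are assumptions, not the paper's specific settings:

```python
def rave_value(q, n, amaf_q, amaf_n, bias=1e-5):
    """Blend a node's Monte-Carlo estimate with its AMAF estimate.

    q, n           -- mean reward and visit count of the action in this node
    amaf_q, amaf_n -- AMAF mean reward and visit count for the same action
    bias           -- equivalence parameter controlling how fast beta decays

    beta is close to 1 when n is small (AMAF guides the early search) and
    shrinks toward 0 as n grows (the true Monte-Carlo average takes over).
    GRAVE differs only in *where* the AMAF statistics come from: it reads
    them from the closest ancestor whose visit count exceeds a threshold,
    rather than from the current node.
    """
    if amaf_n == 0:
        return q  # no AMAF data yet; fall back to the plain estimate
    beta = amaf_n / (amaf_n + n + bias * amaf_n * n)
    return (1.0 - beta) * q + beta * amaf_q
```

With `n = 0` the AMAF value is returned unchanged, and for very large `n` the result converges to the plain Monte-Carlo mean `q`.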
Original language: English
Title of host publication: 2016 IEEE Conference on Computational Intelligence and Games (CIG)
Publisher: IEEE
Pages: 309-316
Number of pages: 8
DOIs
Publication status: Published - Sept 2016
Event: 2016 IEEE Conference on Computational Intelligence and Games (CIG), Petros M. Nomikos Conference Centre, Santorini, Greece
Duration: 20 Sept 2016 - 23 Sept 2016

Publication series

Series: IEEE Conference on Computational Intelligence and Games
ISSN: 2325-4270

Conference

Conference: 2016 IEEE Conference on Computational Intelligence and Games (CIG)
Country/Territory: Greece
City: Santorini
Period: 20/09/16 - 23/09/16

Keywords

  • MONTE-CARLO TREE-SEARCH
  • STRATEGIES
  • OPERATORS

