Monte Carlo *-Minimax Search

Marc Lanctot, Abdallah Saffidine, Joel Veness, Christopher Archibald, Mark H. M. Winands

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

Abstract

This paper introduces Monte Carlo *-Minimax Search (MCMS), a Monte Carlo search algorithm for turn-based, stochastic, two-player, zero-sum games of perfect information. The algorithm is designed for the class of densely stochastic games; that is, games where one would rarely expect to sample the same successor state multiple times at any particular chance node. Our approach combines sparse sampling techniques from MDP planning with classic pruning techniques developed for adversarial expectimax planning. We compare and contrast our algorithm to the traditional *-Minimax approaches, as well as MCTS enhanced with Double Progressive Widening, on four games: Pig, EinStein Würfelt Nicht!, Can't Stop, and Ra. Our results show that MCMS can be competitive with enhanced MCTS variants in some domains, while consistently outperforming the equivalent classic approaches given the same amount of thinking time.
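To make the combination described in the abstract concrete, the sketch below illustrates only the sparse-sampling backbone: at each chance node a fixed number of successor states is sampled and averaged instead of enumerating every outcome. This is a minimal, hypothetical Python illustration, not the authors' implementation; it omits the classic *-Minimax (Star1/Star2-style) pruning that MCMS layers on top, and the game interface (`is_terminal`, `is_chance_node`, `sample_chance_outcome`, `legal_actions`, `apply`, `evaluate`, `to_move`) is assumed for the example.

```python
import random

def sparse_expectimax(state, depth, num_samples, rng=None):
    """Depth-limited expectimax where each chance node is estimated from a
    fixed number of sampled outcomes rather than full enumeration (sparse
    sampling). Values are from the max player's point of view.

    NOTE: hypothetical sketch based on the abstract; the game interface and
    function names are assumptions, not the paper's actual code.
    """
    rng = rng or random.Random()
    if depth == 0 or state.is_terminal():
        return state.evaluate()  # heuristic (or exact terminal) value

    if state.is_chance_node():
        # Densely stochastic games: enumerating all outcomes is wasteful, so
        # average the values of a small, fixed number of sampled successors.
        total = 0.0
        for _ in range(num_samples):
            total += sparse_expectimax(state.sample_chance_outcome(rng),
                                       depth - 1, num_samples, rng)
        return total / num_samples

    # Decision node: standard minimax over the legal actions.
    child_values = [sparse_expectimax(state.apply(a), depth - 1, num_samples, rng)
                    for a in state.legal_actions()]
    return max(child_values) if state.to_move() == 0 else min(child_values)
```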
Original language: English
Title of host publication: Proceedings of the 23rd International Joint Conference on Artificial Intelligence
Pages: 580-586
Number of pages: 7
Publication status: Published - 2013
Event: Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI-13) - Beijing, China
Duration: 3 Aug 2013 - 9 Aug 2013
http://ijcai-13.org/

Conference

Conference: Twenty-Third International Joint Conference on Artificial Intelligence
Abbreviated title: IJCAI-13
Country/Territory: China
City: Beijing
Period: 3/08/13 - 9/08/13
Internet address: http://ijcai-13.org/
