In this paper, enhancements for the Monte-Carlo Tree Search (MCTS) framework are investigated to play Ms Pac-Man. MCTS is used to find an optimal path for an agent at each turn, determining the move to make based on randomised simulations. Ms Pac-Man is a real-time arcade game in which the protagonist has several independent goals but no conclusive terminal state. Unlike in games such as Chess or Go, there is no state in which the player wins the game. Furthermore, the Pac-Man agent has to compete with a range of different ghost agents, so only limited assumptions can be made about the opponent's behaviour. In order to expand the capabilities of existing MCTS agents, five enhancements are discussed: 1) a variable-depth tree, 2) playout strategies for the ghost team and Pac-Man, 3) including long-term goals in scoring, 4) endgame tactics, and 5) a Last-Good-Reply policy for memorising rewarding moves during playouts. By employing these methods, an average performance gain of 40,962 points is achieved compared to the average score of the top-scoring Pac-Man agent at CIG'11.
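As background to the abstract, the baseline MCTS loop that the enhancements build on can be sketched as plain UCT: selection via UCB1, expansion of one untried move, a randomised playout, and backpropagation of the reward. The `Node`/game interface, the exploration constant, and the toy game below are illustrative assumptions, not the authors' Ms Pac-Man implementation.

```python
import math
import random

class Node:
    """One tree node; assumed generic interface, not the paper's code."""
    def __init__(self, state, parent=None, move=None):
        self.state = state
        self.parent = parent
        self.move = move
        self.children = []
        self.untried = list(state.legal_moves())
        self.visits = 0
        self.total_reward = 0.0

    def ucb1_child(self, c=1.4):
        # Select the child maximising the UCB1 value (exploitation + exploration).
        return max(self.children,
                   key=lambda ch: ch.total_reward / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def uct_search(root_state, iterations=500):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1) Selection: descend while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = node.ucb1_child()
        # 2) Expansion: add one child for an untried move.
        if node.untried:
            move = node.untried.pop()
            node.children.append(Node(node.state.apply(move), node, move))
            node = node.children[-1]
        # 3) Playout: random moves until a terminal state (the paper replaces
        #    this with informed ghost/Pac-Man playout strategies).
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(state.legal_moves()))
        reward = state.reward()
        # 4) Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.total_reward += reward
            node = node.parent
    # Play the most-visited root move.
    return max(root.children, key=lambda ch: ch.visits).move

class Toy:
    """Hypothetical one-step game: move 'a' rewards 1, move 'b' rewards 0."""
    def __init__(self, done=False, r=0.0):
        self.done, self.r = done, r
    def legal_moves(self):
        return [] if self.done else ['a', 'b']
    def apply(self, m):
        return Toy(done=True, r=1.0 if m == 'a' else 0.0)
    def is_terminal(self):
        return self.done
    def reward(self):
        return self.r

best = uct_search(Toy(), iterations=200)  # converges on the rewarding move 'a'
```

The paper's enhancements modify this loop at specific points: variable depth changes when selection stops, playout strategies and the Last-Good-Reply policy replace the uniform-random step 3, and long-term goals and endgame tactics reshape the reward in step 4.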
Title of host publication: 2012 IEEE Conference on Computational Intelligence and Games, CIG 2012
Number of pages: 8
Publication status: Published - 2012
Pepels, T., & Winands, M. H. M. (2012). Enhancements for Monte-Carlo Tree Search in Ms Pac-Man. In 2012 IEEE Conference on Computational Intelligence and Games, CIG 2012 (pp. 265-272). https://doi.org/10.1109/CIG.2012.6374165