Playout Policy Adaptation (PPA) is a state-of-the-art strategy that has been proposed to control the playouts in Monte-Carlo Tree Search (MCTS). PPA has been successfully applied to many two-player, sequential-move games. This paper further evaluates this strategy in General Game Playing (GGP) by first reformulating it for simultaneous-move games. Next, it presents five enhancements for the strategy, four of which have been previously successfully applied to a related MCTS playout strategy, the Move-Average Sampling Technique (MAST). Experiments on a heterogeneous set of games show three enhancements to have a positive effect on PPA: (i) updating the policy for all players proportionally to their payoffs instead of updating only the policy of the winner, (ii) collecting statistics for n-grams of moves instead of single moves only, and (iii) discounting the backpropagated payoffs depending on the depth of the playout. Results also show enhanced PPA variants to be competitive with MAST for small search budgets, and better for larger search budgets. The use of an \(\epsilon\)-greedy selection of moves and of after-move decay of statistics, instead, seems to have a detrimental effect on PPA.

Keywords: Monte-Carlo Tree Search · Playout Policy Adaptation · General Game Playing
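The core mechanism summarized above can be sketched in a few lines: a playout policy keeps a weight per move, samples playout moves through a Gibbs (softmax) distribution, and after each playout shifts weight toward the moves that were played. This is a minimal illustrative sketch, not the paper's implementation; the class name, the learning rate `alpha`, and the string move labels are all hypothetical, and the `payoff` argument hints at enhancement (i), scaling the update by a player's payoff rather than adapting only the winner's policy.

```python
import math
import random

class PPAPolicy:
    """Hypothetical sketch of Playout Policy Adaptation (PPA).

    Keeps one weight per move label; playout moves are sampled with a
    Gibbs (softmax) distribution over the legal moves.
    """

    def __init__(self, alpha=1.0):
        self.alpha = alpha        # illustrative learning rate
        self.weights = {}         # move label -> adaptive weight

    def _probs(self, legal_moves):
        # Softmax over the current weights of the legal moves.
        exps = [math.exp(self.weights.get(m, 0.0)) for m in legal_moves]
        total = sum(exps)
        return [e / total for e in exps]

    def select(self, legal_moves, rng=random):
        # Sample a playout move proportionally to exp(weight).
        return rng.choices(legal_moves, weights=self._probs(legal_moves))[0]

    def adapt(self, playout, payoff=1.0):
        """Reinforce the moves played during a playout.

        `playout` is a list of (played_move, legal_moves) pairs for one
        player.  Classic PPA adapts only the winner's policy with
        payoff = 1; scaling by each player's payoff corresponds to
        enhancement (i) described in the abstract.
        """
        for played, legal in playout:
            probs = self._probs(legal)
            # Increase the played move, decrease all legal moves in
            # proportion to their current selection probability.
            self.weights[played] = (self.weights.get(played, 0.0)
                                    + self.alpha * payoff)
            for m, p in zip(legal, probs):
                self.weights[m] = (self.weights.get(m, 0.0)
                                   - self.alpha * payoff * p)
```

After adapting on a playout in which move `"a"` was chosen over `"b"`, subsequent playouts sample `"a"` more often; repeating the update concentrates the policy on moves correlated with good payoffs, which is the adaptive effect the paper evaluates.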
Title of host publication: Monte Carlo Search
Subtitle of host publication: MCS 2020
Publication status: Published - 16 Oct 2021
Series: Communications in Computer and Information Science