### Abstract

In a zero-sum limiting average stochastic game, we evaluate a strategy π for the maximizing player, player 1, by the reward φ_{s}(π) that π guarantees to him when starting in state s. A strategy π is called non-improving if φ_{s}(π) ⩾ φ_{s}(π[h]) for every state s and every finite history h, where π[h] is the strategy π conditional on the history h; otherwise the strategy is called improving. We investigate the use of improving and non-improving strategies, and explore the relation between (non-)improvingness and (ε-)optimality. Improving strategies appear to play a very important role in obtaining ε-optimality, while 0-optimal strategies are always non-improving. Several examples are given to clarify these issues.

Original language | English |
---|---|
Title of host publication | Proceedings of the 37th IEEE Conference on Decision and Control (Cat. No.98CH36171) |
Pages | 2674-2679 |
Number of pages | 6 |
DOIs | https://doi.org/10.1109/CDC.1998.757857 |
Publication status | Published - 1998 |
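The non-improving condition in the abstract compares, state by state, the reward guaranteed by π against the reward guaranteed by each conditional strategy π[h]. A minimal sketch of that check, using a hypothetical table of guaranteed rewards (the function name, the states, the histories, and all numeric values below are illustrative assumptions, not from the paper):

```python
# Toy illustration of the non-improving condition:
# phi[s][h] stands for phi_s(pi[h]), the reward that the conditional
# strategy pi[h] guarantees when starting in state s.
# The empty history () corresponds to pi itself, i.e. phi_s(pi).

def is_non_improving(phi):
    """phi: dict mapping state -> dict mapping history (tuple) -> guaranteed reward.
    Returns True iff phi_s(pi) >= phi_s(pi[h]) for every state s and history h."""
    return all(
        rewards[()] >= r          # compare pi's guarantee with pi[h]'s guarantee
        for rewards in phi.values()
        for r in rewards.values()
    )

# Hypothetical guaranteed rewards for a two-state game
phi = {
    "s1": {(): 1.0, ("a",): 0.5, ("a", "b"): 1.0},
    "s2": {(): 0.0, ("a",): 0.0},
}
print(is_non_improving(phi))  # True: no conditional strategy guarantees more
```

If some π[h] guaranteed strictly more than π in some state, the check would return False, and the strategy would be improving in the paper's terminology.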

### Publication series

Series | Proceedings of the 37th IEEE Conference on Decision and Control (Cat. No.98CH36171) |
---|---|
Volume | 3 |

## Cite this

Flesch, J., Thuijsman, F., & Vrieze, O. J. J. (1998). Improving strategies in stochastic games. In

*Proceedings of the 37th IEEE Conference on Decision and Control (Cat. No.98CH36171)* (pp. 2674-2679). (Proceedings of the 37th IEEE Conference on Decision and Control (Cat. No.98CH36171); Vol. 3). https://doi.org/10.1109/CDC.1998.757857