A Continuous Information Gain Measure to Find the Most Discriminatory Problems for AI Benchmarking

Matthew Stephenson*, Damien Anderson, Ahmed Khalifa, John Levine, Jochen Renz, Julian Togelius, Christoph Salge

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

1 Citation (Web of Science)

Abstract

This paper introduces an information-theoretic method for selecting the subset of problems that gives the most information about a group of problem-solving algorithms. The method was tested on the games in the General Video Game AI (GVGAI) framework, allowing us to identify a smaller set of games that still provides a large amount of information about the abilities of different game-playing agents. This approach makes agent testing more efficient: testing on only a handful of games achieves almost as good discriminatory accuracy as testing on more than a hundred games, which is often computationally infeasible. Furthermore, the method can be extended to study the dimensions of the effective variance in game design between these games, allowing us to identify which games differentiate between agents in the most complementary ways.
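The selection idea can be illustrated with a simplified discrete sketch: treat each game's win/loss outcomes as a test that partitions the agents, and greedily add the game whose inclusion most increases the entropy of that partition (i.e., the game that best discriminates between agents). Note this is only an assumed approximation of the approach described in the abstract: the paper's measure is continuous, and the results matrix, agent names, and greedy heuristic below are illustrative assumptions, not the authors' implementation.

```python
import math

# Hypothetical data: rows = agents, columns = games, entries = win (1) / loss (0).
results = {
    "agentA": [1, 1, 0, 1],
    "agentB": [1, 0, 0, 1],
    "agentC": [0, 1, 1, 1],
    "agentD": [0, 0, 1, 1],
}

def partition_entropy(selected, results):
    """Entropy of the partition that the selected games induce over the agents.

    Agents with identical outcome vectors on the selected games fall into the
    same cell; higher entropy means the selected games discriminate agents
    more finely (max entropy = every agent in its own cell)."""
    cells = {}
    for outcomes in results.values():
        key = tuple(outcomes[g] for g in selected)
        cells[key] = cells.get(key, 0) + 1
    n = len(results)
    return -sum((c / n) * math.log2(c / n) for c in cells.values())

def greedy_select(results, k):
    """Greedily pick k games, each time adding the game whose inclusion
    raises the partition entropy the most (a simple information-gain step)."""
    n_games = len(next(iter(results.values())))
    selected = []
    for _ in range(k):
        remaining = [g for g in range(n_games) if g not in selected]
        best = max(remaining,
                   key=lambda g: partition_entropy(selected + [g], results))
        selected.append(best)
    return selected

print(greedy_select(results, 2))  # → [0, 1]
```

Here game 3, which every agent wins, carries no information and is never chosen, while games 0 and 1 together separate all four agents, mirroring the abstract's point that a handful of well-chosen games can discriminate almost as well as the full set.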

Original language: English
Title of host publication: 2020 IEEE Congress on Evolutionary Computation (CEC)
Publisher: IEEE
Pages: 1-8
Number of pages: 8
ISBN (Print): 9781728169293
DOIs
Publication status: Published - Jul 2020
Event: 2020 IEEE Congress on Evolutionary Computation - Online, Glasgow, United Kingdom
Duration: 19 Jul 2020 - 24 Jul 2020
https://wcci2020.org/

Publication series

Series: IEEE Congress on Evolutionary Computation

Conference

Conference: 2020 IEEE Congress on Evolutionary Computation
Abbreviated title: IEEE CEC 2020
Country/Territory: United Kingdom
City: Glasgow
Period: 19/07/20 - 24/07/20

Keywords

  • General Video Game AI
  • Information Gain
