A crucial task in the calibration and validation of geosimulation models is measuring the agreement between model and reality. In recent years, many map comparison methods have been developed for this purpose. This paper presents a framework to systematically assess different aspects of model performance and to express the results relative to a common reference level. Application to a constrained cellular automata model of the Netherlands demonstrates that the framework gives an in-depth account of model performance. It also shows that any performance assessment that does not follow a multi-criteria approach, or that lacks a reference level, yields an unbalanced account and ultimately false conclusions.
Series: Lecture Notes in Computer Science