The aim of this work is to analyse the fitness landscape described by our testbed game for high-level Real-Time Strategy (RTS) games. We wish to identify whether a dominant strategy exists or whether there is an evolutionary cycle of best strategies. We briefly discuss the need for better AI in RTS games and how, we believe, techniques used in board-game research can be applied to this domain. We then outline a multiagent system (MAS) based player for this game and our use of genetic programming to allow our player to learn strategies for the game. We perform two separate runs of co-evolution for this player and then analyse the evolutionary history of both runs to help understand the fitness landscape of our coordination problem and identify important aspects of these successful and robust strategies.