In this paper we describe an evolutionary approach that uses models of human aesthetic experience to evolve expressions capable of generating real-time aesthetic analogies between two different artistic domains. We outline a conceptual structure for defining aesthetic analogies and for guiding the collection of the empirical data used to build the aesthetic models. We also present a Grammatical Evolution-based system that combines these aesthetic models with a heuristic-based fitness calculation to evaluate evolved expressions. We demonstrate a working implementation of this system that uses the evolved expressions to generate real-time aesthetic analogies between input music and output visuals. With this system we can generate novel artistic visual displays, similar to a light show at a music concert, that react to a musician's performance in real time.