There is a need to standardize the performance evaluation of eye gaze estimation (EGE) methods across the various platforms used for human-computer interaction (HCI). Owing to the lack of consistent schemes or protocols for the summative evaluation of EGE systems, performance results in this field can neither be compared nor reproduced with any consistency. In the contemporary literature, gaze tracking accuracy is measured under non-identical sets of conditions and with variable metrics, and most results do not report the impact of system meta-parameters that significantly affect tracking performance. In this work, the diverse nature of these research outcomes and the system parameters that affect gaze tracking on different platforms are investigated, and their error contributions are estimated quantitatively. The concept and development of a performance evaluation framework is then proposed, one that can define design criteria and benchmark quality measures for the eye gaze research community.