This was the final project for CS 231A, a class on computer vision techniques.
Gaze-tracking is a useful technology that enables new modes of human-computer interaction. However, the limited image quality of current web cameras and the need for real-time image processing make specialized hardware the norm in commercial offerings. Here, we use a machine-learning approach with visual features primarily based on Hough transforms to estimate the user's gaze position using ordinary webcam hardware. Our experiment, which achieved a median error of 3.22° in determining the user's gaze position, validates this approach.
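To give a flavor of the Hough-transform features mentioned above: a circular Hough transform lets each edge pixel vote for every candidate circle center a fixed radius away, and the accumulator peak locates a circular structure such as the pupil or iris. The sketch below is purely illustrative (a minimal NumPy implementation on a synthetic edge image, not the project's actual feature pipeline; the function name and fixed-radius simplification are my own):

```python
import numpy as np

def hough_circle_centers(edges, radius):
    """Accumulate Hough votes for circle centers at one fixed radius.

    edges: 2-D boolean array of edge pixels (e.g. from a Canny detector).
    Returns the (row, col) accumulator peak, i.e. the most likely center.
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Each edge pixel votes for all centers lying `radius` away from it.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic "pupil": edge pixels on a circle of radius 8 centered at (20, 25).
edges = np.zeros((48, 48), dtype=bool)
angles = np.linspace(0, 2 * np.pi, 200, endpoint=False)
edges[np.round(20 + 8 * np.sin(angles)).astype(int),
      np.round(25 + 8 * np.cos(angles)).astype(int)] = True

print(hough_circle_centers(edges, 8))
```

In practice one would search over a range of radii (or use OpenCV's `cv2.HoughCircles`, which also handles the gradient-based voting efficiently); the accumulator peak then provides pupil position features for the machine-learning stage.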
This is still a work in progress: there are several features I'd still like to add that would likely improve the gaze-tracker's accuracy.
If you are interested in learning more about this project, you can read the full report below and check out the demo along with its source code. You need the following package versions (or later) to run the demo: OpenCV 2.4.8, NumPy 1.6.1, Scikit-Learn 0.14.1.