Eye-tracking is a powerful accessibility tool, but there are very few ways to figure out which setup gives you the best results.

There are currently no good tools (that I could find) for testing the accuracy and consistency of eye-tracking mouse control across devices and programs, and I believe this data would go a long way toward improving the technology.

A fairly simple Python or JavaScript program should be able to meet this need:

The script starts by prompting the user to enter a seed value, which is used to generate a set of 25 random dots on the screen. The dots are generated up front but are not shown until the user clicks "Test". Each dot is then displayed until a click attempt is made; it then disappears and the next dot appears.
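The seeded-dot step could be sketched like this in Python. The screen size, margin, and function name are my assumptions, not part of the spec; the key property is that the same seed always reproduces the same 25 dots.

```python
import random

def generate_dots(seed, count=25, width=1920, height=1080, margin=40):
    """Generate a reproducible set of dot positions from a seed.

    A local random.Random instance is seeded so the same seed value
    always yields the same dot set, letting a test be repeated exactly.
    A margin keeps dots away from the screen edges.
    """
    rng = random.Random(seed)
    return [
        (rng.randint(margin, width - margin),
         rng.randint(margin, height - margin))
        for _ in range(count)
    ]
```

Using a dedicated `random.Random(seed)` instance (rather than the module-level generator) keeps the dot sequence independent of any other randomness in the program.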

The script will display a progress bar, the number of dots completed out of the total, and a timer for each test.

Once all 25 click attempts have been made, the script prompts the user to enter a name for the test and stores all the data in a table.

The script then asks whether the user wants to repeat the same seed, run a test with a new seed, or finish.

If the user chooses to repeat a test with the same seed value, the script will generate the same set of 25 random dots and prompt the user to click on them again. In the table, results are grouped by dot position, so each test's data for the same dot appears side by side.
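The side-by-side grouping above could be done by keying results on dot position. This is a minimal sketch; the record shape (test name, dot, click) is an assumption for illustration.

```python
from collections import defaultdict

def group_by_dot(records):
    """Group click records from repeated runs of the same seed by dot
    position, so each dot's attempts across tests sit next to each other.

    `records` is assumed to be an iterable of (test_name, dot, click)
    tuples, where dot and click are (x, y) positions.
    """
    grouped = defaultdict(list)
    for test_name, dot, click in records:
        grouped[dot].append((test_name, click))
    return grouped
```

Because the seed reproduces the exact same dot coordinates, the dot position itself is a stable key across repeated tests.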

All data collected will be recorded in a table, including:

The seed value used for each test

The position of each dot on the screen for each test

The position of each click attempt on the screen for each test

The distance between the center of each dot and the click location for each test

The time taken to click on each dot for each test

The total time taken to complete each test
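One row of that table could be modeled as a small record like the sketch below. The class and field names are placeholders of my choosing; the distance column is just the Euclidean distance between the dot's center and the click.

```python
import math
from dataclasses import dataclass

@dataclass
class ClickRecord:
    """One click attempt: the dot shown, the click made, and timing."""
    seed: str            # seed value used for this test
    dot: tuple           # (x, y) center of the dot, in pixels
    click: tuple         # (x, y) of the click attempt, in pixels
    seconds: float       # time from dot shown to click

    @property
    def error_px(self):
        """Distance between the dot's center and the click location."""
        dx = self.click[0] - self.dot[0]
        dy = self.click[1] - self.dot[1]
        return math.hypot(dx, dy)
```

The per-test totals (total time, mean error) then fall out of summing or averaging these records.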

