The code attached below is the beginning of the algorithm I mentioned in a previous note on gesture classification as a substitute for a mouse. Initial testing suggests there is enough information in your posture alone to figure out what point on the screen you’re touching, without a camera or other sensor monitoring your actual hands. In preliminary tests, the resolution appears to be about one inch: the classifier can tell where you’re touching along a given horizontal line to within roughly one inch.
I’m going to redo the dataset, partly because I used too few images and, frankly, partly because I look awful in the shots. The procedure is simple: with a webcam mounted at the top-center of your monitor (I’m working on an iMac), touch each of the four corners of the screen and take a few pictures of yourself in each position, which defines four classes of images. I also did a bonus class touching the middle left and middle right of the monitor, and the accuracy was still perfect. I wore headphones in some classes and not others, and it didn’t matter. The plain implication is that you don’t need to see or monitor the hands at all: given enough information from your total posture, you can tell where you’re pointing. I’m going to collect another full dataset using an on-screen ruler to be more precise. If that works, then I think that’s it; the algorithm already runs in real time, even in Octave, so I suspect that when written in a native Apple language it will only be faster.
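Since the attached code itself isn’t reproduced here, the sketch below is only a stand-in, not the actual algorithm: a minimal nearest-centroid classifier in Octave over the four corner classes described above, assuming one folder of same-resolution webcam frames per corner. The folder names, file pattern, downsampling stride, and test filename are all hypothetical.

```octave
% Hypothetical layout: one folder per class ("top_left", "top_right",
% "bottom_left", "bottom_right"), each holding a few webcam frames taken
% while touching that corner. Assumes every folder has at least one image
% and all frames share the same resolution.
classes = {"top_left", "top_right", "bottom_left", "bottom_right"};
centroids = [];

for c = 1:numel(classes)
  files = dir(fullfile(classes{c}, "*.jpg"));
  vecs = [];
  for k = 1:numel(files)
    img = double(imread(fullfile(classes{c}, files(k).name)));
    g = mean(img, 3);              % collapse RGB to grayscale
    g = g(1:8:end, 1:8:end);       % crude downsample via stride indexing
    v = g(:) / 255;                % flatten and scale to [0, 1]
    if (k == 1)
      vecs = zeros(numel(v), numel(files));
    endif
    vecs(:, k) = v;
  endfor
  if (c == 1)
    centroids = zeros(size(vecs, 1), numel(classes));
  endif
  centroids(:, c) = mean(vecs, 2); % per-class mean image
endfor

% Classify a new frame by nearest centroid (squared Euclidean distance).
img = double(imread("test_frame.jpg"));
g = mean(img, 3);
g = g(1:8:end, 1:8:end);
v = g(:) / 255;
[dmin, best] = min(sum((centroids - v) .^ 2, 1));
printf("Predicted touch region: %s\n", classes{best});
```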