While over the years my keyboard skills have gone way beyond simple hunt and peck, there have been a few occasions when I wished I’d listened to Mrs. Crabtree in ninth grade when she advised everyone to take touch typing. One of those occasions was when I saw a piece by Andrew Liszewski on “invisible keyboards” over on Gizmodo a while back.
Fujitsu has a prototype that “uses the tablet’s camera to track your finger movements on a desk, as if you were typing away on an invisible keyboard.” This removes the need to surrender so much screen area to the on-screen keyboard, or to carry a separate keyboard around with you, which more or less defeats the purpose of a tablet.
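The article doesn’t say how Fujitsu’s software actually works, but to give a feel for the idea, here’s a minimal sketch of single-camera fingertip tracking mapped onto a virtual key grid. It uses OpenCV, a crude skin-color threshold, and a made-up 4×10 keyboard layout – all of which are my own assumptions for illustration, not Fujitsu’s method.

```python
# Illustration only (not Fujitsu's algorithm): track a fingertip with a single
# camera and map its position on the desk to a cell of a virtual key grid.
import cv2
import numpy as np

ROWS, COLS = 4, 10  # hypothetical 4-row x 10-column virtual keyboard


def fingertip_from_frame(frame):
    """Find the topmost point of the largest skin-colored blob (a crude fingertip)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Rough skin-tone range; in practice this is the kind of thing per-user
    # training would tune.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    # The topmost contour point is a reasonable fingertip guess when the hand
    # is resting on a desk below the camera.
    topmost = hand[hand[:, :, 1].argmin()][0]
    return int(topmost[0]), int(topmost[1])


def key_at(point, frame_shape):
    """Map an (x, y) pixel position to a (row, col) cell of the virtual keyboard."""
    h, w = frame_shape[:2]
    col = min(int(point[0] * COLS / w), COLS - 1)
    row = min(int(point[1] * ROWS / h), ROWS - 1)
    return row, col


cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    tip = fingertip_from_frame(frame)
    if tip is not None:
        cv2.circle(frame, tip, 8, (0, 255, 0), -1)
        cv2.putText(frame, f"key {key_at(tip, frame.shape)}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("invisible keyboard sketch", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```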
Since everyone has differently shaped, sized, and colored hands, the software has to spend a bit of time learning how a user types before there’s any kind of useful accuracy. But when school’s over, it does appear to be a plausible alternative to a physical keyboard.
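The article doesn’t describe what that learning step looks like, but you can imagine something like the toy calibration below: prompt the user to tap a few known keys, remember where each finger landed, and then classify later touches by the nearest learned position. The class, key positions, and centroid approach are all hypothetical.

```python
# Toy per-user calibration (my own illustration): record where prompted keys are
# tapped, then classify new touch points by the nearest learned centroid.
import numpy as np


class KeyCalibrator:
    def __init__(self):
        self.samples = {}  # key label -> list of (x, y) touch points

    def record(self, key, point):
        self.samples.setdefault(key, []).append(point)

    def centroids(self):
        return {k: np.mean(pts, axis=0) for k, pts in self.samples.items()}

    def classify(self, point):
        cents = self.centroids()
        return min(cents, key=lambda k: np.linalg.norm(np.asarray(point) - cents[k]))


# A short "training" session: the user taps a couple of prompted keys a few times.
cal = KeyCalibrator()
for key, taps in {"a": [(52, 110), (55, 108)], "s": [(83, 112), (80, 109)]}.items():
    for tap in taps:
        cal.record(key, tap)

print(cal.classify((81, 111)))  # -> 's', the nearest learned key position
```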
I think this would take some getting used to, especially if, like me, you occasionally have to look down to see where the @ sign is. But I’ve got to say that this is one of the niftiest applications of machine vision I’ve seen, even if I never get to use it. This application appears to use a single camera; I think it might perform even better with a stereo approach, since two cameras can recover depth, which would allow for more complex gestures. Maybe you could even have a keyboard in the air!
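To see why stereo helps, here’s a hedged sketch of the basic idea: with two calibrated, rectified cameras you can turn disparity into depth, so a fingertip’s height above the desk (or in mid-air) becomes measurable. The focal length, baseline, and synthetic frames below are placeholders, not values from any real setup.

```python
# Hedged sketch of stereo depth from disparity (illustration only).
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (assumed)
BASELINE_M = 0.06   # distance between the two cameras in meters (assumed)


def depth_map(left_gray, right_gray):
    """Compute a dense depth map (meters) from a rectified grayscale stereo pair."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan        # mark invalid matches
    return FOCAL_PX * BASELINE_M / disparity  # depth = f * B / d


# Usage with two synthetic frames standing in for the left/right camera images:
left = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
right = np.roll(left, -8, axis=1)             # fake an 8-pixel disparity
z = depth_map(left, right)
print("median depth (m):", np.nanmedian(z))
```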
As for touch typing, who knew that Mrs. Crabtree was right all along?
Meanwhile, not to be outdone – okay, okay, I admit: with the invisible keyboard, we are outdone – Critical Link has done some playing around of its own with both machine vision and stereo vision. Here’s a demo of our stereo vision technology that we did at Photonics West this past February: