Google’s Glass is far from the first attempt to use a head-mounted contraption to communicate with the outside world, but it will certainly be the first to be used by the masses as a means of supplying information to the wearer via a small heads-up display. Glass promises to be a fascinating device that will certainly change the way the user can view the world around them, but the one thing Glass cannot do, at this point, is offer the wearer any type of physical interface. In other words, the wearer of Glass cannot click, drag, manipulate, or navigate its content with hand gestures – what you see is what you get.
Japan’s NTT DoCoMo displayed a prototype device at the Ceatec exhibition this past week that aims to correct that deficiency. The headset, or glasses, is bulky and designed for the engineer rather than the consumer, but it allowed Martyn Williams, a reporter from IDG News Service, to experience an almost holographic effect as he wore it during his demonstration.
This technology takes a flat surface, such as a notebook or a page of writing paper, and with a tap of your finger turns it into a display. The device has two clear lenses like a pair of glasses, but the user also wears a small motion-sensing ring on the finger they will use for navigation. An image is “projected” onto the surface, be it a book to read or a movie to watch, that appears to the headset wearer to be displayed on the page, but to someone standing by, it just looks like a blank sheet of paper. Instead of speaking commands, the user is able to tap a “button” on the blank sheet of paper to produce the desired reaction, such as stop, pause, or play.
Another way to use the glasses is called the space interface, which allows the user to manipulate an “object” that appears in front of them, such as a rubber ball they can reach out and pull or stretch out of shape. A second demonstration had the user bounce a small bear up into the air or bat it from side to side – to an innocent bystander, it would appear that the user was simply moving their arms and hands around in mid-air!
The last demonstration used augmented reality to provide the user with more information about what they were viewing. For instance, if a person approached you, the device would use facial recognition to identify them and then project their name and other information about them – great for parties when you cannot remember a person’s name, but even more important for a business meeting.
The glasses really sound impressive, although there are no immediate plans for commercial production. As we have seen from Google’s approach, it takes many months of testing and refining before a product of this nature is ready for retail sale. Who knows – by the time Glass goes into mass production, you may be able to purchase a finger sensor to attach to your Glass to perform these very functions.