We’ve been hearing about Google Glass on and off for several months now: a head-mounted display in the form of glasses that projects images and data into the wearer’s field of view. We’ve seen skydivers and models showing off Glass, and in a few weeks, developers will be able to score one of their very own. There aren’t many apps for the hardware yet, but Google hopes to change that by offering developers the same cloud-based API it uses to build apps like its calendar. Google Glass offers a completely new, mostly unexplored interface, unlike interacting with a phone, a computer, or anything else people have grown used to, so developers are likely to run into problems writing apps for the new hardware. The $1,500 price also has many developers wary, but hopefully the final consumer version, expected to ship in early 2014, will cost less.
The smart glasses will let the wearer record conversations and store them in the cloud for later recall, automatically take pictures, and connect to the Internet through Wi-Fi or by tethering to a smartphone over Bluetooth. The glasses come equipped with all the same electronics a typical smartphone would: memory chips, a microprocessor, a speaker, microphones, a video camera, an accelerometer, a gyroscope, and a compass. In an interview with IEEE Spectrum, Babak Parviz, the Project Glass leader, claims Google Glass accesses information “so fast that you feel you know it.”
We wanted to have a device that would do two things that we think would be useful for a lot of people. One is to have a device that would allow for pictorial communications, to allow people to connect to others with images and video. Right now, we don’t have any devices that are specifically engineered to connect to others using images or video. So we wanted to have a device that would see the world through your eyes and allow you to share that view with other people. The second big goal was to have a technology that would allow people to access information very, very quickly. So when you have a question, you can very rapidly get to the answer.
It isn’t exactly augmented reality, but that isn’t the immediate goal of Project Glass. Current prototypes let the user access data on the device with a touchpad and voice commands, and Parviz also mentioned experimenting with head-gesture control. He seemed particularly careful during the interview not to overpromise or give too much away.
Steve Mann, an MIT graduate and longtime builder of his own smart glasses, expressed concern over flaws in Google Glass. The design is similar to an early generation of smart glasses that he designed, which had the side effects of disorientation, confusion, and some “unpleasant flashbacks” when taken off. He argues that Google should be using a design like his “generation four” glasses, which use a laser to make the eye itself act as both camera and display.
Even science fiction writer Bruce Sterling (best known for his Mirrorshades anthology) isn’t too impressed with the smart glasses: “I’m not hugely interested in Google Glass because, although I’m very keen on augmented reality, that’s not what Google Glasses are. Google Glasses are more like a head-mounted Android unit, and there’s not much in the way of live interaction with 3-D virtual images.”
Google may have some trouble if it can’t even impress a sci-fi writer, but the project is still in its early stages. In the early days of the Android/Google takeover, most people weren’t too interested either, until they realized how smartphones enhanced everyday life. This isn’t likely to be any different.
Google Glass seems really exciting, and I’m certainly interested, though as someone who has trouble wearing glasses or even hats that intrude on my field of vision, I’m not sure I’d be willing to wear them. Would you be interested?