In the past, a few handsets have had gimmicky "3D" cameras that use two lenses to provide the illusion of 3D; the LG Thrill and HTC Evo 3D are among the first that come to mind. Some of you may be aware, however, that the technology to accurately scan a three-dimensional object, or at least the surface of it facing the camera, has existed for quite some time in the form of the Microsoft Kinect. Google's Project Tango applies this tech, and a few rumors have floated around regarding other devices that may also have it on board. The Kinect was not without its issues, but after flopping in gaming, it became a staple of the science and tech worlds. A few brilliant minds at MIT have now gone ahead and fixed one of the Kinect's glaring flaws with a fairly revolutionary hack.
To give some background, a phenomenon called polarization is responsible for some of the tricks light can play, including making Google's self-driving cars a lot less reliable in rainy or snowy conditions. Polarization describes the orientation in which a light wave oscillates, and light tends to become polarized as it reflects and scatters off surfaces. This can confuse 3D sensors, because light can bounce around in ways that make it nearly impossible to determine its source and orientation. If a polarized lens or sensor can work out where polarized light originated, it can cut out that noise and get a much clearer picture, which in 3D imaging translates into a higher-resolution scan. Conversely, depth-sensing 3D imaging tools can help untangle polarization, making the pairing MIT has come up with a mutually beneficial one.
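For the curious, the measurement behind all this is easy to sketch. A camera looking through a rotating linear polarizer sees the intensity of a scene point vary sinusoidally with the polarizer's angle, and a handful of readings at different angles is enough to recover how strongly polarized the light is and in what orientation. The snippet below is a simplified illustration of that standard fit, not MIT's actual code; the function name and synthetic numbers are ours.

```python
import numpy as np

# Light seen through a linear polarizer at angle phi follows
#   I(phi) = I_un * (1 + rho * cos(2 * (phi - psi)))
# where rho is the degree of linear polarization and psi its angle.
# Three or more readings at different angles pin down all three unknowns.

def recover_polarization(angles, intensities):
    """Fit (I_un, rho, psi) from polarizer angles (radians) and readings."""
    angles = np.asarray(angles, dtype=float)
    I = np.asarray(intensities, dtype=float)
    # Rewrite the model linearly as I = a + b*cos(2*phi) + c*sin(2*phi)
    A = np.stack([np.ones_like(angles),
                  np.cos(2 * angles),
                  np.sin(2 * angles)], axis=1)
    a, b, c = np.linalg.lstsq(A, I, rcond=None)[0]
    return a, np.hypot(b, c) / a, 0.5 * np.arctan2(c, b)

# Synthetic check: a beam with a known polarization state
true_I, true_rho, true_psi = 2.0, 0.4, np.deg2rad(30)
phis = np.deg2rad([0, 45, 90, 135])
meas = true_I * (1 + true_rho * np.cos(2 * (phis - true_psi)))
I_un, rho, psi = recover_polarization(phis, meas)
# Recovers I_un = 2.0, rho = 0.4, psi = 30 degrees
```

On clean synthetic data the fit is exact; on real sensor data the same least-squares step simply averages out read noise.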
The new system, called Polarized 3D, uses polarization to help a 3D image sensor capture objects more accurately, down to about a thousandth of a centimeter. It works like this: the 3D sensor pulls depth data on an object, which helps the polarization sensor or lens figure out the source and orientation of a polarized beam of light. In turn, this cuts out the noise from polarized light that did not originate from the object the 3D image sensor is pointed at. In experiments using a setup that involved a simple polarizing lens on a Microsoft Kinect, MIT researchers were able to produce a more accurate 3D model than an industrial laser scanner.
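One concrete way the coarse depth helps: the polarization angle constrains the orientation of the surface at each point only up to a 180-degree flip, and even a rough depth-derived surface normal is enough to pick the right branch. The helper below is a hypothetical sketch of that disambiguation step under those assumptions, not MIT's actual implementation.

```python
import numpy as np

def disambiguate_azimuth(pol_azimuth, coarse_azimuth):
    """Pick pol_azimuth or pol_azimuth + pi (radians), whichever lies
    closer on the circle to the azimuth implied by the coarse depth
    normal, and return it wrapped into [0, 2*pi)."""
    candidates = np.array([pol_azimuth, pol_azimuth + np.pi])
    # Circular distance to the coarse estimate
    dist = np.abs(np.angle(np.exp(1j * (candidates - coarse_azimuth))))
    return candidates[np.argmin(dist)] % (2 * np.pi)

# Polarization says "0.1 rad or 0.1 + pi"; the coarse normal (3.0 rad)
# selects the flipped branch.
picked = disambiguate_azimuth(0.1, 3.0)
```

The coarse depth only has to be accurate to well under 90 degrees of normal orientation for this vote to come out right, which is why even the Kinect's noisy depth map suffices.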
To achieve this, multiple passes of an object had to be made, then compared and processed in concert with special polarization algorithms. These algorithms, drawing on the depth sensor's data, are able to figure out which light is part of the picture or object and which light is noise. From there, the data sets from the multiple passes are combined to create a super-accurate representation of a three-dimensional object.
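To see why combining passes helps at all, here is a toy illustration with entirely synthetic numbers (this is the general principle of averaging registered captures, not the paper's actual fusion algorithm): independent noise shrinks roughly with the square root of the number of passes.

```python
import numpy as np

rng = np.random.default_rng(42)

# A flat 1.5 m patch, scanned eight times with 1 cm of independent
# per-pixel depth noise, all passes registered to a common frame.
true_depth = np.full((64, 64), 1.5)
passes = [true_depth + rng.normal(0.0, 0.01, true_depth.shape)
          for _ in range(8)]

# Plain per-pixel averaging; a real pipeline would also weight
# each pass by a confidence estimate.
fused = np.mean(passes, axis=0)

single_err = np.abs(passes[0] - true_depth).mean()
fused_err = np.abs(fused - true_depth).mean()
# fused_err comes out well below single_err
```

With eight passes the residual error drops to roughly a third of a single capture's, which is the headroom the polarization cues then refine further.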
This can be done in real time using a decent graphics processor, the separate chip used for 3D applications and gaming. One as powerful as those in current gaming consoles could most likely get the job done seamlessly, allowing for almost instantaneous analysis and assembly of a capture into a super-detailed 3D scan. Mind you, smartphone GPUs have almost reached that level already, roughly on par with a Sony PS3 or an Xbox 360; the Tegra X1 processor and its formidable GPU in the NVIDIA SHIELD TV, for example, are roughly equivalent. At the current pace of technological advancement, we could see this happening before the close of the decade. In the meantime, current tech could likely grab the captures, decipher them, compare them and generate the scan in just a few seconds.
The way this ties into future smartphones is that, rather than a single polarizing lens requiring large mechanical movements, as MIT used, it's possible to rig up thousands of tiny polarizing lenses over a traditional 3D image sensor. They would sit as an overlay covering most, if not all, available pixels, allowing them to be placed on the smaller, lower-quality sensors that fit in a modern consumer device such as a smartphone or tablet. In short, the breakthrough MIT has made here could allow OEMs to outfit smartphones with 3D cameras that can scan a subject down to a thousandth of a centimeter from a few meters away. Grids of tiny polarized lenses that overlay and subdivide individual pixels in a sensor are already commercially available, so at this point, all OEMs would have to do is figure out how to pair the tech with their own 3D cameras and implement the proper algorithms on whatever GPU their hardware is using. Mind you, it would reduce the camera's resolution a bit, since several physical pixels go into each output pixel, but no more than current tech such as color sensors already does. The end result, in theory, would still be an incredibly detailed 3D scan of a real-world object.
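Commercial micro-polarizer grids of this kind typically repeat a 2x2 pattern of filters at 0, 45, 90 and 135 degrees across the pixel array, much like a Bayer color filter. The sketch below shows how such a mosaic could be split into its four angle channels and turned into a per-super-pixel polarization image; the exact 2x2 layout and the function are illustrative assumptions on our part, not any specific sensor's spec.

```python
import numpy as np

def mosaic_to_polarization(raw):
    """raw: (2H, 2W) mosaic with an assumed [[0, 45], [135, 90]] degree
    micro-polarizer pattern. Returns total intensity, degree of linear
    polarization, and angle of linear polarization per 2x2 super-pixel."""
    i0   = raw[0::2, 0::2]
    i45  = raw[0::2, 1::2]
    i135 = raw[1::2, 0::2]
    i90  = raw[1::2, 1::2]
    # Standard Stokes-parameter estimates from four polarizer angles
    s0 = (i0 + i45 + i90 + i135) / 2.0   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / s0   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization
    return s0, dolp, aolp

# Synthetic super-pixel: light with known polarization (rho=0.5, psi=20 deg)
rho, psi = 0.5, np.deg2rad(20)
inten = lambda phi: 1.0 + rho * np.cos(2 * (phi - psi))
raw = np.array([[inten(0),             inten(np.pi / 4)],
                [inten(3 * np.pi / 4), inten(np.pi / 2)]])
s0, dolp, aolp = mosaic_to_polarization(raw)
# Recovers dolp = 0.5 and aolp = 20 degrees for the super-pixel
```

This is exactly the "several pixels for the price of one" trade-off mentioned above: four physical photosites collapse into one polarization-aware output pixel.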
The applications for this are mostly industrial at the moment: construction firms, for instance, could accurately scan a site, structure or part for defects, as well as send an accurate three-dimensional representation back to base for analysis if needed. That's only one example, of course. This new technology is also poised to help Google's self-driving cars get over their fear of precipitation, since rain and snow tend to scatter polarized light in ways that normal 3D image sensors have a hard time deciphering. In the consumer space, given some time for developers to implement it, this could be quite useful for augmented reality applications. More uses will doubtless surface with time as smart devices take on different form factors and become more and more ubiquitous.