Google, MIT Create Algorithms To Improve Mobile Photography

Google and scientists from the Massachusetts Institute of Technology have developed new machine-learning algorithms intended to improve photos taken with a mobile phone. MIT researchers are set to present the work with Google at the Siggraph digital graphics conference this week, showcasing software that enhances an image in real time by applying a number of improvements, such as boosting contrast, balancing hue and saturation, and increasing sharpness. The software performs these and other enhancements before you actually hit the shutter button and capture the image. The machine-learning system is trained on thousands of unprocessed photos, each paired with a retouched version.

To train the machine-learning system, the researchers used a data set of 5,000 retouched images created by Adobe Systems and MIT researchers. The system processes a low-resolution copy of each photo to save energy, but a low-resolution output cannot directly supply the color values of every pixel in the resulting high-resolution image. To address this, the researchers employed two methods: having the system output a set of simple formulae rather than a finished image, and devising a way to apply those formulae to the individual pixels of the high-resolution image. The formulae alter the colors of the pixels in the original image, and during training the system is scored on how closely the formulae's output matches the retouched image. Performing the processing at low resolution also saves time and memory: the researchers found that their approach required about 100 megabytes of memory, whereas running a standard algorithm on a full-resolution image required nearly 12 gigabytes and would take correspondingly longer to process, according to MIT.
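The idea of predicting formulae at low resolution and applying them at full resolution can be illustrated with a short sketch. This is not Google's actual implementation; it is a minimal numpy example assuming the network outputs a coarse grid of per-region affine color transforms (one 3×4 matrix per grid cell, a shape chosen here for illustration), which are then upsampled and applied to every pixel of the full-resolution image:

```python
import numpy as np

def enhance_full_res(full_res, coeff_grid):
    """Apply per-pixel affine color transforms, predicted on a
    low-resolution grid, to a full-resolution image.

    full_res:   (H, W, 3) float image with values in [0, 1]
    coeff_grid: (h, w, 3, 4) affine coefficients predicted at low
                resolution (hypothetical output format)
    """
    H, W, _ = full_res.shape
    h, w = coeff_grid.shape[:2]
    # Nearest-neighbor upsample: map each full-res pixel to a grid cell
    ys = np.arange(H) * h // H
    xs = np.arange(W) * w // W
    coeffs = coeff_grid[ys[:, None], xs[None, :]]      # (H, W, 3, 4)
    # Append a constant 1 to each pixel color for the affine bias term
    ones = np.ones((H, W, 1))
    rgb1 = np.concatenate([full_res, ones], axis=-1)   # (H, W, 4)
    # Each output channel is an affine function of the input color
    out = np.einsum('hwcj,hwj->hwc', coeffs, rgb1)
    return np.clip(out, 0.0, 1.0)
```

Because only the small coefficient grid has to be produced by the expensive model, the per-pixel work at full resolution reduces to a few multiply-adds per pixel, which is consistent with the memory and speed savings the researchers describe.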

The new collaboration with MIT reaffirms Google's interest in advancing AI as a tool for improving photos. Last month, the Mountain View, California-based company introduced an AI program meant to find panoramas from Google Maps and post-process the resulting images for a more professional quality. It remains unclear to which product Google intends to apply the system, but Jon Barron, a Google researcher who worked on the project, said the machine-learning system could offer a way around the computational and power limitations of mobile devices, producing what he described as "real-time photographic experiences" without draining the battery or slowing down the viewfinder.

About the Author

Manny Reyes

Staff Writer
A big fan of Android since its launch in 2008. Since then, I've never laid my eyes on other platforms.