WESPE Project Has Neural Networks Improve Phone Photography


Neural networks open up a massive range of possibilities, and the WESPE project's use of them to enhance smartphone photos to near-professional quality is a strong demonstration of just how wide those uses can be. The project centers on an algorithm built from five separate convolutional neural networks, all trained on thousands of photos of varying quality from all sorts of smartphones, to enhance smartphone photos. The enhancements make colors pop, bring out otherwise hidden details, tweak lighting, and adjust other values on a per-area basis across the photo to achieve a powerful transformative effect.

The first neural network in the chain consists of twelve layers of its own and is, in principle, fully capable of processing and enhancing images. The issue is that it doesn't start out knowing how. To fix that, four companion networks take turns processing images in different ways, using publicly available datasets and images captured by a menagerie of smartphones, working together to figure out how the enhancement should be done. Thanks to its multiple layers, the first network is able to observe this process in all of its detail and replicate it. Once the first network processes an image with its newfound knowledge, the other networks take over and finish processing it together. In this way, with each image processed, the first network becomes gradually more independent, the eventual goal being to cut out the other four networks entirely to save resources.
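The train-with-helpers, deploy-alone pattern described above can be sketched in miniature: auxiliary networks score the main network's output during training, then get discarded once only the trained enhancer is needed. The following toy Python sketch is purely illustrative (the class names, the single-parameter "network," and the crude finite-difference update are all assumptions for demonstration, not the WESPE code):

```python
class Enhancer:
    """Stands in for the twelve-layer network that actually transforms images."""
    def __init__(self):
        self.gain = 1.0  # one toy parameter standing in for all the weights

    def __call__(self, image):
        return [min(255.0, px * self.gain) for px in image]


class Critic:
    """Stands in for one of the four auxiliary networks; scores an output."""
    def __init__(self, target_mean):
        self.target_mean = target_mean

    def score(self, image):
        mean = sum(image) / len(image)
        return -abs(mean - self.target_mean)  # higher is better


def train_step(enhancer, critics, image, lr=0.001):
    # Nudge the enhancer's parameter in whichever direction the critics
    # collectively prefer (a crude stand-in for gradient-based training).
    def total(gain):
        enhancer.gain = gain
        return sum(c.score(enhancer(image)) for c in critics)

    base = enhancer.gain
    up, down = total(base + lr), total(base - lr)
    enhancer.gain = base + lr if up >= down else base - lr


# Training phase: the enhancer and the critics operate together.
enh = Enhancer()
critics = [Critic(target_mean=128.0), Critic(target_mean=130.0)]
dark_photo = [60, 70, 80, 90]  # toy "underexposed" pixel values
for _ in range(1000):
    train_step(enh, critics, dark_photo)

# Deployment phase: the critics are cut out; only the enhancer runs.
enhanced = enh(dark_photo)
```

The point of the sketch is the lifecycle, not the math: the helper networks exist only to shape the main network's behavior, and once that behavior is learned, inference needs just the one network.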

The five networks were trained on thousands of images from a range of sources during their development. What makes this approach unique is that the networks are "weakly supervised." Rather than being shown matched pairs of images to process together, one from a smartphone and one from a DSLR, the networks are simply fed a range of images from different sources and left to figure out on their own what is different, what is ideal, and why. Thanks to the multiple neural networks, as well as the multiple layers in the first network, this approach not only works well but is reproducible with almost any camera. That means this solution, or something based on it, may well show up in camera apps on the Play Store in the near future, so long as you have a device capable of running neural networks natively.
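The difference between fully supervised and weakly supervised training data can be made concrete with a small Python sketch (the file names and pool sizes are made-up placeholders, not the actual datasets):

```python
import random

phone_photos = [f"phone_{i}.jpg" for i in range(4)]
dslr_photos = [f"dslr_{i}.jpg" for i in range(4)]


def paired_batch(i):
    # Fully supervised: each phone shot is aligned with a DSLR shot of the
    # exact same scene, which is expensive and awkward to collect.
    return phone_photos[i], dslr_photos[i]


def weakly_supervised_batch(rng):
    # Weakly supervised: the two pools are sampled independently. Nothing
    # links a given phone photo to any particular DSLR photo, so the
    # networks must infer what "DSLR quality" looks like from the pool as
    # a whole rather than from one-to-one comparisons.
    return rng.choice(phone_photos), rng.choice(dslr_photos)


rng = random.Random(0)
src, ref = weakly_supervised_batch(rng)
```

Dropping the pairing requirement is what makes the approach so portable: collecting an unmatched pile of photos from a new camera is far easier than reshooting every scene twice.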



Copyright ©2017 Android Headlines. All Rights Reserved.

Senior Staff Writer

Daniel has been writing for Android Headlines since 2015, and is one of the site's Senior Staff Writers. He's been living the Android life since 2010, and has been interested in technology of all sorts since childhood. His personal, educational and professional backgrounds in computer science, gaming, literature, and music leave him uniquely equipped to handle a wide range of news topics for the site. These include the likes of machine learning, voice assistants, and AI technology development news in the Android world. Contact him at [email protected]
