Google Teaches An AI To Beautify Street View Panoramas


Google has put together a deep learning-based AI program designed to hunt down beautiful scenery in Google Maps panoramas, crop them just right, and then apply post-processing effects to bring them as close to professional quality as possible. The AI looks for a number of qualities in a given landscape before virtually heading there to scan for good views. Once it finds a promising panorama, it cleans that panorama up to look as tidy as possible, and once the whole picture is stitched together, it applies a wide range of post-processing effects to beautify it. Since beauty is deeply human and subjective, two things that AI programs don't typically handle well, what happens under the hood is far more complicated than that summary suggests.

For starters, the system was fed a set of professional photographs as training data, without much else. There were no labels, no additional code defining optimal aesthetic attributes, no annotations on the photos, nor any other sort of help. From there, the AI was taken through multiple panoramas in Google Maps and Street View, around 40,000 in total, and taught to pick through them for visually striking features. As it picked out photos, it compared them to the professional data set in order to figure out what common elements those photos shared, and how to enhance the raw panoramas to look more like professional photographs.
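The idea of learning from unlabeled professional photos can be illustrated with a toy sketch: summarize the professional set by a simple image statistic (here, contrast) and rank raw panorama crops by how close they come to it. The statistic, the data, and all names below are illustrative assumptions, not Google's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

def contrast(image):
    """A stand-in aesthetic feature: pixel standard deviation."""
    return float(image.std())

# Unlabeled "professional" photos: high-variance stand-in images.
pros = [rng.uniform(0.0, 1.0, size=(8, 8)) for _ in range(20)]

# Summarize the professional set by its typical contrast -- no labels needed.
target = np.mean([contrast(p) for p in pros])

def aesthetic_score(image):
    """Score a candidate crop: closer to the professional statistic is better."""
    return -abs(contrast(image) - target)

# Two candidate panorama crops: one nearly flat, one with varied tones.
flat = np.full((8, 8), 0.5) + rng.normal(0, 0.01, size=(8, 8))
varied = rng.uniform(0.0, 1.0, size=(8, 8))

best = max([flat, varied], key=aesthetic_score)
```

A real system would use learned features rather than a single hand-picked statistic, but the shape of the approach, compare candidates against properties of a professional corpus, is the same.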

The process that led to this artistic breakthrough included a number of smaller ones, and even saw the AI and its trainers collaborating on some aspects of the project; while learning about dramatic lighting, the AI's attempts to mimic what it saw in professional photos resulted in a new post-processing effect that Google dubbed "dramatic mask". The results of the experiment, according to Google, even managed to impress actual professional photographers. The training centered on a generative adversarial network: the AI was given negative samples of chosen landscapes and asked to figure out how to restore them to normal using a fixed set of photography tools, comparing its result against the original.
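The negative-sample training described above can be sketched in miniature: degrade a good image with a known edit from a fixed toolkit, then search the toolkit's parameter space for the edit that best restores the original. This is a minimal illustration under assumed names and a one-parameter toolkit (a brightness shift), not Google's implementation, and it omits the learned discriminator a real GAN would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_shift(image, shift):
    """The fixed 'photography toolkit' here is a single brightness shift."""
    return np.clip(image + shift, 0.0, 1.0)

def score(original, candidate):
    """Lower is better: mean absolute error against the professional original."""
    return float(np.mean(np.abs(original - candidate)))

# A stand-in "professional" photo with mid-range pixel values.
pro = rng.uniform(0.2, 0.8, size=(4, 4))

# Create the negative sample by degrading it with a known brightness shift.
true_shift = 0.1
negative = apply_shift(pro, true_shift)

# Restoration: search the toolkit's parameters for the edit that best
# brings the negative sample back to the original.
candidates = np.linspace(-0.3, 0.3, 61)
best = min(candidates, key=lambda s: score(pro, apply_shift(negative, s)))
```

Here the best restoring shift recovers (the negative of) the degradation that was applied; in the full system, the same restore-and-compare loop is what lets the model discover effects like "dramatic mask" on its own.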



Copyright ©2017 Android Headlines. All Rights Reserved.

Senior Staff Writer

Daniel has been writing for Android Headlines since 2015, and is one of the site's Senior Staff Writers. He's been living the Android life since 2010, and has been interested in technology of all sorts since childhood. His personal, educational, and professional backgrounds in computer science, gaming, literature, and music leave him uniquely equipped to handle a wide range of news topics for the site. These include the likes of machine learning, voice assistants, and AI technology development news in the Android world. Contact him at [email protected]
