Google Debuts Neural Model For Teaching AI To Draw

Google’s work with machine learning and neural networks is dipping into the creative again, this time with respect to the simple art of doodling, using a new model called "sketch-rnn." Drawing ambiguous, almost featureless stick figures and circular animal faces from memory comes naturally to most humans, but AIs need a little help, which Google is now looking to offer. The company has been working on a variety of neural network training models recently, with the goal of teaching a machine not just to dryly reproduce a drawing, but to understand it, learn from it, and even synthesize an entirely new drawing based on the original. So far, the experiment seems to be going quite well.

Simply put, Google isn’t just teaching machines to draw; it is teaching them to draw like humans. The approach has varied, but it normally boils down to feeding the model an initial sketch along with some extra information, then requiring it to produce something unique rather than a strict reproduction. Essentially, the researchers are creating the conditions for neural networks to use machine learning to engage in the same kind of recall and synthesis that is central to the human brain. In one example, a model was fed sketches of a pig; after it managed to replicate the pig, it was fed sketches of a truck and successfully combined the two, a clear instance of cognitive synthesis. Another model was taught to draw a cat and was then fed meaningless noise; despite that, it cut through the noise and produced a new cat drawing. While these achievements are impressive on their own, Google is having the AI channel that synthesis into creative pursuits, and is laying the groundwork for future AIs that could one day write symphonies or create great works of art, giving humans insight into the essence of the arts as well as the difference between organic and artificial cognition.
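For readers curious what "drawing like a human" looks like in data terms: sketch-rnn represents a drawing not as pixels but as a sequence of pen strokes, and the model generates a new drawing by sampling one stroke at a time rather than copying the input. The toy below is only an illustrative sketch of that autoregressive sampling loop; the weights are random and untrained, and all names and the simplified output layer are hypothetical, not Google's actual architecture.

```python
import numpy as np

# Toy illustration: a drawing is a sequence of (dx, dy, pen_down) strokes.
# A decoder RNN emits parameters of a distribution over the next stroke;
# sampling step by step yields a *new* drawing rather than a copy. With
# random weights the output is noise -- the point is the format and loop.

rng = np.random.default_rng(0)
HIDDEN, STROKE = 32, 3  # 3 = (dx, dy, pen_down)

# Hypothetical, randomly initialised decoder weights (untrained).
Wx = rng.normal(0, 0.1, (STROKE, HIDDEN))
Wh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
Wo = rng.normal(0, 0.1, (HIDDEN, 5))  # mu_x, mu_y, log_sx, log_sy, pen_logit

def sample_sketch(n_steps=20):
    """Autoregressively sample a stroke sequence from the toy decoder."""
    h = np.zeros(HIDDEN)
    stroke = np.zeros(STROKE)  # start token: no movement, pen up
    sketch = []
    for _ in range(n_steps):
        h = np.tanh(stroke @ Wx + h @ Wh)              # recurrent update
        mu_x, mu_y, log_sx, log_sy, pen_logit = h @ Wo
        dx = rng.normal(mu_x, np.exp(log_sx))          # sample pen offsets
        dy = rng.normal(mu_y, np.exp(log_sy))
        pen = float(rng.random() < 1 / (1 + np.exp(-pen_logit)))
        stroke = np.array([dx, dy, pen])               # feed back in
        sketch.append(stroke)
    return np.array(sketch)

drawing = sample_sketch()
print(drawing.shape)  # (20, 3): twenty (dx, dy, pen) strokes
```

Because each stroke is sampled from a distribution rather than looked up, two runs of the loop produce two different drawings, which is the property the article describes as synthesis rather than reproduction.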

Google’s neural models and machine learning algorithms can already perform an incredible number of functions, and the company is even working on a project that aims to mimic the human brain as closely as possible. A humanlike, self-aware AI is no longer strictly the stuff of science fiction; Google has acknowledged as much in the past, saying it is taking steps to ensure that adequate protections are in place to prevent adverse consequences, such as AI making decisions detrimental to humans.

About the Author

Daniel Fuller

Senior Staff Writer
Daniel has been writing for Android Headlines since 2015, and is one of the site's Senior Staff Writers. He's been living the Android life since 2010, and has been interested in technology of all sorts since childhood. His personal, educational and professional backgrounds in computer science, gaming, literature, and music leave him uniquely equipped to handle a wide range of news topics for the site. These include the likes of machine learning, voice assistants, AI technology development, and hot gaming news in the Android world. Contact him at [email protected]