Google's DeepMind Makes The First Sequentially Learning AI

Google's DeepMind unit initially rose to fame in the tech world by creating the AlphaGo AI and beating Go world champion Lee Sedol with it, but its newest feat, creating the world's first AI that truly learns and applies knowledge over time, is almost certain to eclipse that in the history books. A long-standing goal of AI research has been to replicate the way the human mind works, as demonstrated by neural networks and machine learning. The problem with those approaches is that they can usually learn only a single task, or a small subset of related tasks, well. The newest creation from DeepMind can learn sequentially and build on what it already knows, much like a human; if the AI learns how to play Pac-Man, for example, it can apply its knowledge of video game basics to learn how to play The Legend of Zelda, and eventually more complex fare like Gran Turismo and Front Mission.

While the concept at work here is groundbreaking for AI, the program is still more limited in its recall and synthesis than the average human. Rather than drawing on a general pool of knowledge built up from a number of basic lessons, it has to look back to a related piece of prior knowledge in order to have a leg to stand on with a new experience. To test the new AI, the DeepMind team set it loose on a number of Atari games back to back, armed with a special algorithm known as Elastic Weight Consolidation. Using that algorithm, the AI figured out how to play each game by watching the score increase, then carried those skills forward to the next game.
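The article doesn't spell out the math, but the published Elastic Weight Consolidation technique boils down to adding a penalty to the training loss that makes it costly to move weights that mattered for earlier games. The sketch below is a minimal, illustrative PyTorch-style version of that penalty; the function and variable names (ewc_loss, old_params, importance, lam) are assumptions for illustration, not DeepMind's actual Atari code.

    import torch

    def ewc_loss(task_loss, model, old_params, importance, lam=1000.0):
        # Quadratic penalty that anchors weights that mattered for earlier tasks,
        # so learning a new game does not overwrite them (the core EWC idea).
        penalty = 0.0
        for name, param in model.named_parameters():
            penalty = penalty + (importance[name] * (param - old_params[name]) ** 2).sum()
        return task_loss + (lam / 2.0) * penalty

Here old_params holds a snapshot of the weights after the previous game was learned, importance holds a per-weight estimate of how much each one mattered to that game, and lam controls how strongly old knowledge is protected against being overwritten.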

The whole thing is based on neuroscientific studies of how neural pathways in animals behave during learning. Essentially, the pathways forged and used for one task have to be kept open so they can be reused for a related task. Accomplishing this in an AI is difficult because of the virtual, electronic nature of the "brain" involved, but it is not impossible. While the DeepMind team may have figured out how to keep those pathways active, the level of learning is not yet up to par with human learners.
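In the published EWC work, deciding which "pathways" to keep open comes down to estimating how important each weight was to the previous task, roughly via a diagonal Fisher information approximation built from squared gradients. The sketch below illustrates that idea under assumed names (estimate_importance, data_loader, loss_fn); it is a simplified illustration, not DeepMind's implementation.

    import torch

    def estimate_importance(model, data_loader, loss_fn):
        # Accumulate squared gradients over the old task's data; weights whose
        # gradients are consistently large are treated as "pathways" to protect.
        importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        for inputs, targets in data_loader:
            model.zero_grad()
            loss_fn(model(inputs), targets).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    importance[n] += p.grad.detach() ** 2
        return {n: imp / max(len(data_loader), 1) for n, imp in importance.items()}

The resulting importance values feed directly into the penalty shown earlier, so weights that barely mattered to the old game remain free to change while critical ones are held roughly in place.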

About the Author

Daniel Fuller

Senior Staff Writer
Daniel has been writing for Android Headlines since 2015, and is one of the site's Senior Staff Writers. He's been living the Android life since 2010, and has been interested in technology of all sorts since childhood. His personal, educational and professional backgrounds in computer science, gaming, literature, and music leave him uniquely equipped to handle a wide range of news topics for the site. These include the likes of machine learning, voice assistants, AI technology development, and hot gaming news in the Android world. Contact him at [email protected]