Google's DeepMind unit initially rose to fame in the tech world by creating AlphaGo and using it to beat Go world champion Lee Sedol, but its newest feat, an AI that learns and applies knowledge over time, may well eclipse that in the history books. Replicating the way the human mind works has long been a goal of AI research, as demonstrated by neural networks and machine learning. The problem with those approaches is that they could usually learn only a single task, or a small set of related tasks, well. DeepMind's newest creation can learn sequentially and build outward, much like a human: if the AI learns how to play Pac-Man, for example, it can apply its knowledge of video game basics to learn The Legend of Zelda, and eventually more complex fare like Gran Turismo and Front Mission.
While the concept at work here is groundbreaking for AI, the program is still more limited in its recall and synthesis than the average human. Rather than drawing on a general pool of knowledge that includes a number of basic lessons, it has to look back to a specific piece of related knowledge in order to have a leg to stand on in a new situation. To test the new AI, the DeepMind team set it on a series of Atari games back to back, equipped with an algorithm known as Elastic Weight Consolidation. Using that algorithm, the AI figured out how to play each game by watching the score increase, then carried its new skills over to the next game.
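The core idea behind Elastic Weight Consolidation is that, after learning one task, the weights that mattered most for it are anchored, so training on the next task is penalized for moving them. The sketch below illustrates that quadratic penalty; the variable names and importance values are illustrative assumptions, not DeepMind's actual code or data.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic penalty pulling the current weights (theta) back toward
    the weights learned for the previous task (theta_star), scaled per
    weight by its estimated importance (fisher). Added to the new task's
    loss so important old-task weights resist change."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

theta_star = np.array([1.0, -2.0, 0.5])   # weights after the first game
fisher     = np.array([10.0, 0.1, 5.0])   # per-weight importance (assumed)
theta      = np.array([1.1, -1.0, 0.5])   # weights while learning the next game

# Moving the unimportant weight (index 1) a long way is cheap; moving
# the important weight (index 0) even slightly costs proportionally more.
print(ewc_penalty(theta, theta_star, fisher))
```

In effect, unimportant weights stay free to adapt to the new game while important ones are held near their old values, which is how the system keeps earlier skills intact.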
The whole approach is based on neuroscientific studies of how neural pathways in animals behave during learning. Essentially, the pathways forged and used for one task have to be kept open so they can be reused for a related task. Accomplishing this in an AI is difficult because of the virtual, electronic nature of the "brain" involved, but it is not impossible. And while the DeepMind team may have figured out how to keep those pathways active, the level of learning is not yet on par with human learners.