AlphaGo Can Now Teach Itself How To Play Chess, And More

Google's DeepMind unit has created a new version of its famous AlphaGo artificial intelligence program, built around a general reinforcement learning algorithm that allows it to be fed nothing more than the rules of something like a game and then teach itself to superhuman performance without any human intervention. Previous versions of AlphaGo required humans to specify a goal and provide at least some data before the program could begin showing meaningful improvement. This version eliminates that requirement; in DeepMind's paper, the new program, dubbed AlphaZero, taught itself to superhuman levels in chess and shogi after being given only the rules of each game, and it managed to do so in just 24 hours.

AlphaZero draws on decades of AI research in its self-improvement, pairing self-play reinforcement learning with a search guided by a deep neural network. Its base program is packed with machinery for learning rather than knowledge about any particular game. While not built as such, it is arguable that this version of AlphaGo edges toward artificial general intelligence, since it can take on a number of different tasks and improve at them over time. In theory, AlphaZero could keep improving across every domain it is set to master, given sufficient time, processing power, and compute nodes.
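To make that self-play idea concrete, here is a minimal, hypothetical Python sketch of the training loop such a system might run. The Game and PolicyValueNet classes below are placeholder stand-ins, not DeepMind's code; the real system uses a deep neural network and a Monte Carlo tree search to produce far stronger move choices than the raw policy sampled here.

```python
# Hypothetical sketch of an AlphaZero-style self-play training loop.
# Game and PolicyValueNet are toy stand-ins for illustration only.
import random

class Game:
    """Toy stand-in for a two-player game defined only by its rules."""
    def __init__(self):
        self.moves_played = 0
    def legal_moves(self):
        return [0, 1, 2]                  # placeholder action space
    def play(self, move):
        self.moves_played += 1
    def is_over(self):
        return self.moves_played >= 9     # fixed-length toy game
    def outcome(self):
        return random.choice([-1, 0, 1])  # placeholder result: loss/draw/win

class PolicyValueNet:
    """Stand-in for the network mapping a position to move probabilities
    and an expected outcome."""
    def predict(self, game):
        moves = game.legal_moves()
        return {m: 1 / len(moves) for m in moves}, 0.0
    def train(self, examples):
        pass                              # a gradient update would go here

def self_play_episode(net):
    """Play one game against itself, recording training examples."""
    game, history = Game(), []
    while not game.is_over():
        policy, _value = net.predict(game)   # search-improved policy in practice
        move = random.choices(list(policy), weights=list(policy.values()))[0]
        history.append((game.moves_played, policy))
        game.play(move)
    z = game.outcome()
    # Each recorded position is labeled with the game's final result.
    return [(state, policy, z) for state, policy in history]

net = PolicyValueNet()
for iteration in range(3):                # real training runs millions of games
    examples = []
    for _ in range(10):
        examples.extend(self_play_episode(net))
    net.train(examples)                   # stronger network -> stronger self-play
```

The loop captures the core cycle the paper describes: the program plays against itself, labels each position with the eventual result, and uses those games to improve the network that guides the next round of play.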

The implications of this development are far-reaching. Using these principles, AI programs could potentially improve their own capabilities dramatically given nothing more than a goal, broad or narrow, and enough computing power to run a large number of simulations. This development does not, by itself, open the door to AI programs that learn convincing emotions or create ever-smarter AI in their own image, but such tasks are within AlphaZero's purview if those in control of it decide to point it at them. Naturally, with all of the doomsday talk surrounding AI these days, much of it centering on calls for caution from Tesla CEO Elon Musk, there are protections in place to prevent the AI from doing anything that its creators and users don't want it to do.

About the Author

Daniel Fuller

Senior Staff Writer
Daniel has been writing for Android Headlines since 2015, and is one of the site's Senior Staff Writers. He's been living the Android life since 2010, and has been interested in technology of all sorts since childhood. His personal, educational and professional backgrounds in computer science, gaming, literature, and music leave him uniquely equipped to handle a wide range of news topics for the site. These include the likes of machine learning, voice assistants, AI technology development, and hot gaming news in the Android world. Contact him at [email protected]