Code for machine learning is written as algorithms. Ideally, it should be elegant and brutally efficient, and if it's doing its job well enough, you shouldn't even notice it's there while the machine does its thing. The only trace of the code should be its purpose: enabling the machine to learn and think. A certain class of historical warrior was quite similar: brutally efficient, elegant in execution, and hidden from view. If they did their job well, one could only venture a guess that they had been there at all, and a nebulous guess at that. That warrior class, of course, is the ninja. With the similarities stacked so, it should come as no surprise that Google's newest boot camp for its elite coders to hone their prowess in the fine art of machine learning is named after these shadows of Japanese history: the simply and aptly named Machine Learning Ninja program.
Coding for machine learning is done in algorithms. Rather than following outright instructions, algorithms essentially teach computers to think in terms of "if," reading environmental variables to figure out which instruction set or data set to call up, or which function to perform. For example, if you wanted to teach a robot to play the guitar using machine learning, you could sit it in front of a computer streaming a live guitar YouTube channel and tell it to watch the humans' hands. You'd first have to teach it to identify hands and guitars, and tell it what the end result should be like, something along the lines of "I want you to play like a cross between Jimi Hendrix and Derek Liu," but in computer language and using examples. While guided learning may have been needed in the past, most AIs can now figure out what to watch for and learn it on their own, thanks to the magic of neural networks: computing setups that loosely mimic the way a human brain works. The fact that Google is literally training a crack task force for exactly this sort of specialized work speaks volumes about its mindset.
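The "teach by example rather than by explicit rules" idea above can be sketched with the simplest possible neural unit, a perceptron. This is purely illustrative (real neural networks stack many such units in layers): the program is never told the rule, only shown labeled examples, and it adjusts its weights until its own "if the weighted sum crosses a threshold" test matches the desired behavior.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn two input weights plus a bias from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            # The learned rule is itself an "if": fire when the
            # weighted sum of the inputs crosses zero.
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach it logical AND purely from examples, never from explicit rules.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2), _ in data]
print(predictions)  # matches the labels: [0, 0, 0, 1]
```

The same loop with richer inputs (say, pixels of a hand on a fretboard) and many more units is, in spirit, what the guitar-watching robot would be doing.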
According to Google CEO Sundar Pichai, machine learning is the future; it already plays a role in a good number of Google and Alphabet's products, and that role will only be ramped up from here. For some examples, we can look to this year's Google I/O, where AI built on machine learning was at the core of just about everything shown. Self-driving cars use machine learning along with a data network between them that shares the load and assigns roles, which sounds a lot like a neural network. The cars "crowdsource" the driving data gathered from one another to build their collective understanding of the road. A new Assistant platform was also announced, using machine learning to figure out, pre-emptively, how best to help its human users. Much like Google Now "learns" your routines and interests, Assistant aims to figure you out and offer what you need and want before you know you need and want it, on top of knowing exactly what you mean when you issue a command. This is made possible by machine learning built on a neural network, making it potentially far more powerful than Google Now could ever be, so long as everything goes well with its development and adoption. Naturally, Google's core Search product is also chock full of AI based on machine learning and neural networks; user trends, search term trends, demographics and site popularity are just a few of the pieces of data used to rank search results. Advertising, meanwhile, is handled by increasingly sophisticated methods of identifying individual users and getting to know them. These are, of course, only a few examples of Google's commitment to machine learning.
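To make the "pieces of data used to rank search results" idea concrete, here is a deliberately toy sketch of combining several signals into one relevance score. Every field name and weight here is invented for illustration; this is not Google's actual ranking, which is far more sophisticated and, today, learned rather than hand-weighted.

```python
def score(page, weights):
    """Combine a page's signal values into a single weighted score."""
    return sum(weights[signal] * page[signal] for signal in weights)

# Hypothetical signals loosely mirroring those named in the article:
# trend match, site popularity, and demographic fit. Weights are made up.
weights = {"trend_match": 0.5, "popularity": 0.3, "demo_fit": 0.2}

pages = [
    {"name": "a.example", "trend_match": 0.9, "popularity": 0.4, "demo_fit": 0.7},
    {"name": "b.example", "trend_match": 0.3, "popularity": 0.9, "demo_fit": 0.5},
]

# Rank pages by descending score.
ranked = sorted(pages, key=lambda p: score(p, weights), reverse=True)
print([p["name"] for p in ranked])  # ['a.example', 'b.example']
```

In a machine-learned ranker, those weights would be tuned automatically from user behavior rather than set by hand, which is exactly the shift the article describes.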
On the extreme end of the spectrum, a bot known as AlphaGo, child of Alphabet-owned DeepMind, flexed its muscles earlier this year when it beat Go world champion Lee Sedol, proving that an AI can play the ancient and complicated game on level with any human. A feat previously thought impossible was practically trivialized when AlphaGo stomped Sedol, and other firms with an interest in AI, such as Facebook, began claiming they could produce something to rival or even best AlphaGo. Just like that, a milestone in computing was left in the dust. That incredible progress is the hallmark of Google's approach to just about anything, but its newfound focus on AI is especially important. The future of mainstream computing, as most see it, can go one of three ways. Users can continue to use increasingly powerful devices to do a wider range of activities in a wider range of ways, much like the current VR push; something considered a high-end gaming PC right now, requiring oodles of electricity and sporting parts worth thousands, will likely be dwarfed by something fitting in the palm of a user's hand within the next decade or so. A second possibility is the rise of cloud computing, where all or most devices that users touch "phone home" to a group of centralized, uber-powerful servers that do all of the heavy lifting for them, including storage. The third possible future is one where just-adequate power in user devices is backed by incredibly powerful neural networks bearing sophisticated AI based on machine learning; each user would have a personal, extremely powerful AI that could even pass a Turing Test, a situation quite familiar to fans of a certain classic videogame icon robot's adventures on a handheld from the early 2000s. Whatever the outcome, Google is pushing hard on all fronts to ensure that it remains relevant.