Tech companies typically build hardware by manufacturing the shell themselves and sourcing the parts from others: your phone may have a Qualcomm processor, while your computer may have RAM chips made by SK Hynix. Some companies, of course, such as Huawei and Samsung, go further than the shell and design their own CPUs to work as harmoniously as possible with their hardware. It would seem that Google is among the latter crowd. An announcement today revealed the existence of the TPU, or Tensor Processing Unit. A highly specialized processor, the custom chip is able, in certain applications, to rival performance estimates for processors three generations, or about seven years, from now. According to today's announcement, that same incredible chip will be integrated into Google Cloud Platform for customers to use as they please.
Named after Google's TensorFlow machine learning framework, a TPU may not be the ideal chip to power your game of Crysis 3, but if you're building an A.I., training machine learning models and neural networks, or doing anything else that requires incomprehensibly huge amounts of number crunching, the TPU is your best bet. Until now, the TPU's existence has been a carefully guarded secret. The specialized units can be found inside Google's data centers and in the hardware behind the legendary AlphaGo A.I., which mastered the ancient Chinese game of Go and bested world champion Lee Sedol in a 4-to-1 upset that left A.I. authorities speechless.
The kicker with the TPU is that it's tailored to machine learning by tolerating the kind of reduced-precision arithmetic that would drive a normal CPU up a wall in short order. Because neural networks shrug off small numerical errors, the TPU can get away with simpler circuitry per operation and crank through a volume of computations that today's full-precision CPUs and GPUs could only dream of. Google's blog post didn't mention how long TPUs have been in development, but it did say that they went from the prototyping phase to full integration into Google's data centers in a mind-blowing 22 days. Google plans to use the chip to cement its lead in the nascent machine learning industry, then make that power available to customers, on top of using it in its own applications. Such incredible power may just allow Google to edge out the competition in the IaaS market.
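The precision trade-off described above can be sketched in a few lines. The example below is a hypothetical illustration, not Google's actual implementation: it shows that an 8-bit quantized dot product (the core operation in neural-network inference) lands very close to the full-precision answer, which is why a chip can trade precision for many more cheap operations per watt.

```python
# Hedged sketch (not the TPU's design): neural-net math tolerates
# low precision. Quantizing both vectors to signed 8-bit integers
# barely changes the result of a dot product.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)
inputs = rng.normal(size=1000).astype(np.float32)

exact = float(weights @ inputs)  # full-precision reference

def quantize(x):
    """Map floats to int8 with a per-vector scale factor."""
    scale = float(np.abs(x).max()) / 127.0
    return np.round(x / scale).astype(np.int8), scale

qw, sw = quantize(weights)
qi, si = quantize(inputs)

# Multiply-accumulate in 32-bit integers, then rescale to float.
approx = int(qw.astype(np.int32) @ qi.astype(np.int32)) * sw * si

print(f"exact={exact:.3f}  approx={approx:.3f}  "
      f"abs_err={abs(approx - exact):.3f}")
```

Each 8-bit multiplier needs far less silicon and power than a 32-bit floating-point unit, so a precision-tolerant chip can pack in many more of them, which is the essence of Google's claim.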