IBM Bakes Neural Networking Into New Silicon, Could Help Boost AI


Research conducted by IBM on A.I.-specific chips, with portions of a neural network built into the hardware itself, may lead to A.I. that is up to 100 times more efficient than current technologies. That is based on a new research paper published by the company’s researchers in Nature, which shows how mapping a neural network onto non-volatile memory can augment the process of creating A.I. hardware. The effort will still take quite some time to come to fruition, however. As of this writing, the hardware requires no fewer than five transistors and at least three other major components, compared with a single transistor on a typical computer chip. Beyond that, a substantial portion of the researchers’ tests has only been conducted mathematically rather than in real hardware, which means there is still quite a lot of work left to be done.
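To give a rough sense of the idea, the following is a minimal, hypothetical sketch (not IBM's actual design) of why storing weights in memory itself is attractive: in analog in-memory computing, each synaptic weight is held as a device conductance, and applying input voltages to a crossbar of such devices produces output currents that sum naturally by Kirchhoff's law, computing a whole matrix-vector product in one physical step. The `analog_matvec` function and its noise model below are illustrative assumptions, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(weights, inputs, noise_std=0.01):
    """Toy model of a memory-crossbar multiply-accumulate.

    Each weight stands in for a stored device conductance; real analog
    devices are imperfect, so we add Gaussian read noise to mimic that.
    """
    noisy_weights = weights + rng.normal(0.0, noise_std, weights.shape)
    # Column currents = transpose(conductances) @ input voltages,
    # i.e. the entire matrix-vector product happens "in memory".
    return noisy_weights.T @ inputs

weights = rng.normal(size=(4, 3))   # 4 inputs feeding 3 outputs
inputs = rng.normal(size=4)

analog_out = analog_matvec(weights, inputs)
digital_out = weights.T @ inputs    # exact software reference
print(np.max(np.abs(analog_out - digital_out)))  # small, noise-limited gap
```

The point of the sketch is the shape of the trade-off the article describes: the analog result differs from the exact software result only by device noise, which is why accuracy can remain comparable while the computation itself avoids shuttling weights between memory and processor.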

Bearing that in mind, mapping a neural network onto the hardware itself takes a significant load off the software that normally houses the A.I., and the research does signal a first step toward making such chips a reality. The efficiency improvements would likely also improve performance, as is almost always the case with computer components. In this case, that means a significant boost to the speed at which an A.I. could be trained. The gain would not necessarily be 100 times, but it would be substantial. What’s more, the tests conducted by the researchers showed that the accuracy of the hardware-based neural network is equivalent to that of pure software implementations of the technology.

Taking matters further still, the creation of machine-learning hardware could be a boon to a wide variety of other companies. Google, in particular, is a great example, since it has spent the last several years working to incorporate its own A.I. into smartphones using cloud and edge networking. If IBM can successfully create a fully operational neural-network chip, those chips will probably follow the trend of other silicon and ultimately become small enough to embed in smartphones. That could mean massive improvements for A.I.-driven smartphone enhancements, ranging from more general offline machine learning to better photography.