MIT's New AI Chip Reduces Power Consumption By Up To 95%


Researchers at MIT have created a new neural network chip that massively reduces power consumption while still significantly boosting processing speed. In fact, according to a report published by the Massachusetts-based institution's own news organization, the chip can complete neural network computations at anywhere from three to seven times the speed of a traditional neural processing chip. That would be an accomplishment on its own, but the researchers say their chip also cuts the energy needed for those computations by 93 to 95 percent. The MIT chip is able to accomplish those feats, according to the source, by more closely mirroring the way electrical signals are stored and processed in the human brain.

Taking a closer look at what that means, it is immediately clear that MIT's chip doesn't function like a normal computer chip. The more common approach to processing is to store data in memory and move it back and forth to a separate processing unit, with the data bused between the two as it is processed. That may actually be more complex than what the algorithms behind A.I. and neural networking are trying to accomplish, according to Avishek Biswas – the MIT graduate student who led the chip's development. Going further, Biswas says that machine learning does require a large number of calculations, but that those algorithms can be simplified into dot-product operations. So, rather than sending data back and forth between components, the new chip implements the dot-product functionality in the memory itself, and that reduces energy consumption. Moreover, that same concept is why it is able to be faster while maintaining accuracy within two to three percent of a conventional, multi-layered neural network. The prototype chip can calculate dot products for as many as 16 nodes in one step, without having to move the data back and forth.
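To see why dot products matter here, the core of a fully connected neural network layer really is just one dot product per node: each node multiplies the incoming activations by its own weight vector and sums the results. The sketch below is purely illustrative (it is not MIT's chip logic or any published code from the research); it only shows the operation that the chip reportedly performs inside memory instead of shuttling data to a separate processor.

```python
# Illustrative sketch only: a neural network layer reduced to dot products,
# the operation MIT's prototype reportedly computes in memory.

def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def layer_forward(inputs, weight_rows):
    """One layer's raw outputs: a dot product between the input vector
    and each node's weight vector. The prototype chip reportedly handles
    up to 16 such nodes in a single in-memory step, with no round trip
    between memory and a separate processing unit."""
    return [dot(weights, inputs) for weights in weight_rows]

x = [1.0, 2.0, 3.0]          # activations from the previous layer
W = [[0.5, 0.0, 0.5],        # weight vector for node 1
     [1.0, 1.0, 1.0]]        # weight vector for node 2
print(layer_forward(x, W))   # [2.0, 6.0]
```

On a conventional chip, each multiply-accumulate in `dot` implies moving weights and activations over a bus; performing the same sum where the weights are stored is what saves the energy the researchers describe.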

For now, the immediate implication of the research is that A.I. chips for smartphones and other battery-powered technology – as well as other edge computing scenarios – could be made much more powerful while draining far less battery. However, because this is a prototype, it will likely require more testing and experimentation around scalability before its potential can be fully realized. With that said, it has caught the eye of at least one prominent figure in the tech world. Dario Gil, who currently serves as vice president of artificial intelligence at IBM, says the chip represents a "promising real-world demonstration of SRAM-based in-memory analog computing for deep learning," adding that the advancement could open the door to more complex implementations for image and video classification in IoT applications.

Copyright ©2018 Android Headlines. All Rights Reserved.

Daniel has been writing for AndroidHeadlines since 2016. As a Senior Staff Writer for the site, Daniel specializes in reviewing a diverse range of technology products and covering topics related to Chrome OS and Chromebooks. Daniel holds a Bachelor’s Degree in Software Engineering and has a background in Writing and Graphics Design that drives his passion for Android, Google products, the science behind the technology, and the direction it's heading. Contact him at [email protected]