AI Learns Much Faster Without Sample Data Redundancy – Report


New research from North Carolina State University's computer science division may lead to a breakthrough in the way AI is trained, reducing the time an AI system needs to learn to recognize a given object. The researchers, led by NC State Ph.D. student Lin Ning alongside co-author Hui Guan and professor Xipeng Shen, hit on the discovery through a realization that seems somewhat obvious in hindsight.

The team realized that some aspects of the process of training a deep learning neural network are extraordinarily redundant. Such systems are effectively taught by grouping 'consecutive' pixel data into sets that a convolutional neural network learns to use to seek out patterns. That process can now be shortened through a method the team calls 'Adaptive Deep Reuse', allowing training to be completed as much as 69 percent faster.
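The core idea can be illustrated with a toy sketch. The details below (function names, the distance-based similarity check, the tolerance value) are illustrative assumptions, not the paper's actual implementation: if an incoming chunk of input data is close enough to one already processed, the cached result is reused instead of recomputing the expensive matrix product.

```python
import numpy as np

def reuse_forward(x_chunks, weights, tol=0.1):
    """Toy sketch of computation reuse: if an input chunk is close
    enough to one seen before, reuse its cached output instead of
    recomputing the (expensive) matrix product."""
    cache = []  # list of (representative_chunk, cached_output)
    outputs, reused = [], 0
    for chunk in x_chunks:
        hit = None
        for rep, out in cache:
            if np.linalg.norm(chunk - rep) < tol:  # "similar enough"
                hit = out
                break
        if hit is None:
            hit = chunk @ weights  # full computation
            cache.append((chunk, hit))
        else:
            reused += 1  # computation skipped entirely
        outputs.append(hit)
    return np.stack(outputs), reused
```

The more redundancy there is across chunks (a repeated patch of sky, for instance), the higher the reuse count and the less arithmetic the network actually performs.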

Better still, Adaptive Deep Reuse can accomplish the same tasks as those already used to train visual recognition-based AI systems without losing accuracy.


What's the difference?

The AI training methods already used in real-world scenarios are central to the new Adaptive Deep Reuse process. The key difference between the two is the amount of data a deep learning algorithm needs to analyze. The team's clearest example involves machine learning examining images of an outdoor scene in order to recognize specific objects, such as people, within that scene.

Under currently used AI learning methods, a system may analyze millions of data samples showing outdoor scenes. There will be variances among those samples, but certain elements will be almost universal and repetitive. In the above-mentioned scenario, one example would be the sky in the scenes provided to the algorithm.


Many, if not all, of the scenes being examined throughout the process will contain the same patch of sky or one that is very similar. That similarity is often strong enough that the AI can skip analyzing that portion of the sample data, reducing not only the processing power required but also the time it takes the system to learn. The efficiency of Adaptive Deep Reuse can be further improved by implementing a locality-sensitive hashing method.
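Locality-sensitive hashing avoids comparing every chunk against every other chunk: similar inputs tend to hash to the same bucket, so similarity grouping costs roughly one hash per chunk. The random-hyperplane variant below is a generic illustration of the idea, not the specific scheme from the NC State paper, and the parameter choices are assumptions.

```python
import numpy as np

def lsh_buckets(chunks, n_planes=8, seed=0):
    """Group input chunks by a random-hyperplane LSH signature.
    Chunks landing in the same bucket are treated as similar, so
    the network only processes one representative per bucket."""
    rng = np.random.default_rng(seed)
    dim = chunks.shape[1]
    planes = rng.normal(size=(n_planes, dim))
    # Sign of the projection onto each hyperplane yields one bit.
    bits = (chunks @ planes.T) > 0
    buckets = {}
    for i, row in enumerate(bits):
        buckets.setdefault(tuple(row), []).append(i)
    return buckets
```

Identical chunks always produce identical signatures, and near-identical chunks usually do, which is what makes the coarse "same patch of sky" filtering cheap.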

Real-world impacts

The advancement embodied in the new method isn't foolproof. The researchers have already undertaken efforts to discover suitable thresholds for both the size of data chunks and the degree of similarity required. They also created an adaptive algorithm to adjust those thresholds on the fly as an AI is being trained.
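One plausible way to adapt such a threshold is a simple schedule that tolerates coarse matches early in training and demands closer matches as training converges. This is a hypothetical illustration only; the schedule shape, parameter names, and values are assumptions, not the researchers' actual algorithm.

```python
def similarity_threshold(epoch, total_epochs, start=0.5, end=0.05):
    """Hypothetical schedule: allow aggressive reuse early on,
    then linearly tighten the similarity threshold so that late
    training, where accuracy matters most, reuses more cautiously."""
    frac = epoch / max(total_epochs - 1, 1)  # progress in [0, 1]
    return start + (end - start) * frac
```

The intuition is that early gradients are noisy anyway, so coarse reuse costs little, while tighter thresholds near the end protect final accuracy.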


None of that is likely to mean that Google, Amazon, or any other company would be able to immediately start implementing the method without making further adjustments to suit a given machine learning scenario. Each data set will have its own variations that need to be accounted for and use cases will necessarily require different standards in terms of accuracy.

For example, Google's vision-specific Lens AI is used for recognizing everyday objects, generating artsy selfies, and performing image searches, and doesn't require a high level of certainty to ensure safety. The AI built by its sister company, autonomous vehicle developer Waymo, requires a much more stringent approach, since failure to recognize an object or properly analyze a scenario can result in a person being injured or killed.


Copyright ©2019 Android Headlines. All Rights Reserved.

Junior Editor

Daniel has been writing for AndroidHeadlines since 2016. As a Senior Staff Writer for the site, Daniel specializes in reviewing a diverse range of technology products and covering topics related to Chrome OS and Chromebooks. Daniel holds a Bachelor’s Degree in Software Engineering and has a background in Writing and Graphics Design that drives his passion for Android, Google products, the science behind the technology, and the direction it's heading. Contact him at [email protected]
