NVIDIA Co-Develops New A.I. Method To Remove Noise In Photos

NVIDIA has announced a new machine learning method for teaching A.I. to smooth out noise in imagery, co-developed with Aalto University and MIT. Unlike similar achievements using the technology, however, the new approach doesn't require the A.I. to observe clean photographs. Instead, it can be taught to infer what a given image should look like without graininess or missing pixels, without ever being shown a clean reference image. In other words, it learns solely by looking at images that already contain a high level of noise. What's more, it uses nearly the same training process as the programs that do rely on clean references. That's significant because, in many cases, it means reduced training time. NVIDIA also says the method often performs at least as well as other methods that require distortion-free imagery for training.
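The statistical trick behind training without clean references can be illustrated with a toy sketch. The idea (hedged here as a simplified assumption, not NVIDIA's actual implementation, which uses a deep network) is that an L2 loss computed against noisy targets is minimized by the mean of the noise distribution, and that mean equals the clean signal when the noise is zero-mean. The per-pixel "denoiser" below is just a learned value per pixel, and all names and parameters are hypothetical:

```python
import numpy as np

# Toy illustration of learning from noisy targets only.
# Assumption: a trivial per-pixel "model" stands in for a real denoising
# network; the point is that the clean image is never shown to training.

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=64)  # hypothetical clean "image" (64 pixels)

def noisy(img):
    """Return a fresh zero-mean Gaussian noise realization of the image."""
    return img + rng.normal(0.0, 0.3, size=img.shape)

# "Train" by gradient descent on ||pred - noisy_target||^2, where the
# target is a new noisy copy of the image at every step.
pred = np.zeros_like(clean)
lr = 0.05
for step in range(2000):
    target = noisy(clean)             # training never sees `clean` itself
    grad = 2.0 * (pred - target)      # gradient of the L2 loss w.r.t. pred
    pred -= lr * grad

error = np.abs(pred - clean).mean()   # small: the clean signal is recovered
```

Because each update nudges the prediction toward a different noisy copy, the noise averages out over many steps and the prediction settles near the underlying clean values, which is the intuition the researchers scale up with a convolutional network.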

The implications, of course, would primarily impact end users who just want to clean up older photos or those taken in less-than-ideal lighting conditions. The method could eventually see use in smartphone cameras that don't necessarily feature top-of-the-line hardware or software optimizations, since graininess and artifacts are arguably the most common complaints about budget handsets. There will almost certainly be additional use cases that haven't been dreamed up yet, and the process may prove useful in fields such as security monitoring as well. That's partly because the researchers built on the Google Brain-developed TensorFlow framework, an A.I. tool that is often run via the cloud, making this a solution that could feasibly work on connected mobile devices. The training process itself involved more than 50,000 images from the ImageNet validation set. Beyond that, medical imagery could eventually be greatly improved using A.I. in this way, according to NVIDIA.

The current hardware also sets limitations for now. NVIDIA supplied the underpinnings for this round of experiments using cuDNN acceleration on top of its Tesla P100 GPUs. Those are server-grade graphics cards and not at all a good fit for a smartphone or other mobile device. Setting that aside, the researchers will be showcasing the new method in Sweden this week at the International Conference on Machine Learning (ICML) 2018.

About the Author

Daniel Golightly

Senior Staff Writer
Daniel has been writing for AndroidHeadlines since 2016. As a Senior Staff Writer for the site, Daniel specializes in reviewing a diverse range of technology products and covering topics related to Chrome OS and Chromebooks. Daniel holds a Bachelor's Degree in Software Engineering and has a background in Writing and Graphic Design that drives his passion for Android, Google products, the science behind the technology, and the direction it's heading. Contact him at [email protected]