NVIDIA Deploying Second-Gen DGX Supercomputer For Radiology

September 6, 2017 - Written By Daniel Fuller

NVIDIA has created a new iteration of its DGX supercomputer, purpose-built for AI workloads, and will be deploying it for medical applications, starting with radiology operations at Massachusetts General Hospital’s Center for Clinical Data Science. The new system is built on NVIDIA’s Volta architecture. The new DGX-1 rig provides roughly three times the power of its predecessor and can, for certain tasks, match the combined computational power of around 800 average CPUs. This custom build’s first duty will be helping to analyze radiology samples, learning to determine when risk factors or diseases are present, and then handing those samples over to human medical staff.

The goal here is to create an AI assistant that can integrate into a doctor’s workflow and perform initial screening on a mass scale. The initiative is already underway: data gathered by the Center for Clinical Data Science is beginning to be used by physicians in the Boston area, and work to train machine learning models to spot abnormalities in a number of different types of medical data is well along. To help with that goal, NVIDIA is not only working with the Center for Clinical Data Science to deploy DGX-1 systems in data centers and compute stacks, but the graphics card giant has also created a personal workstation with about half the power of a DGX-1, called the DGX Station. This workstation is capable of far more than typical workstation tasks like CAD and data modeling; it can receive, analyze, interpret, and train on data in real time, and even run AI models on-device while performing other tasks, albeit at a much smaller scale than data center equipment. These rigs will help developers and data scientists alike to further research and contribute to the software stack and training samples of DGX-1 clusters.
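To make the screening idea above concrete, here is a minimal toy sketch of the triage step: a model scores each scan, and anything above a threshold is routed to a human radiologist. Everything in it is hypothetical for illustration — the scoring function, threshold, and scan format are stand-ins, not part of NVIDIA’s or MGH’s actual software, where a deep neural network would produce the score.

```python
# Toy triage sketch (illustrative only). A "scan" here is just a list of
# pixel intensities in [0, 1]; a real system would run a trained deep
# neural network over the image instead of this stand-in heuristic.

def toy_abnormality_score(scan):
    """Stand-in for a trained model: fraction of unusually bright pixels."""
    bright = sum(1 for p in scan if p > 0.8)
    return bright / len(scan)

def triage(scans, threshold=0.25):
    """Split scans into those flagged for a radiologist and those cleared."""
    flagged, cleared = [], []
    for name, scan in scans:
        if toy_abnormality_score(scan) >= threshold:
            flagged.append(name)   # hand over to human medical staff
        else:
            cleared.append(name)
    return flagged, cleared

if __name__ == "__main__":
    scans = [
        ("scan_a", [0.9, 0.95, 0.2, 0.85]),  # mostly bright pixels
        ("scan_b", [0.1, 0.2, 0.15, 0.3]),   # dim throughout
    ]
    print(triage(scans))  # → (['scan_a'], ['scan_b'])
```

The point of the structure, regardless of the model behind the score, is that the AI never renders a final judgment: it only decides which samples a human sees first.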

While the new supercomputer’s task may seem daunting, the sheer machine learning and neural networking power offered by the Volta architecture, and particularly this newest iteration of it, is comparable to custom-built units from Google. Each Volta GPU features 640 Tensor Cores and is capable of putting out on the order of 100 teraflops of deep learning performance. Splitting tasks up across all of those cores, it’s not hard to imagine virtual neural networks made up of tens of thousands of units, each with somewhere close to the power of the average consumer PC.
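A quick back-of-the-envelope check shows where a figure in that range comes from. Using publicly quoted Tesla V100 numbers (assumptions on our part, not stated in the article): 640 Tensor Cores, each completing a 4×4×4 matrix multiply-accumulate (64 fused multiply-adds, i.e. 128 floating-point operations) per clock, at a boost clock of roughly 1.53 GHz.

```python
# Back-of-the-envelope peak throughput for a Volta GPU's Tensor Cores.
# The specific numbers below are assumed V100 specs, not from the article.

tensor_cores = 640
flops_per_core_per_clock = 128   # 64 fused multiply-adds, 2 ops each
boost_clock_hz = 1.53e9          # ~1.53 GHz boost clock

peak_flops = tensor_cores * flops_per_core_per_clock * boost_clock_hz
print(f"{peak_flops / 1e12:.0f} teraflops")  # prints "125 teraflops"
```

That works out to roughly 125 teraflops of mixed-precision matrix math, consistent with the ballpark figure above.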