The field of artificial intelligence is now advanced enough for AI solutions to trick contemporary fingerprint readers in a relatively consistent manner, researchers from Michigan State University and New York University Tandon wrote in a new cybersecurity paper. Called Latent Variable Evolution, the technique is essentially a method of creating fake fingerprints that share enough common traits with a large number of real ones to fool simple biometric solutions into wrongly authenticating them. The entire philosophy behind the research rests on the notion that few fingerprints are truly unique and many are alike, especially in scenarios wherein the equipment being attacked has been programmed to accept even partial fingerprints for the sake of added user convenience, i.e. faster authentication.
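The partial-fingerprint argument can be illustrated with a few lines of arithmetic: if a sensor stores several small partial templates per user and accepts a match against any one of them, an attacker's odds compound with every stored partial. The sketch below is purely illustrative; the probability `p` and the template counts are hypothetical numbers, not figures from the paper.

```python
# Illustrative only: why partial-fingerprint matching aids spoofing.
# Assume each stored partial template independently has a chance p of
# falsely matching a given fake print (p is a hypothetical value here).
def false_accept_probability(p: float, templates: int) -> float:
    """Chance that at least one of `templates` stored partials matches."""
    return 1.0 - (1.0 - p) ** templates

# A sensor storing many small partials multiplies the attacker's odds:
print(false_accept_probability(0.01, 1))   # one full template
print(false_accept_probability(0.01, 12))  # twelve partial templates
```

With a per-template false-match chance of 1 percent, a single full template yields a 1 percent acceptance chance, but twelve independently matched partials push that above 11 percent, which is the convenience-versus-security trade-off the researchers exploit.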
To test that hypothesis, the researchers trained a generative adversarial network, a popular tool in unsupervised machine learning projects, on a realistic fingerprint data set. In such a network, one model generates synthetic fingerprint images while a second learns to differentiate them from the real pool, with the two improving each other until the generated prints become convincingly realistic. The trained generator then serves as something akin to a conventional computer key generator, only for fingerprints: searching over its inputs produces fake fingerprints capable of spoofing standard biometric solutions such as the VeriFinger 9.0 SDK. Using a combination of (simulated) inked fingerprints and readings from capacitive sensors, the researchers ultimately managed to fool their test systems 22.5 percent of the time. For added context, the same systems tested with legitimate samples not meant for spoofing had a 0.1-percent rate of failure, i.e. false positives. The method was much more successful when based on actual fingerprint readings taken by sensors instead of images of inked fingerprints, though in this day and age, the latter may be more likely to be stolen than the former.
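The overall pipeline can be sketched in a few dozen lines. Everything below is a stand-in, not the authors' code: the `generator` and `match_score` functions are hypothetical toys replacing a trained GAN and the VeriFinger matcher, and the simple mutate-and-keep loop stands in for the evolutionary optimizer the researchers used to search the generator's latent space.

```python
import random

LATENT_DIM = 8  # toy latent-vector size; real GANs use far larger ones

def generator(z):
    """Stand-in for the GAN generator: maps a latent vector to an 'image'.
    Here it is just a toy transform so the sketch runs end to end."""
    return [v * v for v in z]

def match_score(image, enrolled):
    """Stand-in matcher: fraction of enrolled 'prints' the image matches.
    A real attack would count matcher acceptances at a fixed threshold."""
    hits = 0
    for print_ in enrolled:
        distance = sum(abs(a - b) for a, b in zip(image, print_))
        if distance < 8.0:  # arbitrary toy acceptance threshold
            hits += 1
    return hits / len(enrolled)

def evolve_masterprint(enrolled, generations=200, seed=0):
    """Toy (1+1) evolution over the latent space: mutate the current
    latent vector and keep the mutant if it matches at least as many
    enrolled prints, so coverage never decreases."""
    rng = random.Random(seed)
    z = [rng.gauss(0, 1) for _ in range(LATENT_DIM)]
    best = match_score(generator(z), enrolled)
    for _ in range(generations):
        mutant = [v + rng.gauss(0, 0.1) for v in z]
        score = match_score(generator(mutant), enrolled)
        if score >= best:
            z, best = mutant, score
    return z, best

# Toy "enrolled" population in the same representation as the generator.
pop_rng = random.Random(42)
enrolled = [[pop_rng.gauss(0, 1) ** 2 for _ in range(LATENT_DIM)]
            for _ in range(50)]
z, coverage = evolve_masterprint(enrolled)
print(f"fraction of enrolled prints matched: {coverage:.2f}")
```

The key design point is that the evolution never evaluates pixels directly: it only nudges the generator's compact latent input, so every candidate stays a plausible-looking fingerprint while the search maximizes how many enrolled prints it matches.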
Background: Fingerprint readers have been a smartphone staple for years now, though manufacturers are often quick to highlight that such biometric authentication isn't necessarily the safest option. In the field of biometrics, 3D cameras with depth-sensing capabilities that can profile one's entire face are a more secure alternative, and traditional passwords are still the hardest to crack, provided users follow recommended password practices. The latest trend in the mobile industry comes in the form of in-display fingerprint readers, with first-generation examples of that technology being of the optical variety and hence even less secure and more prone to false positives than conventional scanners. Ultrasonic sensors are said to be the answer to those concerns but aren't expected to be commercialized before early 2019, with Samsung supposedly being among the first manufacturers to use them.
Impact: The newly published paper is a reminder that fingerprint sensors are far from a perfect authentication solution, especially in scenarios wherein manufacturers don't protect their data feeds. Naturally, the disclosed attack vector requires hackers to have already obtained access to a given scanner's software input mechanism, meaning its practical uses may be severely limited by individual circumstances, though the arguably high success rate of the Latent Variable Evolution technique still makes the new research highly significant in the cybersecurity field.