Voice Assistants Can Be Hacked With An Inaudible Ultrasound

September 11, 2017 - Written By Daniel Golightly

Security for the world’s technology seems to be getting better all the time, but researchers at China’s Zhejiang University have discovered a new exploit that works across no fewer than seven digital voice assistant systems. The researchers call their proof of concept DolphinAttack because it uses sound frequencies outside the range of human hearing – in the ultrasound range – that A.I. voice assistants nonetheless pick up and respond to. Although an attack of this type is fairly limited on its own, it could feasibly be used in combination with other exploits to much greater effect.
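The article doesn’t detail how the inaudible commands are produced, but attacks of this kind are generally built by amplitude-modulating a recorded voice command onto an ultrasonic carrier; non-linearities in the target microphone then demodulate it back into the audible band. The sketch below is purely illustrative – the sample rate, carrier frequency, and function names are assumptions, not details from the research.

```python
import math

# Illustrative parameters (assumed, not from the article): a high sample rate
# is needed to represent an ultrasonic carrier above human hearing (~20 kHz).
SAMPLE_RATE = 192_000   # samples per second
CARRIER_HZ = 25_000     # ultrasonic carrier frequency

def modulate(voice_samples, depth=1.0):
    """Amplitude-modulate voice samples (floats in [-1, 1]) onto the carrier.

    The result sounds like silence to a human, but a microphone whose
    response is non-linear can recover the original audible envelope.
    """
    out = []
    for n, s in enumerate(voice_samples):
        carrier = math.sin(2 * math.pi * CARRIER_HZ * n / SAMPLE_RATE)
        # Scale by (1 + depth) so the output stays within [-1, 1].
        out.append((1.0 + depth * s) * carrier / (1.0 + depth))
    return out

# Example: a 400 Hz tone standing in for a recorded voice command (10 ms).
voice = [math.sin(2 * math.pi * 400 * n / SAMPLE_RATE)
         for n in range(SAMPLE_RATE // 100)]
ultrasonic = modulate(voice)
```

In a real attack the modulated signal would be played through an ultrasonic transducer; here it simply demonstrates why the payload is inaudible, since all of the output’s energy sits around the 25 kHz carrier.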

The researchers released a video to YouTube in late August showing what DolphinAttack can do, and the results are pretty impressive. It is important to remember that, although the video shows the team using an iPhone’s Siri voice assistant, the exploit has also been tested against Google’s Assistant, Amazon’s Alexa, and Microsoft’s Cortana. The team has even tested it on some automaker systems, such as the one made by Audi. Ultimately, what makes the exploit dangerous is that it can be used, as shown in the video, even with the device screen turned off and locked. Moreover, as voice command systems become smarter and more deeply interwoven with device control, the range of what is possible through inaudible, ultrasound-based attacks grows much wider.

With that said, the limitations of this attack vector are also clear. To begin with, sounds in the ultrasound range are easily muffled or disrupted by other sounds, especially at greater distances. It would also be difficult in most cases to activate a voice assistant without alerting the user, thanks to the default auditory prompts built into many of the assistants in question. Beyond that, DolphinAttack is fairly limited in what it can actually access while a device is locked. Phone calls can be placed, messages sent, and similar voice commands executed, but applications holding banking information and other sensitive data are generally kept safe by passcodes, PINs, and other safety mechanisms that require a user’s direct input. Finally, some systems such as Google Assistant let users enable stricter voice recognition features like Trusted Voice, which can add another layer of protection, though not a perfect one.