Amazon's Alexa research team has developed voice recognition models that work offline. The company has published the details on its official website, and this matters for the usability of voice assistants: nearly every assistant today performs speech recognition in the cloud, because running such models normally requires powerful servers.
The Alexa Machine Learning team has developed algorithms for navigation, temperature control, and music playback that perform speech recognition locally, without an Internet connection. The team will present its results at this year's Interspeech conference in India, though some details have already been shared. The researchers explain that natural language processing models tend to have significant memory footprints, and that Alexa Skills (third-party apps which extend Alexa's functionality) are loaded on demand, which adds latency to voice recognition.
To address these issues, among others, the Alexa Machine Learning team opted for a two-pronged solution: parameter quantization and perfect feature hashing. Quantization, as the team explains, is the process of converting a continuous range of values into a finite range of discrete values. Hash functions, meanwhile, presented a problem: ordinary hash functions can produce collisions, where distinct values map to the same location in memory, and that can introduce errors along the way. The research team adopted perfect hashing to avoid this, since a perfect hash maps a given set of data items to the same number of memory slots with no collisions at all.
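Amazon's exact construction isn't described in the article, but the idea behind perfect hashing can be sketched in a few lines: for a fixed, known set of keys (here, some hypothetical skill names chosen purely for illustration), search for a seed under which a generic hash maps every key to its own slot in a table exactly the size of the key set.

```python
import hashlib


def find_perfect_seed(keys):
    """Search for a seed under which every key lands in a distinct slot
    of a table with exactly len(keys) slots -- i.e. a perfect hash
    for this specific key set. Brute-force search is fine for tiny sets."""
    n = len(keys)
    for seed in range(1_000_000):
        slots = {slot_for(seed, k, n) for k in keys}
        if len(slots) == n:  # no collisions: one slot per key
            return seed
    raise RuntimeError("no perfect seed found in search range")


def slot_for(seed, key, n):
    """Seeded hash of a key, reduced to a table index."""
    digest = hashlib.md5(f"{seed}:{key}".encode()).hexdigest()
    return int(digest, 16) % n


# Hypothetical feature names, not taken from Amazon's models.
features = ["play_music", "set_temperature", "navigate", "volume_up"]
seed = find_perfect_seed(features)

# Build the collision-free table: every key occupies its own slot.
table = [None] * len(features)
for key in features:
    table[slot_for(seed, key, len(features))] = key
```

Because the key set is fixed in advance, the table has no wasted slots and lookups never need collision handling, which is what makes the approach attractive for memory-constrained, on-device models.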
As a result, the team managed to reduce memory usage 14-fold, which is a substantial saving. Interestingly, Amazon's Alexa Machine Learning team claims the reduction had a negligible effect on accuracy, with the offline algorithms performing almost identically to the cloud-based ones.
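The article doesn't break down where the 14-fold saving comes from, but the quantization half of the arithmetic is easy to see: storing each parameter as an 8-bit integer code plus a shared scale, instead of a 32-bit float, cuts storage roughly 4x on its own. A minimal linear-quantization sketch (not Amazon's actual scheme) looks like this:

```python
import struct


def quantize(weights, bits=8):
    """Linearly quantize floats to signed integer codes of the given
    bit width; returns the codes and the scale needed to dequantize."""
    qmax = 2 ** (bits - 1) - 1          # 127 for 8-bit codes
    scale = max(abs(w) for w in weights) / qmax
    codes = [round(w / scale) for w in weights]
    return codes, scale


def dequantize(codes, scale):
    """Recover approximate float values from integer codes."""
    return [c * scale for c in codes]


# Hypothetical model weights, for illustration only.
weights = [0.731, -0.214, 0.009, -0.998, 0.456]
codes, scale = quantize(weights)
restored = dequantize(codes, scale)

# float32 costs 4 bytes per weight; an 8-bit code costs 1 byte.
orig_bytes = len(weights) * struct.calcsize("f")   # 20
quant_bytes = len(codes) * 1                       # 5
```

Each restored value differs from the original by at most half a quantization step, which is why accuracy can stay close to the full-precision model; the remaining factor of the reported 14x presumably comes from the hashing scheme and other compression.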