Google Assistant Becoming An Interpreter But Not Where It Matters The Most – CES 2019

[Image: Google Assistant Interpreter Mode, official render – CES 2019]

Google Assistant is becoming an interpreter, with Alphabet’s subsidiary now planning to infuse its artificial intelligence companion with the ability to translate spoken foreign languages in real time. The new feature is officially called Interpreter Mode and will be rolling out to eligible devices in the coming weeks, Google said Tuesday on the first day of the 2019 Consumer Electronics Show, which just opened its doors to the general public in Las Vegas, Nevada.

The catch is that this promising feature won’t be coming to the devices most likely to be on hand while you’re trying to converse with someone who doesn’t speak your language. In other words, Interpreter Mode isn’t planned for release on smartphones (Android or otherwise) in the immediate future. Google’s established AI development practices suggest the functionality will eventually make the jump to mobile, but given that the firm failed to even acknowledge contemporary smartphones as a potential platform for the feature, that transition isn’t expected to happen anytime soon.

Easy to understand, unlike foreign languages


For the time being, the rollout will only encompass Google Home devices and smart displays that support the company’s voice assistant. The functionality will initially span 27 languages and is relatively straightforward to activate: simply say “Hey Google, be my Spanish interpreter,” or ask for help understanding any other supported language while within earshot of an eligible device. Audio-only gadgets such as the Google Home Mini will naturally provide only audio feedback, whereas smart displays like the Google Home Hub will also deliver written translations of whatever you’re trying to make sense of. In practice, the service listens for an individual sentence, translates it, then waits for the next one until it’s told to stop.
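The listen–translate–repeat loop described above can be sketched in a few lines of Python. Google has not published how Interpreter Mode is implemented, so this is purely an illustrative model: `translate()` is a hypothetical stand-in for a real speech-to-text and translation backend, and the stop phrases are assumptions rather than documented commands.

```python
# Illustrative sketch of the Interpreter Mode interaction loop, NOT Google's
# actual implementation. translate() stands in for a real translation backend.

STOP_PHRASES = {"stop", "thank you"}  # hypothetical commands that end the mode


def translate(sentence: str, target: str) -> str:
    """Placeholder: a real backend would call a translation service here."""
    lookup = {("hello", "es"): "hola", ("good morning", "es"): "buenos dias"}
    return lookup.get((sentence, target), sentence)


def interpreter_loop(utterances, target_language: str):
    """Consume one spoken sentence at a time, emit its translation,
    and keep waiting for the next sentence until a stop phrase is heard."""
    results = []
    for sentence in utterances:
        if sentence.lower() in STOP_PHRASES:
            break  # the user ended the session
        results.append(translate(sentence.lower(), target_language))
    return results


# Example session: two sentences are interpreted, then the user stops the mode.
print(interpreter_loop(["Hello", "Good morning", "Stop"], "es"))
```

The key design point the article describes is sentence-level turn-taking: the assistant handles one complete utterance at a time rather than translating a continuous stream, which simplifies both recognition and playback on audio-only devices.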

The dream of doing the impossible continues

While details on how the technology actually works remain slim, its core is likely at least partially based on the real-time translation functionality of Google’s Pixel Buds, which has been available to consumers for over a year now. That service was initially supported only by the first two generations of Pixel handsets but eventually became compatible with every contemporary Android device when paired with the Pixel Buds. Its main issue lay in the fact that it was extremely unpredictable in practice, and it continues to suffer from those inconsistency problems to this day. Unlike that hardware-dependent solution, the Google Assistant Interpreter Mode is said to be significantly more reliable.


The idea of pairing AI with language databases in an attempt to break down language barriers between people is far from novel. Microsoft has been aggressively pursuing it for many years and Google is no different, though many industry watchers remain skeptical about how feasible the idea is in the foreseeable future. A single language contains countless idioms, figures of speech, nuances, and exceptions to its usual rules, all of which are extremely difficult to account for if a developer is aiming for a consistent, human-like translation experience. Even adding robust machine learning into the mix comes with no guarantees when such translations are meant to happen in real time.

On the other hand, attempting the seemingly impossible is nothing Google has shied away from in the past, especially when it comes to AI technologies, as evidenced by innovative and even controversial services such as Duplex. The main issue with the newly announced solution is that it won’t be making its way to the only computer most people carry with them at all times: the smartphone. While mobile support may be Google’s long-term goal, limiting Interpreter Mode to stationary smart home gadgets also cripples the company’s ability to rapidly improve the technology in the near future, seeing how its machine learning backend should, at least in theory, become better at fulfilling its purpose the more it’s used. Google’s reluctance to deploy the functionality to the more than two billion Android devices currently active in the world may also be indicative of the company’s skepticism about how ready the solution actually is for widespread use. More details on Interpreter Mode may be announced as early as next month, when the Mountain View, California-based firm is scheduled to appear at Mobile World Congress 2019 in Barcelona, Spain.