Google is rolling out an update to its AI Assistant to make it better at understanding the context of queries and the pronunciation of names. That’s based on recent reports following the update, shared alongside some useful videos highlighting the changes.
Beyond context, the biggest change, at least for users who make calls through Google Assistant, is the ability to teach the AI how to pronounce names. Specifically, that’s pronunciations for contacts stored in association with a Google account. The setting can be found by opening the Google app (or another Google app with account settings in its settings menu) and navigating to the “More” option.
From there, users need to navigate to “Settings,” then “Google Assistant,” and then “Your People.”
There, Google offers options to keep the “Default” pronunciation or to dictate and record a new one. The system will then remember that pronunciation across Google Assistant devices, such as Nest smart speakers, going forward.
Beyond pronunciations, what’s new with Google Assistant?
The other changes included with this update are likely to be less obvious to users. But they will nonetheless improve the overall experience.
To begin with, Google Assistant is getting “better” at differentiating what users are talking about when they make requests. In short, Google says that it can now “respond nearly 100 percent accurately” to some requests, such as those associated with timers and alarms.
That’s especially true in situations where multiple alarms or timers are set. The system can now more readily discern between them, giving more accurate responses and making changes more accurately when needed.
Language processing in general is improving too, particularly with regard to contextual understanding and follow-up questions. Google Assistant can now also take contextual cues from what’s shown on users’ screens, whether on smart displays or smartphones. All of this is geared toward making Google Assistant more conversational than ever.
This is rolling out now, if you’re in the right region
Of course, as with most Google Assistant features, these changes are rolling out in the US first. That’s starting with the improvements to contextual awareness. The pronunciation dictation feature will be added over the next several days, but it could take some time for every user to see the changes.
The update doesn’t appear to require any specific app version, however. So the implication is that it’s happening server-side and users shouldn’t need to do anything in particular to take advantage of it.