Google Assistant may be the only AI helper that isn't automatically subjecting its users to human review, and the search giant wants to make that even better. That's based on a recent blog post from the company detailing its practices, changes, and incoming updates for the service.
As things currently stand, Google indicates that it doesn't actually store user recordings taken from Assistant over the long term. In particular, it deletes data that's captured by accident when the AI mishears a wake word and activates. The data is already disassociated from an account when it's stored, and users need to opt in rather than out.
Soon, the company says, it will automatically delete all collected data within a few months of its collection, regardless of whether Assistant was activated deliberately or not.
Google will highlight and explain adjustments to how data is stored and collected in the app itself. The details will most likely be displayed alongside the company's usual encouragement for users to turn the setting on.
More pointedly, Google is turning the review feature off for every user with that incoming change, including users who have already agreed to the collection and listening. So users will need to reaffirm that they're okay with humans reviewing their clips before that resumes.
Everybody else is listening by default
Listening to users' clips and reviewing how the system responded is a common practice. Typically, its primary function is to improve the accuracy of the machine listening and the quality of the system's responses.
Alexa and Facebook Portal users don't have quite the same luxury in terms of choice and privacy, though. In fact, both of those services are listening by default. That's a fact Facebook only recently admitted, following backlash over the listening practices in its Messenger app.
Both Alexa and Facebook users do, of course, have options for opting out of the human-based review process. But neither company has gone as far as Google, and neither turns the feature off by default. That means a significant portion of users likely aren't even aware of the human-driven reviews. They would basically need to either read the fine print or stumble onto the appropriate settings to turn it off.
Arguably, offering users a choice to opt-out instead of an option to opt-in means that the review process can only just barely be called "optional" at all with Alexa or Portal.
Google Assistant will soon spy even less for those with the setting turned on
Google Assistant is, according to its creators, already designed not to review clips recorded by accident. In theory, that means that when users accidentally utter a phrase too similar to the wake words "Okay Google" or "Hey Google," the system recognizes that and doesn't send the clip to human reviewers. That almost certainly doesn't always work as planned, though, so Google is taking things a step further still.
There's no firm timeline set for the related changes. But users will soon have the option to adjust the sensitivity of wake word detection. The tech giant hasn't outlined exactly what that means for end users, so it isn't immediately clear what those controls will entail.
The controls' "sensitivity" designation seems to imply the AI will become better at detecting the appropriate phrase, to begin with. That should also mean that users can adjust how often speaking the phrase or similar phrases initiates interaction.
The company also says it listens to only around 0.2 percent of all audio snippets derived from queries, and that those are never associated with an individual user account. That's another thing it plans to improve moving forward. Google says it will add more security to that process in the future, including another layer of privacy filters governing which snippets are reviewed.