An international team of artificial intelligence engineers and other experts in consumer-grade AI and home automation regularly listens in on people’s conversations with Alexa, according to a new report citing people familiar with the matter.
The practice, which some will understandably say raises a number of privacy concerns, is part of the company’s process of improving existing Alexa services and creating entirely new ones. For example, if a certain sound or combination of sounds has a high failure rate, it’s not uncommon for numerous employees to personally listen to those attempts in order to determine whether the assistant needs a new capability or simply needs something like a music album name or actor name manually transcribed for easier recognition in the future.
It’s understood the practice has been part of the Alexa unit’s quality assurance processes from day one. Amazon never publicly confirmed it has people listening in on Alexa conversations until now, but it has discreetly obtained user permission to do so, burying the clause deep inside the terms of use associated with its AI-powered voice assistant. The firm didn’t make the permission a prerequisite for using Alexa in any shape or form; instead, the human verification aspect of its R&D and QA activities is carried out within the scope of an Alexa initiative users can opt out of. Employees from India, Costa Rica, the U.S., and a number of other countries are part of the team in charge of inspecting user interactions with the digital helper.
Following the publication of the report detailing these practices, Amazon issued a statement that largely confirmed its accuracy but noted the sample of user interactions with Alexa that undergo human verification is “extremely small.” The Seattle, Washington-based company acknowledged the practice can raise a variety of concerns, asserting it takes the privacy of its customers seriously. Whether that means it won’t be expanding the program in the future is unclear, but for now, Amazon insists the system it has in place is excellent at preventing any kind of abuse that could compromise Alexa users.
The employees in charge of verifying Alexa interactions don’t have direct access to data that could help them identify the users whose commands they’re listening to, the company said. Ultimately, Alexa still only sends commands to Amazon servers once it hears its trigger word, as Amazon often points out, though the system is hardly perfect. In fact, cases in which Alexa users ended up appalled and creeped out by the voice assistant’s behavior in relation to their privacy aren’t that uncommon and continue to emerge to this day, raising questions about how foolproof Amazon’s human verification setup truly is.
The manner in which the tech giant obtained user permission for human verification could also soon be deemed illegal in the United States. A bipartisan bill introduced in the Senate earlier this week seeks to curb precisely this type of behavior, which has allowed many Internet giants to harvest data en masse without any real checks or limitations that would protect consumers and their privacy.