Amazon is developing an artificial intelligence chip meant to speed up its future Alexa-enabled devices, The Information reported Monday, citing sources with knowledge of the effort. The Seattle, Washington-based e-commerce giant's first piece of silicon is said to emphasize on-device processing in order to reduce Alexa's reliance on the cloud, shortening its response times and making interactions with the assistant feel more natural, i.e. closer to talking to a human being. If completed, the chip is expected to be built into the company's future additions to the Echo lineup of smart speakers, though it's currently unclear whether Amazon would make it available to the growing number of third-party manufacturers interested in producing Alexa-enabled devices.
The project is understood to be spearheaded by Annapurna Labs, an Israeli semiconductor company Amazon acquired in 2015 for $350 million. The tech giant has yet to officially share any details on Annapurna's post-acquisition endeavors. Two years ago, the Israeli firm said it was working on a chip lineup called Alpine designed for various Internet of Things devices, including smart speakers, though the new report doesn't clarify whether the chip destined for Amazon's future devices will be part of the Alpine series. The exact scope of its capabilities is also unclear; insiders suggest that complex tasks such as music playback would still be handled entirely in the cloud and hence continue to exhibit a small delay between receiving a command and complying with it. The in-house silicon would instead handle more straightforward tasks like reciting the dates of national holidays or telling the time, according to the same report.
The company's existing operations have amassed hundreds of chip experts in recent years, according to some industry trackers. The Amazon Web Services division, which largely operates as a separate unit, has also been hiring chipmaking veterans over the last several years, suggesting it may be working on an answer to the Tensor Processing Unit that powers Google Cloud. The latest report is in line with recent predictions that IoT devices such as smart speakers are likely to become significantly more expensive in the coming years, as their growing power requirements push manufacturers toward more capable hardware that makes the devices less reliant on cloud processing but also raises production costs, which will ultimately be passed on to consumers.