Google tends to put a big focus on innovation. For the most part, its projects are in the works for years before consumers can get their hands on them or, in some cases, even hear about them. While there have been exceptions over the years, Google's recent I/O developer conference was chock-full of very early tech that consumers will be able to get their hands on long before it reaches its prime. The reason is that most of the tech on show relies on Google's artificial intelligence and machine learning systems in the background. While Google's A.I. tech is not itself in its early days, its integration into brand-new products, like the Allo chat app, requires the A.I. not only to learn its new environment but also to learn its users. This is a use that Google's A.I. and machine learning tech is only beginning to see, so it will take some time to adapt.
The phrase “early days” was thrown around by just about every Google employee at the conference, especially those who took the stage. It was used to describe everything from Google's new Daydream VR platform, with its rich support for powerful ecosystems like Unreal Engine and Unity, to the new Assistant, which is set, in due time, to compete on a level with the likes of Amazon's Alexa. The term was even applied to Project Ara, Google's modular smartphone project, despite its having been in the works for years and many fans having waited on it for quite some time.
The director of Google Now, Aparna Chennapragada, took to the stage to explain the phrasing. “If you get it wrong, there's a high cost to the user,” Chennapragada said of Google's nascent products. Essentially, the phrasing is about managing expectations. If customers buy a Daydream setup expecting a full, rich VR experience with all the trimmings right out of the box, the resulting disappointment could well mar their opinion of both Google and the product itself. The same goes for Assistant, Allo and other A.I.-based products. While Google has been working on A.I., machine learning and neural networks for years, those technologies have mostly operated behind the scenes until now. With its new products, Google is putting the controls into users' hands, and to say that it's “early days” for that approach is accurate indeed.