Qualcomm created a solution largely in search of a problem when they optimized the Snapdragon 835 to run TensorFlow applications natively and efficiently on-device, and now Google has created that problem in the form of TensorFlow Lite. TensorFlow Lite integrates directly with the full-size TensorFlow stack, allowing the exact same code and procedures to be run. While training a model on-device is far from feasible given the immense power gap between mobile hardware and things like Google's newest Tensor Processing Units, running almost any other code is on the table. Essentially, this means that a machine learning application can be trained on a traditional platform or via Google Cloud Platform, then simply plopped into an Android app, where it will run without a hitch so long as it's built on TensorFlow.
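As a rough sketch of that train-then-deploy flow, the Python below trains a toy Keras model on a desktop, converts it to the TensorFlow Lite flatbuffer format, and then runs inference through the Lite interpreter, the same runtime an Android app would bundle. This uses the current `tf.lite` converter API rather than the launch-era tooling, and the model and data are stand-in placeholders, not anything from the article.

```python
import numpy as np
import tensorflow as tf

# 1. Train a trivial model on the "traditional platform" (desktop/cloud).
#    The task here is a toy one: predict the sum of two inputs.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
x = np.random.rand(64, 2).astype(np.float32)
y = x.sum(axis=1, keepdims=True)
model.fit(x, y, epochs=1, verbose=0)

# 2. Convert the trained model to the .tflite format. On a real project
#    this flatbuffer is what gets shipped inside the Android app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# 3. Run inference with the Lite interpreter, as the app would on-device.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.array([[0.2, 0.3]], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print(result.shape)  # (1, 1): one prediction for one input row
```

The key point is step 3: nothing about the trained model changes between desktop and device, which is exactly the "plop it into an Android app" workflow described above.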
The Google Cloud Platform and TensorFlow are extensible to almost any project and platform. This means that today's move opens up entirely new sorts of machine learning scenarios to developers of all stripes, and thanks to Google Cloud Platform's competitive pricing and lack of upfront cost, amateur developers and researchers can play along fairly cheaply. On the consumer side, the Snapdragon 835 is the perfect debut chip for the platform; it's incredibly powerful and promises to be ubiquitous, already appearing in a number of recent and upcoming flagships like the Samsung Galaxy S8 and Lenovo's Moto Z2.
The implications here are massive; it's hard to overstate just how much of an impact this will have on machine learning and neural networking. A universally compatible, easily accessible, and easy-to-use platform is now fully integrated with one of the most powerful and ubiquitous mobile chips out there. Scenarios like using users' smartphones as nodes in a neural network over 5G, running machine learning code locally on a device for seamless performance, and perhaps even testing and controlling machine learning applications from a mobile device through the Google Cloud Platform are all on the table. It will be easier than ever to develop and fine-tune machine learning applications aimed at the mobile space, which in turn means more development in machine learning overall, along with bringing the power of onboard machine learning to the mobile arena and quite possibly enabling unprecedented use cases along the way.