Android Q Beta 3 Brings AI-Infused Suggested Actions – Google I/O 2019


Google faithful are probably familiar with the Smart Replies that have been in Gmail and Messages for a while, and Android Q Beta 3 brings an API that allows developers of third-party apps to utilize a similar trick called Suggested Actions.

This new trick does exactly what it says on the tin – developers who tap into the API can attach Suggested Actions to notifications, tailoring them to the user's current situation.
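As a rough illustration of what attaching an action looks like, here's a minimal Kotlin sketch using AndroidX's `NotificationCompat`. The channel ID, text, and pending intent are placeholders, not anything from the article; the `setAllowSystemGeneratedContextualActions` flag is the Android Q switch that lets the system append its own suggestions alongside the developer's.

```kotlin
import android.app.Notification
import android.app.PendingIntent
import android.content.Context
import androidx.core.app.NotificationCompat

// Sketch: a notification carrying one hand-defined action, with the
// system also allowed to attach its own Suggested Actions on Android Q.
// "chat_channel" and replyIntent are assumed to be set up elsewhere.
fun buildNotification(context: Context, replyIntent: PendingIntent): Notification =
    NotificationCompat.Builder(context, "chat_channel")
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle("New message")
        .setContentText("Want to grab lunch tomorrow?")
        // A Suggested Action the developer defines directly.
        .addAction(
            NotificationCompat.Action.Builder(
                android.R.drawable.ic_menu_send, "Reply", replyIntent
            ).build()
        )
        // Opt in to system-generated contextual actions as well
        // (this is the default on Android Q).
        .setAllowSystemGeneratedContextualActions(true)
        .build()
```

Setting the flag to `false` instead keeps a given notification entirely under the developer's control.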

This can include any number of things, from jumping into a certain screen in a game to manage resources, to replying to a direct message on a social platform, and just about anything else a developer can imagine.


One of the coolest things about this feature, though, is the fact that developers don’t necessarily have to define every possible action and reply by hand. On supporting devices, developers can tap into the same text and context recognition AI that powers Google’s Smart Reply feature to help narrow down the list of ways a user may want to react to a given notification. This all happens on-device, rather than on a backend server.

Developers wanting to implement the feature have a number of ways to do so, and it’s not all that hard if an app already uses Google’s ML Kit, a toolset built for machine learning, on-device or otherwise. Conversely, ML Kit is easy to implement if an app already uses Android’s built-in notification API.
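For apps going the ML Kit route, generating reply suggestions looks roughly like the sketch below, based on ML Kit's Smart Reply API. The conversation text and the `"friend-id"` identifier are made-up examples; the API takes recent messages and hands back up to three on-device suggestions.

```kotlin
import com.google.mlkit.nl.smartreply.SmartReply
import com.google.mlkit.nl.smartreply.SmartReplySuggestionResult
import com.google.mlkit.nl.smartreply.TextMessage

// Sketch: feed ML Kit's Smart Reply a short conversation history and
// receive suggested replies, all processed on-device.
fun suggestReplies(onResult: (List<String>) -> Unit) {
    val conversation = listOf(
        TextMessage.createForRemoteUser(
            "Want to grab lunch tomorrow?",
            System.currentTimeMillis(),
            "friend-id" // any stable ID for the remote participant
        )
    )
    SmartReply.getClient().suggestReplies(conversation)
        .addOnSuccessListener { result ->
            if (result.status == SmartReplySuggestionResult.STATUS_SUCCESS) {
                onResult(result.suggestions.map { it.text })
            } else {
                // Model unavailable or the language isn't supported.
                onResult(emptyList())
            }
        }
}
```

The returned strings could then be attached to a notification as reply actions alongside any the developer defines by hand.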

Finally, developers using the feature can actively choose which types of notifications trigger ML Kit, and which notifications get a list of Suggested Actions that the developer pre-defines. They can even mix and match the two. The on-device processing runs through Android’s TextClassifier service, making the whole thing a snap to work with.
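Apps that want finer control can also query the Android Q `TextClassifier` directly and merge its output with a hand-written list before posting a notification. A hedged sketch, with the message text again just a placeholder:

```kotlin
import android.content.Context
import android.view.textclassifier.ConversationAction
import android.view.textclassifier.ConversationActions
import android.view.textclassifier.TextClassificationManager

// Sketch: ask Android Q's on-device TextClassifier for conversation
// actions, which can then be mixed with developer-defined actions.
fun suggestedActions(context: Context): List<ConversationAction> {
    val classifier = context
        .getSystemService(TextClassificationManager::class.java)
        .textClassifier
    val message = ConversationActions.Message
        .Builder(ConversationActions.Message.PERSON_USER_OTHERS)
        .setText("Want to grab lunch tomorrow?")
        .build()
    val request = ConversationActions.Request.Builder(listOf(message))
        .setMaxSuggestions(3)
        .build()
    // Returns smart replies and/or action intents (create a calendar
    // event, open a map, etc.) ranked by confidence, all on-device.
    return classifier.suggestConversationActions(request).conversationActions
}
```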


The possibilities of this API change are nigh endless. Suffice it to say that users will have a much easier time reacting to notifications and interacting with apps. Developers, meanwhile, will have an easy way to build more engaging notifications that catch a user’s interest more often, or give them a chance to do something that jibes with how they use a particular app.

This makes it a big win all around, and the onboard AI backend means that the feature will continue to get smarter on a per-user basis over time, learning what each user likes to do, when and where they normally bother reacting to notifications, and more.

Looking to the future, this feature is only going to grow in scale and scope as it learns more about each user, and as developers explore it and Google expands it. Not to mention the fact that chipsets and their onboard AI coprocessors, along with AI programs themselves, are all going to be improving over time.


Just how that improvement will play out in the near future is up for debate even among experts, though progress could accelerate dramatically if there are any significant breakthroughs. With so many different AI research and development methods out there, it’s entirely possible that one or more of them will eventually set things up for AI systems to build themselves and each other up, leading, theoretically, to runaway development.