Google's updated Android keyboard, Gboard, uses a heavy dose of machine learning to work its magic, and Google has taken to its research blog to show the internet at large some of what goes on behind the scenes. The core of it is a trained model, built from the data of millions of users, that figures out what somebody is trying to type from a word-flow gesture or from typing that's less than precise. The model for gesture typing is laid out much like an acoustic model in speech recognition, while the model for correcting errors when users type traditionally and slip up looks a bit more like a conventional neural network.
The model for gesture typing is an interesting one indeed, manifesting as a series of strings representing nearly every possible combination of letters in a word-flow gesture. The magic happens when the keyboard is fed linguistic sampling data, which teaches it things like common mistakes, what the next word may be based on context, and how to judge the likelihood of a user typing a given word. That data set grows on the device end when users add new words to their personal dictionaries, or swipe out words in an unconventional fashion and then correct the keyboard when it spits out a word they didn't want. All of this is interspersed with data from various languages and dialects, making the machine just as good at guessing German input as English, for example.
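To make the idea concrete, here is a minimal sketch of how a swipe decoder of this general shape could rank candidate words: a spatial score measures how close each letter's key sits to the traced path, and a language-model prior weighs in how likely the word is. Everything here is an illustration, not Gboard's actual model — the key coordinates, word frequencies, and scoring function are all made-up stand-ins.

```python
import math

# Hypothetical key centers on a QWERTY-ish grid (x, y); an assumption for illustration.
KEY_POS = {
    'o': (8.5, 0), 'n': (5.5, 2), 'e': (2.5, 0), 'w': (1.5, 0), 'r': (3.5, 0),
}

# Toy unigram frequencies standing in for a trained language model.
WORD_FREQ = {'one': 0.6, 'owner': 0.3, 'onr': 0.0001}

def spatial_score(path, word):
    """Rough spatial likelihood: negative squared distance from each
    letter's key center to the nearest point on the gesture path."""
    score = 0.0
    for ch in word:
        kx, ky = KEY_POS[ch]
        score -= min((kx - px) ** 2 + (ky - py) ** 2 for px, py in path)
    return score

def rank(path, candidates):
    # Combine the spatial likelihood with the language-model prior in log space,
    # then sort best-first.
    scored = [(spatial_score(path, w) + math.log(WORD_FREQ.get(w, 1e-9)), w)
              for w in candidates]
    return [w for _, w in sorted(scored, reverse=True)]
```

With a path traced roughly over the O, N, and E keys, `rank` prefers 'one': the nonsense string 'onr' fits the path nearly as well spatially, but its tiny prior sinks it — which is the basic trade-off the article describes.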
The traditional typing model is somewhat like the one for gesture typing, but the key difference is the lack of swipe lines and a bigger focus on words and context. This model has to cope with 'fat finger' errors and misspellings, so its dictionary and sample set reflect that. It incorporates a number of common typos that can be made on a mobile device and uses those alongside traditional linguistic and context data to tell the difference between, say, two users who wanted to type 'owner' or 'one' but both typed 'onr'. Interestingly, Google used an error-based probability model in the past, but recently updated to one that's more focused on finding commonalities between data sets.
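The 'onr' disambiguation above can be sketched as a toy noisy-channel decoder: an error model scores how plausibly the typed string arose from each candidate (cheap substitutions between physically adjacent keys), and a context model scores how likely each candidate is after the previous word. The adjacency sets and bigram numbers below are contrived for the demo; none of this is Gboard's real data or algorithm.

```python
import math

# Tiny slice of a keyboard adjacency map — which keys a stray tap could hit.
ADJACENT = {
    'r': {'e', 't', 'f'},
    'e': {'w', 'r', 'd'},
}

def edit_cost(typed, word):
    """Edit distance where substituting an adjacent key costs 0.5 and any
    other edit costs 1 — a crude stand-in for a learned tap-error model."""
    n, m = len(typed), len(word)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = float(i)
    for j in range(m + 1):
        d[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if typed[i - 1] == word[j - 1]:
                sub = 0.0
            elif word[j - 1] in ADJACENT.get(typed[i - 1], set()):
                sub = 0.5
            else:
                sub = 1.0
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + sub)
    return d[n][m]

# Made-up bigram probabilities standing in for a context model.
BIGRAM = {
    ('the', 'owner'): 0.2, ('the', 'one'): 0.001,
    ('number', 'one'): 0.1, ('number', 'owner'): 0.0001,
}

def correct(prev_word, typed, candidates):
    # Lowest combined cost wins: error cost minus log context probability.
    return min(candidates,
               key=lambda w: edit_cost(typed, w)
                             - math.log(BIGRAM.get((prev_word, w), 1e-6)))
```

Given 'onr', the decoder picks 'one' after 'number' but 'owner' after 'the': the error model alone slightly prefers 'one' (a single adjacent-key slip), and context is what tips the decision the other way.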