Google Patent Guesses Facial Expressions Via Eye Tracking

A new patent from Google details a system that uses eye-tracking cameras and AI to infer a user's facial expression from eye imagery alone. Essentially, it works by capturing what a user's eyes look like while they make a given facial expression, then comparing that image data against the normal, at-rest look of the user's eyes to build a profile of that expression. In this way, it can figure out which expression a given set of eye images belongs to, then output relevant data based on that expression. In the patent, the technology is meant to be used in a head-mounted display. It could theoretically be applied to other devices, such as smartphones and laptops, but the utility there would be decidedly limited; such devices have a view of the user's full face most of the time, so simply observing the expression directly would take far less compute than inferring it from the eyes.
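To make that comparison step concrete, here is a minimal, purely hypothetical sketch of how such a classifier might be structured, assuming each eye frame has already been reduced to a numeric feature vector; the class name, method names, and distance threshold are illustrative and not taken from the patent.

```python
# Hypothetical sketch: classify an eye frame by comparing its deviation from the
# user's neutral baseline against stored per-expression offsets. Names and the
# threshold are assumptions for illustration, not details from the patent.
import numpy as np


class ExpressionClassifier:
    def __init__(self, neutral_profile: np.ndarray):
        # Feature vector describing the user's eyes at rest.
        self.neutral = neutral_profile
        # Learned offset from the neutral baseline for each registered expression.
        self.profiles: dict[str, np.ndarray] = {}

    def register(self, name: str, expression_profile: np.ndarray) -> None:
        # Store how this expression's eye imagery differs from the neutral baseline.
        self.profiles[name] = expression_profile - self.neutral

    def classify(self, eye_features: np.ndarray, threshold: float = 0.5) -> str:
        # Measure how far the current frame sits from neutral, then pick the
        # registered expression whose offset it most closely resembles.
        deviation = eye_features - self.neutral
        best_name, best_dist = "neutral", threshold
        for name, offset in self.profiles.items():
            dist = float(np.linalg.norm(deviation - offset))
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name
```

A real system would learn these offsets from many samples rather than a single registration, which is essentially what the enrollment flow described below is for.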

Background: The way it works is somewhat similar to the voice enrollment used to train Google Assistant, but with a visual component and repeated over multiple exposures. First, the user's eyes are photographed while holding a neutral expression. The system then prompts the user to make a given expression, and another photograph is taken. The two are compared, and machine learning is applied to note the differences. In this way, the program improves its knowledge of both what the user's neutral face looks like and what a given expression looks like, purely from the perspective of the user's eyes. According to the patent, this process is repeated for every facial expression the user wishes to have recognized. After an expression has been registered, the system continues to learn the nuances of how the user emotes by observing subtle differences in each occurrence of that expression, cross-referencing them against other expressions and the neutral baseline to keep improving recognition over time.
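As a rough illustration of that enrollment flow, the sketch below walks through the neutral capture, the prompted expression captures, and the ongoing refinement step. Note that capture_eye_features() is a stand-in for the HMD's eye camera and feature extraction, which the patent leaves unspecified.

```python
# Illustrative enrollment loop, assuming each eye capture can be reduced to a
# feature vector. capture_eye_features() is a placeholder, not a real API.
import numpy as np


def capture_eye_features(samples: int = 10, dim: int = 128) -> np.ndarray:
    # Stand-in for grabbing frames from the eye-tracking camera and converting
    # each into a feature vector; random data keeps the sketch runnable.
    return np.random.rand(samples, dim)


def enroll(expressions: list[str]) -> tuple[np.ndarray, dict[str, np.ndarray]]:
    # Step 1: photograph the eyes while the user holds a neutral expression.
    print("Hold a neutral expression...")
    neutral = capture_eye_features().mean(axis=0)

    # Step 2: prompt each expression, capture again, and note the difference.
    profiles: dict[str, np.ndarray] = {}
    for name in expressions:
        print(f"Now make a '{name}' face...")
        observed = capture_eye_features().mean(axis=0)
        profiles[name] = observed - neutral
    return neutral, profiles


def refine(profiles: dict[str, np.ndarray], neutral: np.ndarray,
           name: str, new_frame: np.ndarray, rate: float = 0.1) -> None:
    # Step 3: keep learning after registration by nudging the stored profile
    # toward each newly observed occurrence of the expression.
    profiles[name] = (1 - rate) * profiles[name] + rate * (new_frame - neutral)


neutral, profiles = enroll(["smile", "frown", "surprise"])
refine(profiles, neutral, "smile", capture_eye_features(samples=1)[0])
```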

Impact: The obvious takeaway here is that Google wants people to be able to use their facial expressions in VR. A VR avatar could mirror the user's expression, in-app elements like text boxes could change based on the user's current expression, and games could respond directly, say, by having an NPC back down when the player makes an angry face, or by activating a puzzle element when the player smiles at it. There are plenty of less obvious use cases, too. Stepping away from the HMD angle, this tech could be used to deepen machines' understanding of a given face and how it moves, letting AI programs and even robots better mimic human facial expressions and mannerisms. It could also feed consumer-facing applications that adapt to a user's mood, as read from their expression. For example, when a sad or angry user logs into a Chromebook, Assistant could activate and show them something funny, or tell them something good to change their mood.
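As a toy example of how an app might consume the detected expression, here is a tiny, hypothetical dispatch table along the lines of the scenarios above; the expression labels and reactions are invented purely for illustration.

```python
# Hypothetical example: map a detected expression label to an in-app reaction.
# The labels and responses are made up to mirror the scenarios described above.
def react_to_expression(expression: str) -> str:
    reactions = {
        "angry": "NPC backs away from the player",
        "smiling": "Puzzle element activates",
        "sad": "Assistant surfaces something funny",
    }
    # Expressions the app does not handle simply produce no reaction.
    return reactions.get(expression, "No reaction")


print(react_to_expression("angry"))  # -> "NPC backs away from the player"
```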
