Google Details Pixel 2's Fused Virtual Stabilization Tech

Google has detailed how the Pixel 2 Android flagship series combines optical image stabilization (OIS) and electronic image stabilization (EIS) to address several problems that commonly affect smartphone video: camera shake, motion blur, rolling shutter distortion, and focus breathing. Motion blur occurs when either the subject or the camera moves during exposure, while focus breathing occurs when objects in the scene sit at varying focus distances from the camera. Rolling shutter distortion, on the other hand, stems from how CMOS image sensors read out image data: these sensors typically capture one row of pixels at a time, so a fast-moving subject or camera produces skewed images. Several algorithm and hardware teams worked together to resolve these issues, ultimately developing a technique called fused video stabilization.
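To make the row-by-row readout concrete, here is a minimal sketch of rolling-shutter skew correction. It assumes a constant horizontal pan rate from the gyroscope and shifts each row back toward the pose of the frame's middle row; the function name, parameters, and small-angle model are illustrative assumptions, not Google's actual implementation.

```python
import numpy as np

def correct_rolling_shutter(frame, gyro_yaw_rate, readout_time_s, focal_px):
    """Hypothetical sketch: undo horizontal rolling-shutter skew.

    Each sensor row is read at a slightly different time, so a camera
    panning at `gyro_yaw_rate` (rad/s) displaces each row by a different
    horizontal amount. Every row is shifted back toward the pose of the
    middle row. Names and parameters are illustrative only.
    """
    h, w = frame.shape[:2]
    corrected = np.empty_like(frame)
    row_dt = readout_time_s / h           # time between consecutive rows
    for r in range(h):
        # time offset of this row relative to the frame's middle row
        dt = (r - h / 2) * row_dt
        # small-angle approximation: pixel shift = focal length * angle
        shift = int(round(focal_px * gyro_yaw_rate * dt))
        corrected[r] = np.roll(frame[r], -shift, axis=0)
    return corrected
```

With zero pan rate the frame passes through unchanged; a real pipeline would additionally interpolate sub-pixel shifts and handle vertical motion and rotation.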

Fused video stabilization on the Google Pixel 2 and Pixel 2 XL combines machine learning with data from both the OIS mechanism and the gyroscope. The first step, motion analysis, fuses hand-motion data from the gyroscope with information about the movement of the lens; together, these are used to reduce rolling shutter effects in videos. The real camera motion recovered by motion analysis then feeds two further stages: lookahead motion filtering and frame synthesis. Lookahead motion filtering uses machine learning to predict the user's next movement from both past and buffered future camera motion. For example, if the algorithm predicts that the user is panning horizontally, it begins rejecting any vertical motion input. Frame synthesis then warps each video frame based on the real camera motion from motion analysis and the virtual camera motion produced by lookahead motion filtering.
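The lookahead idea can be sketched as follows: smooth a "virtual" camera path from a window of past and future real camera samples, and suppress vertical motion when the window shows a dominantly horizontal pan. Everything here (the windowed mean in place of Google's learned filter, the threshold, and the names) is an assumption for illustration.

```python
import numpy as np

def lookahead_filter(real_path, window=15, pan_thresh=0.02):
    """Hypothetical sketch of lookahead motion filtering.

    `real_path` is an (N, 2) array of per-frame camera angles
    (yaw, pitch) recovered by motion analysis. Each output sample looks
    at past and FUTURE frames (the lookahead buffer) to plan a smooth
    virtual camera path. A windowed mean stands in for the learned
    filter; thresholds and names are illustrative only.
    """
    real_path = np.asarray(real_path, dtype=float)
    n = len(real_path)
    virtual = np.empty_like(real_path)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        seg = real_path[lo:hi]
        # plan a smooth virtual path from past AND future real motion
        virtual[i] = seg.mean(axis=0)
        # dominant horizontal trend with little vertical trend => a pan
        yaw_trend = abs(seg[-1, 0] - seg[0, 0])
        pitch_trend = abs(seg[-1, 1] - seg[0, 1])
        if i > 0 and yaw_trend > pan_thresh and pitch_trend < pan_thresh:
            # "rejecting vertical motion input": hold the virtual pitch
            virtual[i, 1] = virtual[i - 1, 1]
    return virtual
```

Frame synthesis would then warp each frame by the difference between the real path and this virtual path, so the viewer sees the smooth virtual camera rather than the shaky real one.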

Google still encountered scenarios in which OIS and EIS combined could not fully compensate for motion blur, especially during large movements. In those cases, the company instead masks the blur, using machine learning to reduce frame-to-frame variations in sharpness, a flicker-like effect that viewers typically find distracting.
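The masking idea can be illustrated with a simple stand-in: where Google applies machine learning, this sketch measures each frame's sharpness with a Laplacian-variance proxy and softens any frame that is much sharper than its predecessor, so sharpness does not jump from frame to frame. All function names and the 3x3 box blur are assumptions, not the actual technique.

```python
import numpy as np

def sharpness(frame):
    """Per-frame sharpness proxy: variance of a Laplacian response."""
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return lap.var()

def box_blur(frame):
    """3x3 box blur over the interior; edge pixels left unchanged."""
    h, w = frame.shape
    out = frame.copy()
    out[1:-1, 1:-1] = sum(
        frame[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

def mask_blur_variation(frames, ratio=1.5):
    """Hypothetical stand-in for the ML-based masking: soften frames
    that are much sharper than the previous one, reducing the
    distracting frame-to-frame sharpness flicker."""
    out = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if sharpness(cur) > ratio * sharpness(prev):
            cur = box_blur(cur)
        out.append(cur)
    return out
```

The design choice being illustrated is counterintuitive: rather than trying to recover detail in blurred frames, the pipeline deliberately gives up some sharpness in the crisp frames so the sequence looks consistent.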

Copyright ©2019 Android Headlines. All Rights Reserved
About the Author

Mark Real

Staff Writer
Mark Real has written for Android Headlines since 2017 and is a Staff Writer for the site. Mark has a background in science and education. He is passionate about advancements in hardware and software technologies and their impact on people's lives. Contact him at [email protected]