For generations now, Google has led the pack in camera quality with its Nexus and Pixel phone lineups. The introduction of HDR+ on the Nexus 5 ushered in a new concept for mobile photography: instead of bracketing a handful of frames at different exposures, it merged a burst of identically exposed frames on a per-pixel basis. Over the years Google has honed this technology, producing stunning results that typically eclipse every other phone on the market in sheer detail and dynamic range. Google has specialized in handling movement, using intelligent algorithms to analyze several captured frames, identify the elements within each one, and fuse them together into one amazing “super photo.”
This year Google is using this multi-frame approach to extend the zoom range of the single 12.2-megapixel sensor on the back of both the Pixel 3 and Pixel 3 XL. Since there’s no secondary telephoto camera on the Pixel 3, Google has built its zoom enhancement (dubbed Super Res Zoom) on a modification of the existing HDR+ algorithm, which adds detail to shots by comparing multiple photos taken in quick succession and extrapolating detail from the scene with machine learning techniques. This is certainly needed since there’s no optical zoom on the phone, and the results show a marked improvement over the standard crop-and-upscale method most cameras use for digital zoom, across many different lighting conditions. Night Sight mode promises to enhance low-light performance considerably over the existing method but isn’t available just yet. What’s nice about this mode, in particular, is that its enhancements will apply to all Pixel phones, not just the Pixel 3 family.
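Google hasn’t published the exact pipeline it uses here, but the core idea behind this kind of multi-frame zoom is that natural hand shake nudges each burst frame by a fraction of a pixel, so the frames can be placed onto a finer grid and merged. The sketch below illustrates that idea under heavy simplification: the function name is ours, shifts are assumed to be known integer offsets on the output grid (a real pipeline estimates sub-pixel motion and fills gaps by interpolation), and the frames are single-channel arrays.

```python
import numpy as np

def super_res_merge(frames, shifts, scale=2):
    """Naively merge burst frames onto a grid `scale`x finer.

    frames: list of 2D (grayscale) arrays, all the same shape.
    shifts: per-frame (dy, dx) integer offsets in *output-grid* pixels,
            standing in for the sub-pixel motion between shots.
            Assumed to satisfy 0 <= shift < scale so indices stay in bounds.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Drop each input pixel at its shifted position on the fine grid.
        ys = np.arange(h) * scale + dy
        xs = np.arange(w) * scale + dx
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1
    weight[weight == 0] = 1  # cells no frame landed on stay zero
    return acc / weight
```

With enough frames and varied shifts, most cells of the fine grid receive real samples rather than interpolated guesses, which is why the multi-frame result beats a plain crop-and-upscale.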
Google has also added a number of new modes to improve the odds of capturing the right moment, again focusing on movement and poor lighting. Top Shot works through the camera’s existing “motion photos” functionality, but instead of just delivering a fairly useless two-second burst of motion alongside a photo, it lets users pull individual frames out of that burst. This will undoubtedly save many moments from being lost, since the instant the shutter fires isn’t always the best frame available. There’s a significant loss in quality when using frames from the burst, but that’s still better than the alternative of missing the shot entirely.
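Google says Top Shot scores burst frames with on-device machine learning (looking for open eyes, smiles, and so on). As a much simpler stand-in for that scoring step, the sketch below ranks frames by a classic sharpness heuristic, the variance of a Laplacian filter; the function names are ours and this is only an illustration of frame selection, not Google’s actual model.

```python
import numpy as np

def sharpness(frame):
    """Variance of a 4-neighbour Laplacian: higher means more edge
    contrast, a rough proxy for an in-focus, unblurred frame."""
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return lap.var()

def top_shot(burst):
    """Return the index of the sharpest frame in a burst of 2D arrays."""
    return max(range(len(burst)), key=lambda i: sharpness(burst[i]))
```

A motion-blurred frame scores near zero on this metric, so the picker naturally skips the smeared frames that plague burst photography.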
There’s also a new Photobooth mode for the front-facing camera, which snaps pictures automatically as it detects smiles, working together with the new wide-angle front-facing camera to help group shots turn out better than ever. Both front-facing cameras use the same 8-megapixel sensor, but one sits behind an f/1.8, 75-degree lens, while the other sits behind a wide f/2.2, 97-degree lens. Google is still producing the best front-facing photos in the industry, and the new wide-angle lens fits more faces and places into the scene than ever before. Check out the results in our video below, and don’t forget to subscribe for the latest as it happens!
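To put those two field-of-view figures in perspective: the width of scene captured at a fixed distance scales with the tangent of half the field-of-view angle, so the jump from 75 to 97 degrees fits roughly 47% more scene across the frame. A quick check (assuming both quoted angles are measured along the same axis):

```python
import math

def scene_width_ratio(wide_deg, normal_deg):
    """Relative scene width at a fixed subject distance: the visible
    width is proportional to tan(fov / 2) for a rectilinear lens."""
    half_width = lambda deg: math.tan(math.radians(deg) / 2)
    return half_width(wide_deg) / half_width(normal_deg)

print(round(scene_width_ratio(97, 75), 2))  # ~1.47x wider scene
```

That extra ~47% of width is what lets a whole group squeeze into a selfie without a stick.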