Similar to last year, Google is again describing how it achieved Portrait Mode, this time on the Pixel 3 smartphones. Google says that Portrait Mode uses a neural network to determine which pixels correspond to people versus the background, and augments this two-layer person segmentation mask with depth information derived from the camera's PDAF (phase-detect autofocus) pixels. This is meant to enable a depth-dependent blur. PDAF pixels capture two slightly different views of a scene, but because the baseline between those views is tiny, the resulting parallax is small and the estimated depth is prone to errors. With Portrait Mode on the Pixel 3, Google says that it is fixing these errors by exploiting the fact that the parallax used by depth-from-stereo algorithms is only one of many depth cues present in images.

To gather training data, Google says that it built its own custom "Frankenphone" rig containing five Pixel 3 phones, along with a Wi-Fi-based solution that allowed it to capture pictures from all of the phones simultaneously. With this rig, Google computed high-quality depth maps from the photos using structure from motion and multi-view stereo. However, even with this nearly ideal training data, it is still extremely challenging to predict the absolute depth of objects in a scene, because a given PDAF pair can correspond to a range of different depth maps. To account for this, Google instead predicts the ...
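To make the idea of a depth-dependent blur concrete, here is a minimal sketch of how a person segmentation mask and a relative depth map could drive a synthetic background blur. This is an illustrative assumption, not Google's actual rendering pipeline: the function name, the small stack of blur layers, and the use of a simple Gaussian blur are all choices made here for brevity, and real bokeh rendering is considerably more sophisticated.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_dependent_blur(image, depth, mask, focal_depth,
                         max_sigma=8.0, n_layers=6):
    """Toy depth-dependent blur (not Google's method).

    image:       HxWx3 float array in [0, 1]
    depth:       HxW relative depth map (arbitrary units)
    mask:        HxW person mask in [0, 1] (1 = person, kept sharp)
    focal_depth: depth value that should stay in focus
    """
    # Blur strength grows with distance from the focal plane, normalized to [0, 1].
    dist = np.abs(depth - focal_depth)
    strength = dist / (dist.max() + 1e-6)

    # Precompute a small stack of progressively blurred images, then pick
    # a layer per pixel according to the desired blur strength.
    sigmas = np.linspace(0.0, max_sigma, n_layers)
    stack = np.stack([
        image if s == 0.0 else
        np.dstack([gaussian_filter(image[..., c], sigma=s) for c in range(3)])
        for s in sigmas
    ])
    layer = np.clip((strength * (n_layers - 1)).round().astype(int), 0, n_layers - 1)
    blurred = np.take_along_axis(
        stack, layer[None, ..., None].repeat(3, axis=-1), axis=0)[0]

    # Composite: keep the person sharp, use the depth-blurred background elsewhere.
    m = mask[..., None]
    return m * image + (1.0 - m) * blurred
```

Precomputing a handful of blur levels and indexing into them per pixel keeps the sketch short; a production pipeline would vary the blur kernel continuously with depth rather than snapping to a few discrete layers.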
Read Here»