Similar to last year, Google is again describing how it achieved Portrait Mode on the Pixel 3 smartphones. Google says that Portrait Mode uses a neural network to determine which pixels correspond to people versus the background, and augments this two-layer person segmentation mask with depth information derived from the PDAF (phase-detection autofocus) pixels. The goal is a depth-dependent blur.
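Conceptually, the rendering step can be thought of as compositing progressively blurrier versions of the photo according to each pixel's estimated depth, while the segmented person stays sharp. The sketch below is a hypothetical illustration of that idea; it is not Google's renderer, and the layer scheme, kernel sizes, and the `depth_dependent_blur` helper are assumptions.

```python
# Hypothetical depth-dependent blur: composite progressively blurrier layers
# of the image by depth, keeping the person mask sharp. Illustration only.
import cv2
import numpy as np

def depth_dependent_blur(image, depth, person_mask, focus_depth, num_layers=4):
    """image: HxWx3 uint8, depth: HxW float in [0, 1],
    person_mask: HxW float in [0, 1] (1 = person), focus_depth: float in [0, 1]."""
    result = image.astype(np.float32)
    # Blur strength grows with distance from the focal plane.
    blur_amount = np.abs(depth - focus_depth)
    layer = np.minimum((blur_amount * num_layers).astype(int), num_layers - 1)
    for i in range(num_layers):
        ksize = 4 * i + 1  # 1, 5, 9, 13, ... (odd Gaussian kernel sizes)
        blurred = cv2.GaussianBlur(image, (ksize, ksize), 0).astype(np.float32)
        weight = (layer == i).astype(np.float32)[..., None]
        result = result * (1 - weight) + blurred * weight
    # Keep the segmented person sharp regardless of estimated depth.
    mask = person_mask[..., None]
    result = result * (1 - mask) + image.astype(np.float32) * mask
    return result.astype(np.uint8)
```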
PDAF pixels capture two slightly different views of a scene, and the parallax between those views can be used to estimate depth, although such stereo-only estimates are prone to errors. With Portrait Mode on the Pixel 3, Google says it is fixing these errors by utilizing the fact that the parallax used by depth-from-stereo algorithms is only one of many depth cues present in images. To gather training data, Google built its own custom “Frankenphone” rig containing five Pixel 3 phones, along with a Wi-Fi-based solution that allowed it to capture pictures from all of the phones simultaneously. With this rig, Google computed high-quality depth from the photos using structure from motion and multi-view stereo.
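To get a feel for the parallax cue, here is a minimal, purely illustrative block-matching sketch that estimates per-pixel disparity between two PDAF-like views. It is not Google's algorithm, and real PDAF disparities are sub-pixel and far noisier than this toy example suggests.

```python
# Toy block-matching disparity search between two nearly identical views.
# Illustration of the parallax depth cue only, not a production algorithm.
import numpy as np

def block_matching_disparity(left, right, max_disp=4, patch=7):
    """left, right: HxW float grayscale views; returns HxW integer disparities."""
    h, w = left.shape
    half = patch // 2
    disparity = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum((ref - cand) ** 2)  # sum of squared differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[y, x] = best_d
    return disparity
```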
However, even though the data captured from this rig is ideal, it is still extremely challenging to predict the absolute depth of objects in a scene: a given PDAF pair can correspond to a range of different depth maps. To account for this, Google instead predicts the relative depths of objects in the scene, which is sufficient for producing pleasing Portrait Mode results.
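A common way to train a network that only needs to get relative depth right is a scale-invariant loss, which ignores any global shift between prediction and ground truth in log-depth space. The snippet below is a generic example of such a loss (in the style of Eigen et al.); Google's actual training objective is not described at this level of detail.

```python
import numpy as np

def scale_invariant_loss(pred_depth, gt_depth, eps=1e-6):
    """Scale-invariant log-depth loss: unchanged if all predicted depths
    are multiplied by the same constant, so only relative depth matters."""
    d = np.log(pred_depth + eps) - np.log(gt_depth + eps)  # per-pixel log-depth error
    n = d.size
    # The second term subtracts the squared mean error, so a global scale shift costs nothing.
    return np.sum(d ** 2) / n - (np.sum(d) ** 2) / (n ** 2)
```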
This ML-based depth estimation needs to run fast on the Pixel 3, so that users don’t have to wait too long for their Portrait Mode shots. To get good depth estimates that make use of subtle defocus and parallax cues, Google has to feed full-resolution, multi-megapixel PDAF images into the network.
To ensure fast results, Google uses TensorFlow Lite, a cross-platform solution for running machine learning models on mobile and embedded devices, together with the Pixel 3’s powerful GPU to compute depth quickly despite the unusually large inputs. It then combines the resulting depth estimates with masks from the person segmentation neural network to produce beautiful Portrait Mode results.
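For reference, this is roughly what invoking a TensorFlow Lite depth model looks like through the Python API. On the phone itself the model runs through the on-device TFLite runtime with GPU acceleration; the model file name and tensor shapes below are placeholders, since the actual Pixel 3 model is not published.

```python
import numpy as np
import tensorflow as tf

# Hypothetical model path; the real Pixel 3 depth model is not publicly available.
interpreter = tf.lite.Interpreter(model_path="pdaf_depth_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy PDAF input matching whatever shape the model expects.
pdaf_pair = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], pdaf_pair)
interpreter.invoke()
depth_map = interpreter.get_tensor(output_details[0]["index"])
```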
In Google Camera app version 6.1 and later, the depth maps are embedded in Portrait Mode images. This means you can use the Google Photos depth editor to change the amount of blur and the focus point after capture.
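Portrait Mode files typically store the depth map (and other derived images) as additional JPEG streams appended after the primary image. One unofficial way to pull them out is to scan the file for JPEG start-of-image markers; this is a community trick rather than a documented API, and the naive scan below can produce false positives on some files.

```python
# Unofficial trick: Portrait Mode JPEGs often concatenate extra JPEG streams
# (including a depth map) after the primary image. Scan for start-of-image markers.
# Naive and illustrative only; may misfire on embedded thumbnails or other data.
def extract_embedded_jpegs(path):
    with open(path, "rb") as f:
        data = f.read()
    marker = b"\xff\xd8\xff"  # JPEG start-of-image signature
    starts = []
    pos = data.find(marker)
    while pos != -1:
        starts.append(pos)
        pos = data.find(marker, pos + 1)
    # Slice from each start to the next start (or end of file).
    pieces = []
    for i, start in enumerate(starts):
        end = starts[i + 1] if i + 1 < len(starts) else len(data)
        pieces.append(data[start:end])
    return pieces  # pieces[0] is the main photo; later ones may include the depth map

# Example: dump all embedded streams from a (hypothetical) portrait shot.
for i, blob in enumerate(extract_embedded_jpegs("portrait.jpg")):
    with open(f"embedded_{i}.jpg", "wb") as out:
        out.write(blob)
```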