Google’s Night Sight on the Pixel 3 shows promise of computational photography

November 25th, 2018 | by Wilson Wong
The Pixel 3’s single-lens camera. PHOTO: Wilson Wong

With many flagship smartphones sporting two or more lenses in their main cameras, it is refreshing to see Google sticking to its guns with just one camera for its Pixel 3 and Pixel 3 XL.

On paper, you might wonder how a single camera can outdo the performance of multi-camera systems, especially for night scenes.

Google’s answer to that is Night Sight, a feature that came out after its latest Pixel phones went on sale. After testing it out on a Pixel 3 for a while, I can say that the software is just as important as the hardware.

Through computational photography, Google is able to drastically improve the images from its Pixel phones. The key here is making use of algorithms to enhance selective parts of the image.

This means more light where it is needed and less where it is not, plus other enhancements that turn the image you just took into a much nicer version without the time spent touching it up in Photoshop later.

Switch on Night Sight, hold the phone steady for 2 to 3 seconds and the smartphone will do the rest. The feature is impressive, all the more so because it is done with only one sensor. PHOTO: Wilson Wong

The Night Sight method takes multiple frames of the same scene at different exposures in a very short period of time, and through machine learning, merges these frames together to produce an image that has the widest range of brightness.

In other words, we should see the details in both the brightest and darkest parts of the image. That solves one of the most common headaches with photography – getting the light right throughout a frame.
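To make the idea concrete, here is a minimal sketch of exposure fusion, the general technique behind merging differently exposed frames. This is a toy illustration in Python with NumPy, not Google's actual Night Sight pipeline: it weights each pixel by how well exposed it is (close to a mid-tone), so dark frames contribute the highlights and bright frames contribute the shadows.

```python
import numpy as np

def merge_exposures(frames, sigma=0.2):
    """Merge differently exposed grayscale frames (values in [0, 1]).

    Each pixel is weighted by its 'well-exposedness': values near
    mid-grey (0.5) get the highest weight. A toy stand-in for the
    multi-frame merge described above, not Night Sight itself.
    """
    frames = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    # Gaussian weight centred on a mid-tone of 0.5
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise per pixel
    return (weights * frames).sum(axis=0)

# Hypothetical example: a dark frame and a bright frame of the same scene
dark = np.array([[0.05, 0.10], [0.45, 0.50]])
bright = np.array([[0.40, 0.55], [0.95, 0.98]])
merged = merge_exposures([dark, bright])
```

In the merged result, the pixel that was crushed to black in the dark frame is pulled up towards the better-exposed bright frame, while the pixel blown out in the bright frame leans on the dark frame instead, which is roughly how details survive in both the brightest and darkest parts of the image.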

From my various tests, the images taken with the smartphone were certainly impressive. You may not need a tripod now, as the camera can detect movement across the frames it has captured and compensate for it to produce a shake-free image. Going tripod-less has certainly made it easier for me to shoot in low-light conditions.

Even with the slight movements of people milling about, the Pixel 3 took into account that this was a hand-held shot, so I was able to take very bright shots of the exhibits in the dark chambers of the National Museum.

The museum’s interior is pretty dim and would usually require the use of a tripod but the Google Night Sight allows not only for low-light photography, but eliminates movement as well. PHOTO: Wilson Wong

Of course, there is no miracle pill, despite Google’s expertise here. In certain cases, the colours are not accurate, especially in images where the subject is lit by artificial illumination.

This handheld shot of the National Museum of Singapore is sharp, but the colour of the lights has turned pinkish, which is not how the scene looked in person. PHOTO: Wilson Wong

I made the same attempt, this time using the Huawei Mate 20 Pro as a comparison. Even though computational photography has made strides in difficult shooting conditions, getting the colour right is also a priority. PHOTO: Wilson Wong

Even though some might argue that computational photography will lead to the loss of “traditional” photography methods, there’s no denying that it has made photography easier. Users can focus on capturing life’s important moments.

What I have seen so far in the Pixel 3 has been impressive. And computational photography will only get better with faster processors in the new smartphones coming our way.
