Debate on computational photography misses what’s real, what’s lived outside a frame

Esmond Xu
This wasn't shot on a camera. ILLUSTRATION: beate bachmann from Pixabay.

Using artificial intelligence (AI) and “computational photography mad science” to improve smartphone images captured with diminutive imaging sensors (relative to standalone cameras, at least) is old news.

Is it a crime, though, if the pictorial output turns out better than life through the addition of detail, with trained AI improving on the camera’s less-than-perfect raw inputs?

This was the burning issue a Reddit discussion thread sought to address earlier this week, as user ibreakphotos set out to debunk what he believed were misleading claims about the astrophotography capabilities of Samsung’s latest Galaxy S23 Ultra flagship.

He downloaded an image of the moon, reduced its resolution, applied a blur filter, then displayed the result full-screen on his monitor. With the lights in the room turned off, he used his trusty Galaxy to snap the on-screen moon.
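
For readers curious to replicate the setup, the preparation boils down to a few lines of image editing. The sketch below uses Pillow; the file names, the 4x downscale and the blur radius are my own illustrative assumptions, since ibreakphotos described the steps rather than publishing a script.

```python
# Rough sketch of the test preparation, assuming Pillow is installed.
# File names, the 4x downscale and the blur radius are illustrative choices,
# not the exact values ibreakphotos used.
from PIL import Image, ImageFilter

moon = Image.open("moon_original.jpg")

# Step 1: throw away genuine detail by shrinking the image.
small = moon.resize((moon.width // 4, moon.height // 4))

# Step 2: blur what remains, so no ordinary sharpening can recover it.
degraded = small.filter(ImageFilter.GaussianBlur(radius=3))
degraded.save("moon_degraded.png")

# Step 3 (manual): display moon_degraded.png full-screen in a dark room and
# photograph it with the phone. Any crater detail in the phone's output that
# is absent from this file cannot have come from the scene itself.
```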

The Galaxy’s output image contained more detail than the (edited) source moon-shot.

This is not the first time Samsung has been embroiled in such controversy. When the Samsung Galaxy S21 Ultra landed two years ago, its “100x Space Zoom” was similarly subject to intense scrutiny.

Neither is Samsung alone in shooting for the literal skies with its camera capabilities. Back in 2019, Huawei’s “Moon Mode” on the P30 Pro was the subject of a strenuous exposé by a user on the Chinese question-and-answer platform Zhihu.

Additive versus subtractive reality

To be sure, “better-than-life” edits are not new. Smartphones have had Beauty Modes for the longest time. With maximum beautification, pesky recurrent pimples are forcibly evicted from my face, and my jaw is so chiselled I could use it to injure whoever teased my double chin.

It is laughably difficult to accept the result as true to life, yet no impassioned 3,500-word Reddit or Zhihu theses were written about such a feature.

Night Mode is another example. Google’s Night Sight can take a scene that is practically pitch darkness to the naked eye and turn it into a semi-decent shot. The result may come from splicing ten exposures into one, but the output is effectively different from reality, whatever reality means.
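
The core idea behind such multi-frame night modes can be sketched in a few lines. This is not Google’s actual Night Sight pipeline, which aligns and merges frames in raw space with far more sophistication; it is a minimal illustration, under those simplifying assumptions, of why averaging several noisy exposures of the same scene yields a cleaner image that can then be brightened.

```python
# Minimal sketch: averaging N aligned frames suppresses random sensor noise
# (roughly by a factor of sqrt(N)), so the merged frame can be brightened
# without turning to mush. Alignment, raw-space merging and tone mapping,
# which real pipelines rely on, are deliberately omitted.
import numpy as np

def stack_frames(frames):
    """Average a list of HxWx3 uint8 frames into one low-noise frame."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame in frames:
        acc += frame
    return (acc / len(frames)).astype(np.uint8)

# Simulate ten noisy captures of the same dim, flat scene.
rng = np.random.default_rng(0)
scene = np.full((480, 640, 3), 30, dtype=np.float64)
frames = [np.clip(scene + rng.normal(0, 20, scene.shape), 0, 255).astype(np.uint8)
          for _ in range(10)]

merged = stack_frames(frames)
print("single-frame noise:", frames[0].std(), "stacked noise:", merged.std())
```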

This was shot on the Galaxy S23 Ultra at night, but the phone has made the scene much brighter (and clearer), while the patterns on the cats’ fur are more pronounced. PHOTO: Esmond Xu

Perhaps the reason doctored moons repeatedly captivate makers and consumers alike is that they add detail we struggle to perceive.

In other words, semi-mysteries far out in the cosmos that only vaguely register even with humanity’s best gear. That is different from removing perceptibly salient details that live rent-free on our faces, and on my flabby self.

The real question is: should additive manipulation in pursuit of better-than-life output be vilified more than subtractive work? Maybe it is just human to feel more of a fraud when we create detail than when we omit it.

Reality through a colour(-corrected) lens

I pored over the images and discussions myself, focusing on how the spots and lines on the edited “raw” moon aligned with those on the processed e-moon. I did not see new details, just existing, poorer ones rendered more clearly than in the source.

I am prepared to believe the computational photography algorithm compared every lacklustre detail in the raw image against its pin-sharp AI library, and made specific exposure, sharpening, contrast and other editing decisions on each of them.

The aim, likely, is for the processed image to convey as similar a feel to the trained copy as possible.
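
Samsung has not published how its moon pipeline works, and the real system presumably involves a trained neural network rather than a lookup table. Purely to illustrate the kind of reference-guided enhancement I am speculating about, here is a toy sketch: each fuzzy patch of the capture is matched against a small library of (fuzzy, sharp) reference pairs, and the matched reference’s missing detail is blended back in. Every name and parameter here is hypothetical.

```python
# Toy illustration of the *speculated* approach only, not Samsung's pipeline.
# 'library' is a hypothetical list of (blurry_reference, sharp_reference)
# patch pairs; 'strength' controls how much borrowed detail is blended in.
import numpy as np

PATCH = 16

def enhance(blurry, library, strength=0.7):
    """blurry: HxW float image in [0, 1]; library: list of PATCHxPATCH pairs."""
    out = blurry.copy()
    h, w = blurry.shape
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patch = blurry[y:y+PATCH, x:x+PATCH]
            # Find the library patch that looks most like what the sensor saw.
            diffs = [np.mean((patch - b) ** 2) for b, _ in library]
            b_ref, s_ref = library[int(np.argmin(diffs))]
            # Add the detail the matched reference says "should" be there.
            out[y:y+PATCH, x:x+PATCH] = patch + strength * (s_ref - b_ref)
    return np.clip(out, 0.0, 1.0)
```

Whether the S23 Ultra does anything like this is precisely what the Reddit thread questioned; the point of the sketch is only that such a step adds information from the library, not from the night sky in front of the lens.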

It appears alright for our faces to look less round, and for the colour and brightness palette of a shot to be automagically improved so we look fly in a dark room (especially when the snapper, like me, is too amateur to improve things before the shot).

Surely, then, it is fine that the moon comes out sharper after an exacting sampling exercise, with every vague detail captured checked against a model trained on one million other moons?

To me, this invites a deeper conversation about the modern-day pursuit of what is real, and the lived reality through our devices.

Reality through a viewfinder

Think back to the last holiday, concert, or fireworks event that you attended. Did you experience much of the occasion through the pixels of your smartphone?

Wonderful, if you did not. With social media platforms and apps prevalent and a decent camera in our pockets, it is easy to want to keep our networks updated on our lives. Especially the best and nicest bits, when filters that can airbrush away blemishes are a click away.

The readiness and maturity of the tools available further condition the desire to capture every pleasant memory for posterity.

Our reality increasingly becomes one mediated by a screen, the best moments paradoxically lived vicariously through viewfinders, any noise filtered into oblivion.

Living in “reality”

The realisation has driven me to consciously remind myself to live away from the screen.

When I watch fireworks now, for instance, I snap a few pictures, maybe a 10-second video, then set the phone aside.

I take in how the entire spectacle can fill up the far corners of my line of sight. I smell the spent fireworks after they light up the sky, feel the smog make my eyes smart. I turn, and study the faces of those enjoying the moment, ironically, mostly through their device viewfinders.

Instead of getting drawn into debating the intricacies of defining realit(ies) composed through camera lens(es), nothing beats the richness of living and enjoying the moment with your chosen company, every blemish left as it is, no astro-magnification necessary.
