Does anyone else feel like the Decisive Moment is finally dead?
Before I go on a rant about photography, please know that my knowledge of the technical side of computational photography is limited. I'm not going to pretend I know everything about it. I would much rather speak to the philosophical side of it, which is what I'm going to do.
Photography has taken up much of my life: I studied it in undergrad, and I have years of experience working inside the industry. I've also been a somewhat tech-savvy bystander to the current moment, and computational photography worries me almost as much as AI does.
Computational photography, in the sense I'm discussing, is your phone editing your photos for you. It has its roots in simple HDR (high dynamic range) processing, in which a handful of pictures taken at different exposures are digitally merged into one evenly exposed image, a technique in use ever since digital photography became commonplace. Now our phones capture many more frames and run far more advanced software before they ever show us a result, without our explicit knowledge or permission.
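To make the HDR idea concrete, here is a toy sketch of exposure merging. This is an illustration of the general technique, not any vendor's actual pipeline; the function name, the Gaussian weighting, and the `sigma` value are all my own made-up choices.

```python
import numpy as np

def merge_exposures(frames, sigma=0.2):
    """Blend a stack of same-scene frames (values in 0..1) into one image."""
    stack = np.stack(frames).astype(float)
    # Weight each pixel by how "well exposed" it is: a Gaussian peaking
    # at mid-gray, so shadows are filled from the brighter frame and
    # highlights are recovered from the darker one.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-8          # normalize per pixel
    return (weights * stack).sum(axis=0)           # weighted average

# Two fake "brackets" of the same tiny 2x2 scene.
dark   = np.array([[0.05, 0.10], [0.45, 0.50]])
bright = np.array([[0.40, 0.55], [0.95, 0.98]])
merged = merge_exposures([dark, bright])
print(merged)  # each pixel is pulled toward the better-exposed frame
```

Real phone pipelines do far more (alignment, denoising, tone mapping, learned models), but the core move is the same: several frames in, one synthesized image out.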
It’s easy to dive into a philosophical argument about what makes a photo. Photos are no longer the end result of a silver-halide-coated substrate exposed to light for a precise amount of time and developed in chemistry. A photo is now light captured on a sensor, converted into 1s and 0s on a storage medium, then rendered as an arrangement of colored pixels on a screen. It’s not that it isn’t real; it just lacks soul.
When I initially wrote this, I was responding to the Google Pixel 8 advertisements. Its ad campaign is predicated entirely on “AI” and its computational photography technologies. The ads highlight seamless face swapping in group photos, the ability to change and manipulate subjects with a tap and pinch of your fingers, audio noise removal, and much more. The “Magic Editor” has some jaw-dropping background and content replacement capabilities. It’s a bit terrifying.

I’ve always been into technology and photography and have tried to stay on the cutting edge, but the pace of machine learning this past year alone has made that impossible. The groundwork has been laid for a future that will shake up the photography industry in a way that leaves many people scrambling to find their place, and I imagine the same will happen in every industry. I had a hard time keeping up from the beginning; it’s impossible now. What can I contribute that can’t be rendered instantly by a computer with a creative prompt? Obviously the answer should be to create because you want to, because you have the urge to, like any real artist, but we are now living in a new reality. We all face a very daunting opponent with resources beyond our wildest imaginations.
Less serious considerations point to the aesthetic side. You may simply not align with how “they” (Apple, Samsung, and Google software engineers) think your photographs should look. Global edits boost parameters like saturation, pull more detail out of shadows and highlights, and might add a bit more contrast, as if a general preset were laid over the image. This is done in a number of ways: likely by making simple adjustments much as you would with a slider in Lightroom or Photoshop, but also by taking a series of frames and merging them into an HDR image. And this has gone well beyond the HDR setting on your phone that you could choose to turn on or off.
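A rough sketch of what such a baked-in “preset” might amount to: a couple of global adjustments applied to every pixel, much like dragging sliders. The curve and the factors below are made-up illustrations, not any vendor's actual values.

```python
import numpy as np

def apply_preset(img, contrast=1.15, saturation=1.2):
    """img: HxWx3 array with values in 0..1."""
    out = (img - 0.5) * contrast + 0.5        # simple contrast around mid-gray
    gray = out.mean(axis=-1, keepdims=True)   # per-pixel luminance proxy
    out = gray + (out - gray) * saturation    # push colors away from gray
    return np.clip(out, 0.0, 1.0)

flat = np.full((2, 2, 3), 0.5)                # a flat mid-gray patch
print(apply_preset(flat))                     # mid-gray is left untouched
```

The point is that these choices are hard-coded on someone else's taste: every photo gets the same nudge whether you wanted it or not.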
In my opinion, ethics is the biggest question raised by computational photography and AI/machine learning. My first question, drilled into me by my photojournalism class in undergrad, is how we will test the veracity of these images if we need to. As the world enters continued turmoil and conflict, and with smartphones now the most common way to make images, how would the World Press Photo awards verify an image taken on a smartphone if no RAW file exists? I’ll admit that, as an iPhone user, I haven’t looked into how the Pixel 8 handles original files, but as far as I know with Apple, unless you specify ProRAW shooting you get what they give you as the original image, in either HEIC or JPEG if selected. There’s no way to get back the original image you saw before your phone made those aesthetic changes. Keep in mind that your phone may have taken and merged five or more frames to create the evenly exposed image that ends up being the one and only file you have. Not an issue for a picture of food for Instagram, sure. But what if you were witness to a missile strike? Or a catastrophe you want the world to see? Plenty of photojournalists have had their reputations tarnished by external manipulation, whether Photoshop or staging a scene. I am curious how images made with phones that advertise these advanced features will be judged at a competition like the World Press Photo awards.
The question of reality in photography has always been with us, and the debate will never cease. Photographers have been manipulating images to trick people since the dawn of the medium, but much of that was done by skilled photographers and retouchers for artistic purposes, magazines, and specialized portraits. We still expect everyday snapshots to be legitimate. My main draw to photography was its ability to capture the world in seamless detail: the fact that light itself etched its reflection onto a photographic negative or a sensor, and the thrill of the hunt for that ever-elusive Decisive Moment. My concern is that, now that computational photography and machine learning/AI make manipulation so easy, requiring less skill than even a year ago, we will very soon create a world filled with so many falsehoods and misconceptions that we will be unable to trust any image we see. I worry not just about what this means for the integrity of photography, but for how we as a human culture view the integrity of the world.