As the sun went down on Berlin, it came up in Portland: once more, Jeff Dorgay and I convened across Internet wires, Macs and microphones for another episode of the Darko.Audio podcast. Monday’s Euro sunset turned in a spectacular performance – enough for me to interrupt the podcast’s recording to take four shots with a Canon EOS 200D camera. See above (but also below).
Question time: how accurate is my photo? How close is it to the actual event?
Framing a shot crops away reality's full width and height, rendering the result instantly compromised. Shifting one's aim – even slightly – can net vastly different results. That's true even of photos taken within split seconds of each other.
Was this the sunset that I saw?
Or was this it?
The answer is both. A single event yields numerous photographic truths, with human interpretation a determining factor from the outset.
Likewise our hardware choices. A better camera with a full-frame sensor and higher-quality lens would likely have given us superior tonality and subtler hue gradation – greater accuracy – but the result would still fall a long way short of seeing the sunset with our own eyes.
Assuming the best camera and photographer on the planet, each mode of operation – manual, auto or semi-auto – will net a different exposure level. And it's doubtful that a single 'best' camera and photographer even exist in the camera world. More likely, many versions of best do: numerous subjective truths, each extracted via ocular interpretation.
The human factor seems inescapable. In Berlin, did I shoot in full auto or did I assert control over a variable or two? Had I taken two shots, one in each mode, which shot would have been truest to the sunset’s reality? After the fact, we have no idea. The moment is gone.
Once the shot moves from SD card to hard drive, software plays its part in finessing the truth. Did I juice the colours in Lightroom for my sunset shots or did they remain untouched? Did I correct for underexposure, and to what degree? Ditto highlights. You'd never know without seeing the off-camera original.
And what of reader screens relaying this article and its images? A few will be colour-calibrated but most will not. By the time we get to asking which smartphone, tablet, laptop or desktop monitor is truest to the original event, spied with my (and not your) little eye, we hopefully appreciate just how many variables sit between the UV filter screwed to the end of the camera lens and human eyes looking at the photo a few days after it was shot. Variables that cannot be reverse-engineered back to the time and place of the original shot. And that's to say nothing of how ambient light subtly interferes with photon movement from screen to eye.
Did the camera capture the live event? You weren’t here in Berlin — so how can you know for sure? And if you happened to be standing with me on my Stadtmitte (city-centre) balcony, how strong is your ocular memory? Likely not strong enough to hold on to the subtleties that separate one hardware/software combination from another. It’s Thursday and the photos in this post are more vivid than my memory of Monday’s sunset.
The parallels to the audio world should be readily apparent: to discuss the proximity of a photo displayed on a screen to the original event is to ignore the numerous complexities and decisions involved in capturing, post-processing and displaying that illusory snapshot of reality.
Taking the long way ’round: starting with IFA and journeying through the Sonus faber Aida, CEDIA, the Kii Three, the Harbeth Compact 7ES-3 and room acoustics, Jeff and I discuss whether or not an audio system can be “true to source” and/or is capable of reproducing a live event.
Follow us on Soundcloud here and subscribe to the iTunes feed here.