Innovation? - Don't look at camera manufacturers ...

... yet digital photography is under constant pressure to innovate. This innovation does not come from DSLR cameras, which are in a way held back by their heritage from analogue times. It is happening in smartphone photography instead!

For more than a century, photography was dominated by the single-lens setup. This was the most efficient way to capture images in an analogue world. Cameras with multiple lenses existed, for example for stereoscopic images, but were far more complicated to use and to view. Camera designers have long been, and to a degree still are, stuck in this analogue mindset when it comes to designing digital cameras: film was simply replaced by a digital sensor to catch light. But digital cameras are computers with attached lenses. Consequently, software and algorithms are playing ever bigger roles in the way captured light is transformed into images. We might be standing at the beginning of the age of ‘computational photography’, which will turn long-established rules and preconceptions about photography on their head.

Most digital cameras today are still caught in an analogue mentality

In the second half of the 19th century, the German inventor Philipp Reis and the American inventor Alexander Graham Bell developed the telephone – a machine that allowed communication over vast distances. Since then, the telephone has come a long way to today’s smartphones, digital devices that you can easily carry around with you. But traditional voice calls are now only a fraction of what people do with a smartphone.

Minolta XG-1 SLR camera
Photo: Timo Kozlowski

Photography, on the other hand, is an invention with a similarly long history – yet if you look at a DSLR or mirrorless camera released in the last few years, seemingly much less development has taken place compared to the journey from telephone to smartphone. Mirrorless cameras from Fuji and Olympus even emulate the aesthetics and user interface of 1970s film cameras.

What made this single-lens paradigm so enduring, even in digital cameras?

The Single-Lens Paradigm - Efficient for analogue photography

One reason for this endurance was that a single lens was the most efficient way to capture images with an analogue camera - and engineers and customers did not realise at first how disruptive digital imaging techniques really were.

In the beginning, photographs resembled a form of art that people were accustomed to: paintings. Just like photographs, paintings are two-dimensional representations of a three-dimensional world. Before the advent of photography, paintings and drawings were the only way to preserve moments in history, landscapes and people in such a way. Photography quickly took that place – and kickstarted an impressive burst of creative energy to develop the visual arts further. It is no coincidence that artists like Gauguin, Van Gogh or Picasso introduced new and original ways of seeing into the world of art while photography ate into the traditional markets of painters.

There were also setups with more than one lens in the analogue world. A Twin Lens Reflex camera (TLR) is equipped with two lenses, but they serve different purposes: one lens lets the photographer compose the image, the other directs light onto the film. So what the photographer sees is slightly different from what will be captured on film.

As humans have two front-facing eyes, we perceive the world in three dimensions – yet the majority of photographs were two-dimensional. There have been various camera setups to capture depth information and encode it into flat pictures. Stereoscopic photography goes back as far as the 1840s and required special cameras and viewers (who would have thought that virtual reality has such a long history?). And then there was the Creature from the Black Lagoon and other 3D movies that were filmed on analogue equipment and projected with special projectors; the audience had to wear red-green glasses to see the 3D effect.

So there were setups with more than one lens for film cameras, but they were complicated to use and not readily available to the general public.

Digital Cameras are Computers – Even though they tried to deny it

Digital cameras, on the other hand, are computers with attached lenses and image sensors. They are not all-purpose computers that can be freely programmed, but highly specialised computers with software partly hardcoded into the hardware.

Traditionally, digital cameras followed the setup of film cameras to a large extent. The optical hardware is comparable or even identical – you could mount the first EF lenses that Canon introduced with its EOS line of SLR cameras onto the recently released Canon EOS 5D Mark IV and start shooting.

Digital data can be manipulated in many ways. But until a few years ago, camera makers were still seeing camera development through eyes trained in the analogue age. Editing of the data in the camera was rather minimal – colour adjustments and noise reduction were mostly done in camera and the result saved as a JPG or RAW file. Just as you would have developed your film and prints in the darkroom, photographers were expected to develop their images in a digital darkroom – on a computer, with software like Photoshop or Lightroom. The nomenclature of photo editing speaks volumes about this analogue ancestry.

Getting in contact with the outside world was also relatively cumbersome – plug your camera into the computer, download the images, edit them (if you like) and then distribute them from your computer. In 2016, the trend towards WiFi-enabled cameras even reached the upper echelon of professional-grade DSLR cameras like the new Canon EOS 5D Mark IV or the Nikon D750. But using WiFi on a camera still does not feel natural, as the user interface is designed around concepts from pre-WiFi days. The integration of WiFi into the Nikon D500, which worked only with Bluetooth activated, is a prime example of how a manufacturer regards a DSLR more as a camera than as a mobile digital device.

Smartphones are (among other things) cameras that are digital by nature

Smartphones are called phones, but as every user knows well, phone calls represent an ever smaller part of what this kind of device can do and is used for. Smartphones are more like pocket-size computers with far more computing power than the NASA computers that put a man on the moon. And they are cameras, too!

In a way, smartphones might be considered the first truly digital cameras, as their designers shook off analogue design ideas completely and went in new directions. Instead of highly specialised hardware that emulates what you could expect from a film camera, a smartphone is a general-purpose computer that can be extended through apps to change images in a myriad of different ways. Editing can happen while the picture is taken, or in a more complex way afterwards. Capturing light was combined with computer algorithms.

What is “Computational Photography”?

The home planet

Photo: CC BY-NC-ND 2.0 Skip Steuart (Flickr)

You might simply dismiss “computational photography” as just another buzzword that flares up and withers away, but I think there is more to it - because we use it quite regularly already. It is the combination of optics with software to extend the scope of photographic techniques. In that sense, swipe panoramas, which extend the viewing angle of the lens, are computational photography. So are the HDR modes that every recent smartphone now offers.
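To make this concrete, here is a minimal sketch of one such technique - merging a bracket of differently exposed shots into a single balanced image. It is a toy example, not what any particular phone does internally; it relies on OpenCV’s Mertens exposure fusion, and the file names are placeholders:

```python
# Exposure fusion: merge a bracket of differently exposed shots
# into one well-balanced image. Mertens fusion needs no exposure
# metadata. File names are placeholders.
import cv2

# Three shots of the same scene: underexposed, normal, overexposed.
exposures = [cv2.imread(p) for p in ("dark.jpg", "normal.jpg", "bright.jpg")]

merge = cv2.createMergeMertens()
fused = merge.process(exposures)  # float image with values around [0, 1]

cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```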

For DSLRs, the so-called “full-frame” sensor is the non plus ultra - about the same size as a frame of 35mm film. As there is a single lens, the larger the sensor and the individual pixels on it, the more light it catches, the better the low-light performance, and the shallower the depth of field can become.

Lenses for DSLRs are designed mostly with an emphasis on optical image quality, so they can be quite heavy and take up a lot of space. Unthinkable for a device that is designed to be primarily mobile! The miniature lenses and sensors in smartphones come with a number of compromises in optical quality and without optical zoom. As engineers cannot work around the laws of physics, they had to think outside the analogue box to improve image quality. Nokia, for example, combined a relatively large 41 MP sensor with proprietary algorithms to fight image noise and provide better-quality digital zoom for the 7 MP pictures that the Nokia 808 PureView saved by default.
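Nokia’s actual pipeline is proprietary, but the basic idea behind such oversampling is simple enough to sketch: average blocks of neighbouring pixels into one output pixel, so that random sensor noise partially cancels out. A toy illustration of the principle, nothing more:

```python
# Oversampling sketch: average each (factor x factor) block of a
# high-resolution image into one output pixel. This is only the
# principle behind PureView-style downsampling, not Nokia's
# proprietary algorithm.
import numpy as np

def bin_pixels(img: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average each (factor x factor) block into one pixel."""
    h, w = img.shape[:2]
    h, w = h - h % factor, w - w % factor   # crop to a clean multiple
    img = img[:h, :w].astype(np.float32)
    # Split each axis into (blocks, factor) and average over the two
    # 'factor' axes; noise std shrinks roughly by a factor of 1/factor.
    binned = img.reshape(h // factor, factor,
                         w // factor, factor, -1).mean(axis=(1, 3))
    return binned.squeeze().astype(np.uint8)
```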

So the logical step was to question the single-lens paradigm. If algorithms are needed to overcome the physical limitations of the hardware, why rely on only one source of image data? HTC was the first smartphone maker to break with that concept. Its 2014 HTC One (M8) was perceived as rather an oddity at the time, especially as reviewers and users criticised the overall image quality. The technology came back in 2016, this time to more applause. The Huawei P9, Apple iPhone 7 Plus and LG V20 all take data from two cameras and combine the streams into one picture. They follow different strategies though.

Dual camera on the Huawei P9. Photo: CC BY-SA 2.0 Timo Kozlowski (Flickr)

Huawei combines two regular sensor chips and lenses, but one of the two sensors lacks the colour filters. This sensor produces black-and-white data that is supposed to provide more detail and contrast and to require less light. Colour and monochrome images are then combined into one picture.
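The exact fusion the P9 uses is proprietary, but a toy version of the idea is easy to sketch: keep the chrominance from the colour sensor and swap in the luminance from the monochrome sensor. This assumes both frames are already aligned and equally sized; the file names are placeholders:

```python
# Toy mono/colour fusion (not Huawei's actual pipeline): take the
# chrominance from the colour frame and the luminance from the
# sharper, less noisy monochrome frame. Assumes aligned frames.
import cv2

color = cv2.imread("color_sensor.jpg")                     # BGR image
mono  = cv2.imread("mono_sensor.jpg", cv2.IMREAD_GRAYSCALE)

# Move to a luminance/chrominance space, replace the luminance
# channel with the monochrome data, then convert back.
ycrcb = cv2.cvtColor(color, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = mono
fused = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

cv2.imwrite("fused.jpg", fused)
```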

Apple and LG use lenses with different focal lengths in their multi-camera setups, which allows for better quality digital zoom through interpolation of data.

All multi-camera setups, however, allow you to play with the focus after the picture was taken, because such a setup lets the smartphone capture stereoscopic information about the subject. Together with the image data from the two cameras, focus and depth of field become variables in image editing. To some extent, that is: the quality of the result varies depending on what you photographed. Still, the first time you see this in action on one of your own photos is truly mind-bending.
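Under the hood, this kind of refocusing rests on estimating depth from the two viewpoints and then blurring everything outside the chosen focus plane. Real phone pipelines use calibrated, edge-aware algorithms; the following sketch uses plain block matching, assumes a rectified stereo pair, and its file names and thresholds are made up for illustration:

```python
# Software refocus sketch: estimate disparity (inverse depth) from
# a rectified stereo pair, keep the chosen focus plane sharp and
# blur the rest. Illustrative only; file names are placeholders.
import cv2
import numpy as np

left  = cv2.imread("left.jpg")     # image from the first camera
right = cv2.imread("right.jpg")    # image from the second camera
gray_l = cv2.cvtColor(left,  cv2.COLOR_BGR2GRAY)
gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

# StereoBM returns fixed-point disparities scaled by 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(gray_l, gray_r).astype(np.float32) / 16.0

# Keep pixels near the chosen focus plane sharp, blur the rest.
focus_disparity = 20.0                     # arbitrary focus plane
mask = (np.abs(disparity - focus_disparity) < 5.0).astype(np.float32)
mask = cv2.GaussianBlur(mask, (21, 21), 0)[:, :, None]  # soften edges

background = cv2.GaussianBlur(left, (31, 31), 0)
refocused = (mask * left + (1.0 - mask) * background).astype(np.uint8)
cv2.imwrite("refocused.jpg", refocused)
```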

What will the future bring us?

1) Computational Photography is here to stay.

Computational photography is a new technology that still has its glitches and does not work reliably in every case. Quite often you can still tell whether an effect was achieved with optical means or with algorithms. So for the time being, computational photography might not be interesting to professional photographers for regular client work. But it is fascinating to play around with. HTC might have released its dual-camera setup before the time was right, but 2016 might have been the year it entered the mainstream, with several smartphone manufacturers already going multi-camera.

2) Smartphones for the masses, DSLRs for professionals.

Smartphones have already eaten into the market for point-and-shoot cameras, and before long both categories will merge into one. You might remember Samsung’s hybrids between point-and-shoot camera and smartphone, which stayed in their respective niches. It felt odd to use a device that seemed to change its character completely depending on whether you looked at it from the front or the back. The UK-based company Bullitt licensed the Kodak Ektra brand for a similar point-and-shoot/smartphone hybrid.

Multi-camera setups and computational photography could bring both worlds closer together. Also in 2016, a start-up presented prototypes of the Light L16, a camera with 16 different lenses mounted in one array and the promise of DSLR-quality images from a device the size of a pocketbook. The company’s CEO hinted that they might license their technology to smartphone manufacturers for multi-camera setups in future models.

Up to now, nobody outside the company seems to have been able to use the existing prototypes, so the jury is still out on that one.

3) The difference between taking a picture and editing it will become blurry.

We have seen it with popular apps like Instagram, Camera 360 and others that can apply filters to the live image – the hard border between shooting and editing (in the digital darkroom) has already become blurry, and it will become even more so. We can – if we want to – make decisions about the look of a photo on the spot. Whether that is a good development remains to be seen.

4) Algorithms will make things possible that we haven’t dreamed of yet.

Only a few years ago, I was convinced that an out-of-focus picture could go straight to the bin – now I can play around with the focus on a readily available smartphone like the Huawei P9. Where computational photography is concerned, we have barely scratched the surface. I am sure that in the not so distant future, other predicaments we have experienced in the past will be overcome with algorithms. What you can do with digital data is limited only by your own imagination and the power of your computer.

Focus Correction on Huawei P9

The camera app on the Huawei P9 makes focus and depth of field adjustable in post-production on the phone. Image: CC BY-SA 2.0 Timo Kozlowski (Flickr)

5) Photography will turn from documenting the world around us into creating new worlds.

The objectivity of photography has always been called into question. Digital photography smashed it. At first it was in post-processing where you could radically alter photos – from changing colour moods to collages made from different images, everything is possible in programs like Photoshop. With computational photography and augmented reality, you can make radical changes at a much earlier point. Sony, for example, provides a special camera mode for its Xperia phones in which computer-generated dinosaurs, dwarfs and the like appear on your screen, superimposed onto the live camera image. If you point the camera at a flat surface, like a table or a plain, the effect can be quite convincing. Or think of the many Snapchat selfie filters that put glasses, wigs and more on people’s faces, the ‘beauty modes’ for selfies especially in smartphones from Asian manufacturers, or science books enhanced by blending a 3D model of a heart over the textbook page about the human heart.

The possibilities are already overwhelming, raising the question: what is real? In David Lynch’s “Lost Highway” this question is at the heart of the film’s narrative. At the beginning, you hear this dialogue:

Ed: Do you own a video camera?

Renee Madison: No. Fred hates them.

Fred Madison: I like to remember things my own way.

Ed: What do you mean by that?

Fred Madison: How I remembered them. Not necessarily the way they happened.

In 1997, when “Lost Highway” was released, there was still a certain trust in the objectivity of a photograph. Beauty modes on selfie-centred cameras and similar tools can now transform the record of past events into ‘the way you remembered them’.

Especially for journalists and historians, this will carry heavy implications for their work and professional ethos. The Reuters news agency, for example, has since November 2015 accepted only photos that were taken in JPEG format rather than RAW. It cited speed, but also the fact that RAW files offer far more possibilities to edit images in ways that could alter the impression of what happened.
