The smartphone is poised to take over digital photography. For the Canons, Nikons, Fujis, and Leicas of the world, the smart response would be to embrace the app ecosystem. Are they ready for it?
I write this on the flight back to Paris, returning from a trip to Australia. I took advantage of an invitation to the Storyology Festival in Sydney and Melbourne to spend a few days in Tasmania. A few years back, I would have carried a bulky full-frame DSLR and a couple of my favorite lenses. This year, I settled for a Fuji X100T, a nice compact with great performance. Guess what happened? I took most of my pics with my iPhone 7+.
I will sell all my pro gear. In my case, I'm better off investing in a top-of-the-line smartphone (i.e., trading it for a new one every 12–18 months) than spending much more on a slight upgrade to a DSLR or mirrorless camera.
Why? Four reasons: Performance, interface, post-production capabilities and connectivity.
Granted, I miss the bokeh effect (background blur) of my DSLR. Sometimes. The iPhone 7+ addresses the issue with its dual camera system (with a “long” lens equivalent to a 56 mm) and clever digital processing that together conspire to create an artificial depth-of-field effect in portrait mode. Below is a portrait of a Tasmanian friend without and with the depth-of-field effect:
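The principle behind such a portrait mode can be sketched in a few lines. The function below is a toy illustration, not Apple's actual pipeline: it assumes we already have a per-pixel depth map (the hard part the dual camera actually solves) and blends a sharp image with a blurred one according to each pixel's distance from the focal plane.

```python
import numpy as np

def box_blur(img, k=5):
    """Naive box blur: average each pixel over a (2k+1)^2 window."""
    pad = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += pad[k + dy : k + dy + img.shape[0],
                       k + dx : k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

def synthetic_bokeh(img, depth, focus_depth, tolerance=0.1):
    """Blend sharp and blurred images based on distance from the focal plane."""
    blurred = box_blur(img)
    # Weight: 0 near the focal plane (stay sharp), 1 far away (fully blurred).
    w = np.clip(np.abs(depth - focus_depth) / tolerance - 1.0, 0.0, 1.0)
    return (1 - w) * img + w * blurred

# Toy scene: a bright "subject" stripe at depth 1.0 on a background at depth 5.0.
img = np.zeros((32, 32))
img[:, 14:18] = 1.0
depth = np.full((32, 32), 5.0)
depth[:, 14:18] = 1.0
out = synthetic_bokeh(img, depth, focus_depth=1.0)
```

The subject stays pixel-sharp while everything off the focal plane is softened; real implementations vary the blur radius with depth and feather the subject's edges, which is where the "fake bokeh" criticism usually bites.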
I see comments coming: “You know nothing about photography. You are unable to see the difference between a true optical bokeh and a fake digital one. Plus, you completely disregard the poor performance of your iPhone in low-light conditions”.
Here's my viewpoint:
— Expect to pay $4,000 to $6,000 and carry 2 kilograms of gear (a full-frame DSLR and a quality large-aperture lens) to get that magnificent depth-of-field effect, vs. $800 and 160 grams for my iPhone.
— True: in a smartphone, the blur quality of a portrait is inferior. But we all know it will get better with each new software release (and with each new version of the Image Signal Processor and each sensor generation).
— As for low-light performance, true again: the laws of physics still apply. Just keep one number in mind: an iPhone sensor is roughly 57 times smaller than the one in a full-frame camera, hence a much higher pixel density (if you want all the numbers, go to DxO Mark, the ultimate digital imaging reference). Since the number of photons a pixel captures is directly proportional to its area, a large sensor works much better, generating less “noise” (grain) in pictures taken in low light. But — again — software is already taking care of this. On a desktop, you can use noise-removal software such as the one in DxO Optics Pro, which works spectacularly and is easy to use. On a mobile device, the denoisers included in many inexpensive apps already do a more than passable job.
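That size gap is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming a 36 x 24 mm full-frame sensor and a roughly 4.8 x 3.6 mm phone sensor (exact iPhone dimensions vary by model and are not given in the text):

```python
# Back-of-the-envelope sensor comparison. Dimensions are assumptions:
# full-frame is 36 x 24 mm by definition; a ~1/3" phone sensor is
# roughly 4.8 x 3.6 mm (exact figures vary by phone model).
full_frame_mm2 = 36.0 * 24.0          # 864 mm^2
phone_mm2 = 4.8 * 3.6                 # ~17.3 mm^2
area_ratio = full_frame_mm2 / phone_mm2
print(f"area ratio: {area_ratio:.0f}x")

# At similar resolutions (say 12 MP each), per-pixel light gathering
# scales with pixel area, and shot-noise SNR with its square root.
pixels = 12e6
snr_advantage = ((full_frame_mm2 / pixels) / (phone_mm2 / pixels)) ** 0.5
print(f"shot-noise SNR advantage: ~{snr_advantage:.1f}x")
```

With these assumed dimensions the area ratio lands around 50x, the same order of magnitude as the ~57x figure quoted above, and shot noise alone hands the big sensor a roughly 7x SNR head start. That head start is precisely the gap denoising software is trying to close.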
In conclusion, the performance gap is narrowing at a fast pace. Unless you covet a spread in a magazine or a photo exhibition, an image from an iPhone or any high-end smartphone, properly processed, works surprisingly well. (I recently met a National Geographic photographer whose story included images taken with his iPhone 6.) Which brings us to the second argument:
The major shift we are witnessing is the democratization of post-processing. Years ago, Photoshop ran the show; later, Lightroom became the desktop post-processing software for photo enthusiasts. Today, most image correction can be done directly on the device using one of the dozens of apps available on iOS or Android. I tested many of them and settled for Snapseed. It allows all possible adjustments, including processing RAW images, in a very intuitive manner (a world of difference from Photoshop). Instead of dealing with the multiple folders where I used to put different selection stages, I now have a neat workflow: images captured with my iPhone, selection made in iOS Photos, processing in Snapseed, and posting on Instagram. All images automatically find their way to my MacBook courtesy of iCloud (where they stay uncompressed) for later use. (I trust a similarly convenient workflow is available with Android phones and Windows or Mac desktops.)
The case for Android onboard high-end cameras
Instagram has become an essential promotional tool for professional photographers. All of those I talk to tell me there is now a direct correlation between the assignments they are able to get and their audience on Instagram. For now, to post on Instagram, if they don’t shoot directly with their iPhone, they need to go through the cumbersome process of transferring their pictures from the device (sometimes via their computer), then processing them before posting.
Wouldn’t it be great, then, if it were possible to process a photo right in the camera and beam it to Instagram once a wifi access point is available? Suddenly, a Sony A7 (my dream camera), with its unparalleled low-light performance, would have access to the giant trove of photo apps available on Android. Even images taken in RAW format could be “developed” on board. (My dream would be to see Apple license a version of iOS to two or three trusted brands, but we all know Cupertino will freeze over before that happens.)
I’m absolutely certain that connecting high-end cameras to the Android app ecosystem would greatly enhance their market share and most likely ensure their survival.
Let’s be blunt: except for the pro and semi-pro segment, the digital camera market now belongs to the smartphone. The main reason: R&D expenditure in the smartphone sector vastly outpaces the camera industry's investment power. Think about it: the Apple Imaging Group, which designs the camera, the ISP, and the various pieces of software, has over 1,000 engineers. And that says nothing of the wireless engineering and general OS development groups.
A new generation of cameras can also change the game.
The DxO One camera, which captures images in standalone mode but can also be plugged into the Lightning connector of an iPhone, is an interesting avenue. Its low-light performance is stunning (see DxO on Instagram), as its sensor is 8 times larger than the iPhone’s while still offered in a super-compact format. The DxO One goes far towards the merger between a high-performance camera and a smartphone ecosystem described above.
More futuristic is the Light.co camera, which relies on… 16 sensors working in parallel to produce a composite image — a rather heavy one. Great concept in theory: this camera draws its components from the smartphone world (acrylic lenses, sensors, ISPs). But there is a great deal of skepticism regarding the huge computational load required to create the composite image, and the related power management challenges. (I sent Light.co a full page of questions about this intriguing product… they never got back to me.) On the same principle (multiple lenses and on-the-fly complex digital processing), Lytro tried, without any tangible success. Let's not abandon all hope: Moore’s Law is on the side of these two adventurous manufacturers.
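The underlying idea, combining many small noisy exposures into one cleaner image, is simple to demonstrate. Here is a minimal NumPy sketch with made-up numbers, deliberately ignoring the hard parts (aligning frames taken from slightly different viewpoints) that make the real computational load so heavy:

```python
import numpy as np

rng = np.random.default_rng(0)

# One "scene" as seen by N small sensors, each adding independent noise.
N = 16
scene = np.full((64, 64), 0.5)
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(N)]

# Composite: simple per-pixel average. (Real pipelines must first register
# the frames against each other, which is where the heavy computation lives.)
composite = np.mean(frames, axis=0)

single_noise = np.std(frames[0] - scene)
composite_noise = np.std(composite - scene)
# Averaging N independent frames cuts noise by ~sqrt(N), i.e. ~4x for N=16.
```

This square-root law is why stacking 16 phone-grade sensors can, in principle, rival one much larger sensor; whether the power budget of doing it in real time works out is exactly the open question raised above.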
Around 2021, “High Definition” smartphones equipped with six to 13 different cameras will have the zoom & depth functionality of a DSLR via their hardware and advances in computational photography. (…) By 2022, new smartphones and tablets will have at least four cameras and will drive growth of content for Augmented & Virtual Reality.
To conclude, the reason I believe in merging the traditional digital imaging sector with the smartphone world is the importance of software. Whether it is to work around physical constraints such as lens size and sensor format, or to handle post-processing, software will keep eating the world.
In the traditional camera world, progress is incremental, versus exponential on the smartphone side.
“Memo to camera makers: put Android in your device or face extinction” was originally published in Monday Note on Medium.