I have never lacked confidence in my iPhone photography.
I can honestly look at my iPhone photography and say it’s good, oftentimes really good.
No false humility here.
This goodness is sometimes a result of Talent. Other times a result of Technique. Still other times a result of Technology, like Deep Fusion.
One of the computational pipelines of iPhone photography is Deep Fusion.
It’s truly, as Apple puts it, “Mad Science” in action.
The user can’t enable or disable Deep Fusion.
It happens automatically, under the hood, when you take and make iPhone photos. Click.
When you, the photographer, click the shutter button, the iPhone camera, unlike dedicated cameras, isn’t taking a single photo. It is often capturing nine separate photos and merging them together in a fraction of a second.
Think about this. Every photo you take has Nine Lives.
No wonder my, and your, iPhone photos enjoy so much qualitative goodness.
It’s trippy to think about the science happening with your photos each time you press the shutter.
Deep Fusion works on the iPhone 11 and later models.
Deep Fusion is similar to, but different from, Smart HDR (another computational pipeline).
Deep Fusion tends to kick in under low-to-medium light, while Smart HDR handles brighter scenes.
You can think of Deep Fusion as a feature that enhances the texture and clarity of your photos, while Smart HDR enhances their contrast and colors. Both are automatic and invisible to the user, so you don’t have to worry about choosing which one to use.
This is generally how it works.
When you press the shutter button, the camera captures a short-exposure image and a long-exposure image. These are the two main images that will be used for the final photo.
The camera also captures four additional short-exposure images and three additional long-exposure images before and after the main ones. These are “intermediate” images that are mainly used for the deghosting process, cleaning up motion artifacts when the frames are merged.
The short-exposure images are used to capture details and textures of the scene, while the long-exposure images are used to capture the colors and contrast of the scene.
The camera then sends the nine images to the Neural Engine, a dedicated machine-learning processor that runs the Deep Fusion algorithm.
The Deep Fusion algorithm analyzes the nine images, selects the best pixels from each one based on their quality, sharpness, and noise levels, then blends the selected pixels into a single image.
The result is a photo with more detail, less noise, and better dynamic range than any of the individual exposures.
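If you’re curious what that idea looks like in rough code, here’s a tiny illustrative sketch in Swift. To be clear, this is not Apple’s Deep Fusion algorithm (that stays locked inside the Neural Engine); it’s just a toy example, with made-up pixel and sharpness values, showing the general concept of weighting each pixel by a quality score and blending across exposures.

```swift
// Toy sketch only. NOT Apple's Deep Fusion, just the general idea:
// for each pixel position, blend the exposures, giving more weight to
// the frame that is sharpest (highest quality score) at that spot.

struct Exposure {
    let pixels: [Double]      // brightness values for a tiny 1-D "image"
    let sharpness: [Double]   // hypothetical per-pixel quality score (0...1)
}

func fuse(_ exposures: [Exposure]) -> [Double] {
    guard let count = exposures.first?.pixels.count else { return [] }
    return (0..<count).map { i in
        // Weight each exposure's pixel by its sharpness at this position.
        let totalWeight = exposures.reduce(0.0) { $0 + $1.sharpness[i] }
        let weightedSum = exposures.reduce(0.0) { $0 + $1.pixels[i] * $1.sharpness[i] }
        return totalWeight > 0 ? weightedSum / totalWeight : exposures[0].pixels[i]
    }
}

// A "short" exposure (crisp detail) and a "long" exposure (better color and contrast).
let shortFrame = Exposure(pixels: [0.20, 0.35, 0.50], sharpness: [0.9, 0.8, 0.7])
let longFrame  = Exposure(pixels: [0.25, 0.40, 0.55], sharpness: [0.3, 0.4, 0.6])

print(fuse([shortFrame, longFrame]))  // fused values, leaning toward the sharper frame
```

The real thing works on millions of pixels, nine frames deep, learned by the Neural Engine rather than hand-weighted, and it finishes before you’ve even lowered your arm.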
Maybe I’m not as good as I think :)
Click
Jack