What's new in the Apple iPhone 14 camera
Last week, Apple unveiled its new generation iPhone 14 series. While the new devices are largely iterative upgrades over 2021’s iPhone 13 series, Apple did introduce a number of new camera features, particularly on the iPhone 14 Pro. Among the changes are two major additions: a new ‘Photonic Engine’, and, finally, one of the new crop of ‘high resolution’ sensors.
On this note, here’s a look at what Apple has actually changed in its latest generation iPhone camera, and whether that will make any difference to you, the user.
Pixel binning on the 48MP main camera
The biggest change, of course, is the 48MP main camera on the iPhone 14 Pro and Pro Max. Apple has, for the longest time, resisted an upgrade to a modern high resolution sensor, a class of hardware that has been around on Android smartphone cameras for nearly four years now. In fact, just as Apple introduced its first 48MP camera phone, Motorola announced a 200MP camera phone, a resolution so far associated with Hasselblad medium format cameras aimed at professional photography and cinema work.
On the face of it, the 48MP pixel binning on the iPhone 14 Pro works similarly to any other smartphone’s. Apple’s image signal processor (ISP) algorithms, however, will automatically decide when to fuse pixels together and when to use the full 48 million pixels. With plenty of light around, a full 48MP image is viable even though each pixel on a high resolution sensor is smaller, and therefore gathers less light, than a pixel on an industry standard 12MP sensor.
As a user, you will not be able to control when Apple shoots ‘binned’ images, in which four adjacent pixels are combined to create one ‘super’ pixel. This produces images that are 12MP in size, which remains the industry standard for mobile photography.
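Apple’s actual binning pipeline is proprietary, but the basic idea of fusing each 2x2 block of pixels into one ‘super’ pixel can be sketched in a few lines of NumPy. The function name and toy frame here are purely illustrative, and a real ISP would also have to account for the colour filter array, which this sketch ignores:

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into one 'super' pixel.

    A 48MP frame binned this way yields a 12MP frame. This toy
    version assumes a plain grayscale array with even dimensions.
    """
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A toy 8x8 "sensor" readout becomes a 4x4 binned image.
frame = np.arange(64, dtype=np.float64).reshape(8, 8)
binned = bin_2x2(frame)
print(binned.shape)  # (4, 4)
```

Each output pixel is the mean of four neighbouring input pixels, which is why binned shots trade resolution for better per-pixel light gathering.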
The only case where 48MP images can be forced is the ProRAW mode, where Apple creates a single 48MP image without any compression algorithm applied.
Compression algorithms process the data that a camera captures and reduce it in size to create portable image files.
The Photonic Engine
Complementing the high resolution sensor is Apple’s new image processing engine, called the ‘Photonic Engine’. Essentially a continuation of Deep Fusion, Apple’s neural engine technique that used High Dynamic Range (HDR) processing to improve contrast, dynamic range and low light portions of images, the Photonic Engine applies its neural network computations right after the ISP processes colour and light data, and before any compression is applied.
Simply put, think of this as Deep Fusion running earlier in the pipeline: the processing and enhancement of images happens before Apple’s algorithms shrink the image down for viewing. This gives the algorithms more data to work with, which in turn should improve the quality of the enhancements Deep Fusion can make. It is somewhat like HDR on steroids, since it processes a much larger amount of data.
This can, in theory, offer a better balance of noise, grain, detail, colour and shadows in low light, but how the implementation fares in the real world remains to be seen.
‘Optical quality zoom’
The high resolution sensor also allows Apple to offer “practically four” focal lengths, as it said during the keynote. While the telephoto lens offers 3x zoom, Apple has added a ‘2x’ level as well, which crops into the main sensor to produce a zoomed-in image without any additional optics.
To preserve quality, Apple reads the central 12 million pixels of the 48MP sensor at full resolution, retaining the fine detail that conventional digital zoom, which upscales a smaller image, severely degrades.
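As a rough illustration of crop-based zoom (a simplified sketch, not Apple’s implementation), the ‘2x’ view amounts to keeping the central quarter of the frame instead of upscaling a smaller one:

```python
import numpy as np

def crop_zoom_2x(frame: np.ndarray) -> np.ndarray:
    """Simulate '2x' zoom by cropping the central quarter of the
    sensor readout, rather than enlarging a lower-resolution image.
    Assumes a toy grayscale array with dimensions divisible by 4.
    """
    h, w = frame.shape
    return frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

full = np.arange(64).reshape(8, 8)  # toy 8x8 "full" frame
zoomed = crop_zoom_2x(full)
print(zoomed.shape)  # (4, 4)
```

Because every output pixel comes straight off the sensor, no interpolation is involved; the cost is simply a lower-resolution result than the full frame.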
Action Mode
Among other changes, Apple also showcased an ultra-stabilised Action Mode for video. Apple has not mentioned any gimbal-style hardware stabiliser, so this is most likely a software-driven feature. Here too, the trick is to crop into the image sensor, which the large 48MP sensor leaves room for: algorithms then shift the crop window to counteract the camera’s movement from frame to frame, producing stabilised footage.
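A highly simplified sketch of this kind of crop-based electronic stabilisation: keep a margin of spare pixels around the output crop, then slide the crop window against the measured camera motion. The margin size, the motion values and the function itself are illustrative assumptions, not Apple’s pipeline:

```python
import numpy as np

def stabilise(frame: np.ndarray, motion, margin: int = 8) -> np.ndarray:
    """Slide a fixed-size crop window opposite to the measured
    per-frame camera motion (dx, dy), clamped to the spare margin.

    The output is always (h - 2*margin, w - 2*margin) pixels, so the
    video keeps a constant size while the crop absorbs the shake.
    """
    dx, dy = motion
    ox = int(np.clip(margin - dx, 0, 2 * margin))
    oy = int(np.clip(margin - dy, 0, 2 * margin))
    h, w = frame.shape
    return frame[oy: h - 2 * margin + oy, ox: w - 2 * margin + ox]

frame = np.arange(32 * 32).reshape(32, 32)  # toy 32x32 sensor frame
out = stabilise(frame, motion=(0, 0))
print(out.shape)  # (16, 16)
```

With zero motion the crop sits dead centre; when the camera jerks right, the window slides left to keep the subject in place, which is why a higher-resolution sensor helps: the bigger the margin of spare pixels, the larger the shake that can be absorbed.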