What will 35mm Full Frame and APSC format camera figures of merit be in 2022?
I am kind of lucky as a hobby photographer. I live in Japan, and my day job is in the semiconductor industry. One of the main products the company I work for makes is CMOS Image Sensors for multiple customers. Consequently, I wanted to share with you a little non-technical peek at some of the advances in cameras and lenses that are coming up in the next several years.
I will break this down into CMOS Image Sensors, Image Processing Integrated Circuits, and a smaller amount on lenses and other camera subsystems. Much of this material can be found on the web – especially if you look at patent filings by the major camera companies.
The Image Sensors
CMOS Image Sensors are the heart of the camera. They are the single most important component in the camera and can cost ~$800 to $1,000 USD in a high end full frame camera. Pixel sizes are settling down to between 3.2μm and 5.5μm in pitch, with the sweet spot between resolution and photon capturing capability being about 4.3μm. This equates to a range of ~30MP to 60MP for Full Frame and 20MP to 28MP for APSC cameras. Most sensors made with pixel pitches below 4.8μm will be Backside Illumination, with most sensors 5.0μm or higher being Frontside Illumination. Sensor dynamic range will increase ~1.5 stops in the next 3 years to about 16.0 to 16.5 stops. Frame rates for 24MP sensors will be 60 Frames per Second (FPS), with higher end models reaching 120 FPS. For 35MP and above sensors, 30 to 60 FPS will be the range.
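As a rough back-of-envelope check on these pitch-to-megapixel figures, here is a quick sketch (assuming the nominal 36 x 24mm full frame size and a typical 23.5 x 15.6mm APSC size; exact dimensions vary by manufacturer):

```python
# Approximate pixel count from sensor dimensions and pixel pitch.
# Sensor sizes here are nominal/typical values, not any specific model.
def megapixels(width_mm, height_mm, pitch_um):
    cols = int(width_mm * 1000 / pitch_um)   # pixels across the sensor width
    rows = int(height_mm * 1000 / pitch_um)  # pixels across the sensor height
    return cols * rows / 1e6

print(f"Full frame at 4.3um: ~{megapixels(36.0, 24.0, 4.3):.0f}MP")   # ~47MP
print(f"APSC at 4.3um:       ~{megapixels(23.5, 15.6, 4.3):.0f}MP")   # ~20MP
```

At the 4.3μm sweet spot this lands comfortably inside the ranges quoted above.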
Why not 200MP DSLRs? While there are currently 160MP APSC sensors and 250MP full frame sensors available, they are used for machine vision applications. The issues with using these very high pixel count sensors in DSLRs or Mirrorless cameras are many. The frame rate (frames per second) is too low because of the huge amount of signal transfer required to get all that data out of the sensor, the signal processing requirement is immense, and therefore there are significant power consumption and heat generation issues that cannot be easily overcome in a portable DSLR. Lastly, the pixel pitch must be decreased to below 2μm (similar to many cellular phone cameras). Dynamic range and low light sensitivity are significantly impacted at these pixel pitches.
Noise and stacking the sensors
Total image noise will drop to a little more than half of what it is today for high end sensors through a variety of techniques that are beyond the scope of this short article. Suffice it to say that these noise reductions will significantly aid low light photography. With such high frame and data rates, stacked CMOS Image Sensors will become the norm for backside illumination Image Sensors. The stacked sensor will consist of an Image Sensor, a DRAM-like memory buffer integrated circuit and an Image Signal Processing integrated circuit merged together in a three-chip stack.
The stack will be connected by Copper electrical stud connectors approximately 800 times finer than a strand of human hair. Note – this stack is not the same as the Sigma Foveon sensor, where each pixel is vertically “stacked” to record Blue, Green and Red photons at sequentially deeper zones in the pixel. This stacked sensor architecture will be needed for the high frame rates, along with the reduction in total sensor noise.
Silicon is still the standard
Sensors will remain predominantly Silicon based but will continue to employ newer pixel reflection techniques, such as ultra-small metal mirrors and layers of materials with different refractive indices, in order to maximize the number of photons that interact with the Silicon or other charge-producing material employed in the pixel. The added benefit is that fewer photons will "leak" into adjacent pixels. This is important because pixels employ Red, Blue and Green color filters. Allowing one pixel to leak too many photons into an adjacent pixel produces false color readings, leading to less contrast in the image and less perceived resolution (lower sensor MTF).
Organic is coming
Finally, Organic Image Sensors will make their debut in 2020, but only for broadcasting at first due to higher dark current resulting in lower signal to noise ratios. You can think of these sensors as the inverse of the Active Matrix OLED displays you are used to looking at on your cellular phone (absorbing photons instead of emitting them). The material used to produce organic sensors is also somewhat similar to OLEDs, but the precise chemistry is different. Their merit is that they have very large quantum efficiency and full well capacity with respect to today's Si-based CIS sensors, and the sensor sensitivity can be easily modulated by an applied voltage to transparent electrodes located above, and reflective metal electrodes located below, the organic compound. Therefore, an extremely high dynamic range is possible in a sensor that is less expensive to produce than Silicon-based Backside Illumination sensors.
Image Signal Processing Integrated Circuits (ISP ICs) are the brains of the camera. An often overlooked yet fundamentally critical component of any DSLR is the ISP (for Nikon these are the "Expeed" series of integrated circuit chips touted in your camera specification sheets). The ISP's function is to manipulate the pixel data that comes out of the CMOS Image Sensor and transform it into a useful format that can be written to a non-volatile memory card. Let's take a step back and understand one level deeper what is going on here.
Your Nikon (or other non-Foveon) sensor is essentially a 2-dimensional array of pixels that collect photons of light that interact with the semiconductor material in the sensor through the creation and collection of electron charge in each Red, Green or Blue pixel. Note – this excludes a small number of Phase Detection Auto-Focus pixels and several other reference pixels used for other purposes. The exposure process takes place during the time governed by the shutter speed that you define for the photograph. In each pixel there is a light-to-charge generation region (the photodiode), a charge collection region that is attached to 4 or more readout circuit transistors (usually 5 to 6 in today's sensors) and, in most cases, an intermediate charge storage region called a "floating diffusion".
The output from the pixel readout circuitry is a signal that travels to the sensor row and column peripheral circuitry for further transformations such as amplification, column correlated double sampling for noise reduction, and subsequently signal digitization for Nikon and several other brand sensors. These “transformed signals” then feed into your ISP.
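Correlated double sampling is easy to picture with a toy model: each pixel is sampled twice, once just after reset and once after the photo-charge arrives, and the difference cancels the offset and reset (kTC) noise common to both samples. A minimal sketch (arbitrary units, simplified single-pixel model, not any real sensor's circuitry):

```python
import random

# Toy model of correlated double sampling (CDS). The reset (kTC) noise and
# fixed offset appear identically in both samples, so subtraction removes them.
def read_pixel(photo_charge, offset):
    reset_noise = random.gauss(0, 5)                      # varies read to read
    reset_sample = offset + reset_noise                   # sampled just after pixel reset
    signal_sample = offset + reset_noise + photo_charge   # sampled after charge transfer
    return signal_sample - reset_sample                   # offset and reset noise cancel

print(read_pixel(photo_charge=1000.0, offset=250.0))  # → 1000.0 (up to float rounding)
```

In real sensors the cancellation is not perfect, but CDS remains one of the main noise-reduction steps performed in the column circuitry.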
Signal conversion in ISP, or not?
One should note that Nikon transforms the pixel charges into a digital format in the sensor peripheral area using an Analog to Digital Converter (ADC), while Canon chooses to do this later in the ISP. Both techniques have their merits and demerits, but it is generally accepted that digitizing the signal as close to the pixel array as possible maintains superior fidelity. The key points here are to transfer the data rapidly and without distorting the signal parameters before they enter the ISP.
The ISP in detail
So – what does the ISP do? That depends somewhat on you and your preference for file type. The main operations an ISP performs are manipulations of the "image signals" into the format and with the specific coefficients that you select. There is some input buffer memory to allow a high frame capture rate without bottlenecking the data as it moves into the ISP. One of its functions is to perform pixel interpolation to achieve full pixel resolution.
Nikon sensors employ a Bayer-type Color Filter pattern with a "Red/ Green/ Blue/ Green" repeating square pattern. In a Bayer pattern sensor, ¼ of the pixels are Red, ¼ are Blue, and ½ are Green. By comparing neighboring pixel measurements, each pixel is then represented as a blend of the pixel in question and components of its nearest neighbors. This gives you a full resolution sensor with the color spectrum defined by a bit count. Some other signal adjustments are also made.
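A minimal sketch of what that interpolation looks like for one missing value (bilinear averaging, the simplest demosaicing scheme; real ISPs use more sophisticated edge-aware algorithms):

```python
# Bilinear Bayer interpolation for one case: estimating the green value at a
# red pixel location as the mean of its four green neighbors (up/down/left/right).
def green_at_red(mosaic, row, col):
    return (mosaic[row - 1][col] + mosaic[row + 1][col] +
            mosaic[row][col - 1] + mosaic[row][col + 1]) / 4.0

# 3x3 patch of raw values centered on a red pixel in an RGGB mosaic;
# the four non-zero entries are its green neighbors.
patch = [
    [0,  90,  0],
    [110, 0, 102],   # center is the red pixel (its green value is unknown)
    [0,  98,  0],
]
print(green_at_red(patch, 1, 1))  # → 100.0
```

The same neighbor-averaging idea is applied to fill in the two missing color values at every pixel location.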
If you photograph in RAW image format, the output is then quantized into a writeable RAW format and is sent to the camera's memory card for storage along with the EXIF data. If you photograph in *.jpg format, then many other transform functions are performed as well, based on your desired *.jpg in-camera settings (contrast, saturation, etc.) along with firmware-coded transforms from the camera manufacturer. After these image manipulations are completed, the output file is again written to the camera's memory card.
Digital data exiting the ISP in RAW, *.jpg or other formats is written to ultra-high write speed non-volatile memory cards (which will have a transfer speed of > 500MB/sec in the year 2022). All of this takes a lot of power at high pixel counts and high frame rates. Therefore, the ISPs will be fabricated at 7nm to 16nm processing nodes (Trigate / FinFET architectures use transistors approximately 10,000 times thinner than a human hair) with a balance between low leakage and high performance.
For reference, today’s ISPs are processed at about 28nm technology. Color depth will most likely remain at 14 Bit in 2022 (14 Bit means 2 raised to the 14th power, or 16,384 tonal levels per color channel). It will likely not increase more than 1 or 2 bits from today’s levels because the data handling requirements double with each additional bit.
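The bit-depth arithmetic is simple to verify directly: the number of tonal levels per channel is 2 raised to the bit depth, so each added bit doubles it:

```python
# Tonal levels per color channel for a given bit depth: levels = 2 ** bits.
def tonal_levels(bits):
    return 2 ** bits

for bits in (12, 14, 16):
    print(f"{bits}-bit: {tonal_levels(bits):,} levels per channel")
# 12-bit: 4,096   14-bit: 16,384   16-bit: 65,536
```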
On a related note, Lithium Ion Batteries will employ higher energy density techniques using more exotic dopants (small quantities of different elements such as Manganese or other transition metals on the periodic table). These will be used to enhance overall charge-holding or charge transfer capability, to keep electrodes from being coated with undesirable material that diminishes battery efficiency, and/or to enhance the quick-charge characteristics of depleted batteries. This will increase the number of shots per battery charge by about 25% to 30% over today's performance at current frame rates and data volumes, and will decrease charge times by about the same percentage or even more (perhaps 35% to 45%).
However, almost all gains in battery shot count between charges will be erased by higher frame rates, larger data transfer volumes, always-on sensors (mirrorless) and the associated higher power requirements. For clarity, with properly defined battery manufacturing tolerance specifications and a rigorous outgoing quality inspection process, there should be essentially no danger of fire from these enhanced batteries if they are used properly.
Finally, Hydrogen Peroxide-based or similar chemical fuel cell energy storage devices will probably not be available for the next 8 to 10 years.
Lenses are the eyes of the camera. Lens technology will continue to improve as newer low dispersion and higher Refractive Index glass formulations are developed. This has already been realized with the Zeiss announcement of Aluminum Oxynitride lens elements several months ago.
Lenses will get lighter, but this will be offset by more complex / highly corrected optical designs. Fresnel lenses (lenses with circumferential micro grooves) for one or more of the elements will become more commonplace.
More complex and effective multi-layer optical coatings will be employed to reduce optical losses in the image path. Perhaps a "metamaterial" lens element will be introduced into lenses. These metamaterial elements can provide a negative index of refraction that will significantly enhance the optical properties of the lens cell at very low weight, through a process of semiconductor-type 3D sub-micron "features" etched on a planar glass blank.
Many challenges remain in fabricating apochromatic metamaterial lenses, and the introduction of these semiconductor-processed, patterned electromagnetic waveguide elements may not come until a few years after 2022. Of course, there will be less brass employed in the lens cell, and more high-density plastics used to reduce weight.
The rear display
Currently the display technology is ~2.3M "dots" for a 3.2 inch (82mm) diagonal display (W x H: 2.7 x 1.8 inch, 68.6mm x 45.8mm). In 2022 this same sized display will have approximately 30M dots, or roughly 13 times the density / resolution.
Dot sizes will be down in the 7 to 12 micron range, which is the maximum "young eye" resolution limit at normal viewing distances. The display will use silicon instead of glass as the backplane. RGB subpixels will be OLED or μ-LED (more likely OLED in 2022).
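Those dot sizes can be sanity-checked from the panel dimensions given above, assuming the dots are spread uniformly over the display area:

```python
import math

# Center-to-center dot spacing for a rear display, assuming a uniform layout
# of "dots" over the panel area (68.6mm x 45.8mm for a 3.2-inch display).
def dot_pitch_um(width_mm, height_mm, dots):
    area_mm2 = width_mm * height_mm
    return math.sqrt(area_mm2 / dots) * 1000  # pitch in microns

print(f"2.3M dots: ~{dot_pitch_um(68.6, 45.8, 2.3e6):.0f}um pitch")  # ~37um today
print(f"30M dots:  ~{dot_pitch_um(68.6, 45.8, 30e6):.0f}um pitch")   # ~10um in 2022
```

The 30M-dot figure works out to roughly 10μm, right in the middle of the 7 to 12 micron range.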
Well there you have it. Will all of the above come true by 2022? Probably not, but much of it will. There are many very talented engineers around the world working on commercializing all the above-mentioned technologies.
Editor's Note: Thank you very much Guy for giving us some further insight into what is going on from the engineering perspective in the imaging industry. Interesting!