
Year 2022: Digital Camera Specifications

guy eristoff (geristoff)

Keywords: sensor, cmos, future, japan, manufacturing

What will 35mm full-frame and APS-C format camera figures of merit be in 2022?

I am kind of lucky as a hobby photographer. I live in Japan, and my day job is in the semiconductor industry.  One of the main products the company I work for makes is CMOS Image Sensors for multiple customers.  Consequently, I wanted to share with you a little non-technical peek at some of the advances in cameras and lenses that are coming up in the next several years.


An image sensor. Courtesy of Sony Corp. Japan

I will break this down into CMOS Image Sensors, Image Processing Integrated Circuits, and a smaller amount on lenses and other camera subsystems.  Much of this material can be found on the web – especially if you look at patent filings by the major camera companies.

The Image Sensors

CMOS Image Sensors are the heart of the camera.  They are the single most important component in the camera and can cost ~$800 to $1,000 USD in a high-end full-frame camera.  Pixel sizes are settling down to between 3.2μm and 5.5μm in pitch, with the sweet spot between resolution and photon-capturing capability being about 4.3μm.  This equates to a range of ~30MP to 60MP for full frame and 20MP to 28MP for APS-C cameras.  Most sensors with pixel pitches below 4.8μm will be Backside Illumination, while most sensors at 5.0μm or larger will be Frontside Illumination. Sensor dynamic range will increase ~1.5 stops in the next 3 years, to about 16.0 to 16.5 stops. Frame rates for 24MP sensors will be 60 Frames per Second (FPS), with higher-end models reaching 120 FPS.  For sensors of 35MP and above, 30 to 60 FPS will be the range.
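As a rough sanity check on these figures, the megapixel count follows directly from the sensor dimensions and the pixel pitch. A minimal sketch, assuming the standard 36 x 24 mm full-frame and 23.5 x 15.6 mm APS-C sensor dimensions:

```python
# Rough megapixel count implied by a given pixel pitch.
def megapixels(width_mm, height_mm, pitch_um):
    """Approximate pixel count for a sensor of the given size and pitch."""
    pitch_mm = pitch_um / 1000.0
    return (width_mm / pitch_mm) * (height_mm / pitch_mm) / 1e6

# Full frame (36 x 24 mm) at the 4.3 um "sweet spot" pitch: ~46.7 MP
ff = megapixels(36.0, 24.0, 4.3)
# APS-C (23.5 x 15.6 mm) at the same pitch: ~19.8 MP
apsc = megapixels(23.5, 15.6, 4.3)

print(f"Full frame: {ff:.1f} MP, APS-C: {apsc:.1f} MP")
```

Both numbers land inside the ranges quoted above, which is why 4.3μm sits at the middle of the sweet spot.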

Why not 200MP DSLRs?  While there are currently 160MP APS-C sensors and 250MP full-frame sensors available, they are used for machine vision applications.  The issues with using these very high pixel count sensors in DSLRs or mirrorless cameras are many.  The frame rate is too low because of the huge amount of signal transfer required to get all that data out of the sensor, the signal processing requirement is immense, and the resulting power consumption and heat generation cannot easily be overcome in a portable DSLR. Lastly, the pixel pitch must be decreased to below 2μm, (similar to many cellular phone cameras), and dynamic range and low-light sensitivity are significantly impacted at these pixel pitches.
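To see why the frame rate collapses at very high pixel counts, consider the raw readout data rate, which scales with pixel count, frame rate and bit depth. A quick illustration, assuming uncompressed 14-bit samples (the frame rates chosen here are illustrative, not published specs):

```python
# Raw (uncompressed) readout data rate: pixels x fps x bits per sample.
def readout_gbps(megapixels, fps, bits=14):
    return megapixels * 1e6 * fps * bits / 1e9   # gigabits per second

print(f"250 MP @ 10 fps: {readout_gbps(250, 10):.1f} Gbit/s")  # 35.0 Gbit/s
print(f" 24 MP @ 60 fps: {readout_gbps(24, 60):.1f} Gbit/s")   # ~20.2 Gbit/s

# Pixel pitch forced by squeezing 250 MP onto a full-frame (36 x 24 mm) die:
pitch_um = (36.0 * 24.0 / 250e6) ** 0.5 * 1000
print(f"Implied pitch: {pitch_um:.2f} um")                     # ~1.86 um
```

Even at a modest 10 fps, the 250MP sensor moves more data than a 24MP sensor at 60 fps, and the implied pitch confirms the sub-2μm figure quoted above.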

Noise and stacking the sensors

Total image noise will drop to a little more than half of what it is today for high-end sensors, via a variety of techniques that are beyond the scope of this short article.  However, suffice it to say that these noise reductions will significantly aid low-light photography.  With such high frame and data rates, stacked CMOS Image Sensors will become the norm for backside illumination Image Sensors.  The stacked sensor will consist of an Image Sensor, a DRAM-like memory buffer integrated circuit and an Image Signal Processing integrated circuit merged together in a three-chip stack.
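The way independent noise sources combine explains why these reductions pay off most in the shadows: photon shot noise and electronic read noise add in quadrature, so cutting read noise helps exactly where the signal is weakest. A sketch with illustrative electron counts (the specific read-noise values are assumptions, not measured figures):

```python
import math

def total_noise(signal_electrons, read_noise_e):
    """Shot noise and read noise add in quadrature (electrons RMS)."""
    shot = math.sqrt(signal_electrons)   # photon shot noise
    return math.sqrt(shot ** 2 + read_noise_e ** 2)

# A deep-shadow exposure of 25 collected electrons:
today = total_noise(25, 4.0)    # assumed read noise ~4 e- RMS
future = total_noise(25, 1.5)   # assumed reduced read noise
print(f"SNR today: {25 / today:.1f}, future: {25 / future:.1f}")
```

In bright areas shot noise dominates and the gain is small; in deep shadows the read-noise reduction delivers most of the improvement.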

The stack will be connected by Copper electrical stud connectors approximately 800 times finer than a strand of human hair.  Note – this stack is not the same as the Sigma Foveon sensor, where each pixel is vertically “stacked” to record Blue, Green and Red photons at sequentially deeper zones in the pixel.  This stacked sensor architecture will be needed for the high frame rates, along with the reduction in total sensor noise.

Silicon is still the standard

Sensors will remain predominantly Silicon based but will continue to employ newer pixel reflection techniques, such as ultra-small metal mirrors and layers of materials with different refractive indices, in order to maximize the number of photons that interact with the Silicon or other charge-producing material employed in the pixel.  The added benefit is that fewer photons "leak" into adjacent pixels.  This is important because pixels employ Red, Blue and Green color filters.  Allowing one pixel to leak too many photons into an adjacent pixel produces false color readings, leading to less contrast in the image and less perceived resolution, (lower sensor MTF).

Organic is coming

Finally, Organic Image Sensors will make their debut in 2020, but only for broadcasting at first, due to higher dark current resulting in lower signal-to-noise ratios. You can think of these sensors as the inverse of the Active Matrix OLED displays you are used to looking at on your cellular phone, (absorbing photons instead of emitting them).  The material used to produce organic sensors is also somewhat similar to OLEDs, but the precise chemistry is different. Their merit is that they have very large quantum efficiency and full well capacity with respect to today's Si-based CIS sensors, and the sensor sensitivity can be easily modulated by a voltage applied between transparent electrodes located above, and reflective metal electrodes located below, the organic compound.  Therefore, an extremely high dynamic range is possible in a sensor that is less expensive to produce than Silicon-based Backside Illumination sensors.

Signal Processing

Image Signal Processing Integrated Circuits (ISP ICs) are the brains of the camera.  An often overlooked yet fundamentally critical component of any DSLR is the ISP, (for Nikon these are the "Expeed" series of integrated circuit chips touted in your camera specification sheets).  The ISP's function is to manipulate the pixel data that comes out of the CMOS Image Sensor and transform it into a useful format that can be written to a non-volatile memory card.  Let's take a step back and understand one level deeper what is going on here.

Your Nikon, (or other non-Foveon) sensor is essentially a 2-dimensional array of pixels that collect photons of light that interact with the semiconductor material in the sensor, through creation and collection of electron charge in each Red, Green or Blue pixel. Note – this excludes a small number of Phase Detection Auto-Focus pixels and several other reference pixels used for other purposes. The exposure process takes place during the time governed by the shutter speed that you define for the photograph. In each pixel there is a light-to-charge generation region, (the photodiode), a charge collection region that is attached to 4 or more readout circuit transistors, (usually 5 to 6 in today's sensors) and, in most cases, an intermediate charge storage region called a "floating diffusion".
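The exposure process described above can be caricatured in a few lines: photons arrive at a rate set by the scene, a fraction are converted to charge (the quantum efficiency), and the collected charge clips at the pixel's full-well capacity. All numbers below are illustrative assumptions:

```python
# Toy model of exposure in one pixel: photons arrive, a fraction are
# converted to electrons (quantum efficiency), and the collected charge
# saturates at the pixel's full-well capacity. All values illustrative.
def collected_electrons(photon_rate, shutter_s, qe=0.6, full_well=60000):
    electrons = photon_rate * shutter_s * qe
    return min(electrons, full_well)   # pixel clips (blows out) at full well

print(collected_electrons(photon_rate=1e6, shutter_s=0.01))  # ~6000 electrons
print(collected_electrons(photon_rate=1e8, shutter_s=0.01))  # clipped at 60000
```

Doubling the shutter speed doubles the collected charge until the full well is reached, which is where highlight clipping comes from.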

The output from the pixel readout circuitry is a signal that travels to the sensor row and column peripheral circuitry for further transformations such as amplification, column correlated double sampling for noise reduction, and subsequently signal digitization for Nikon and several other brand sensors.  These “transformed signals” then feed into your ISP.

Signal conversion in ISP, or not?

One should note that Nikon transforms the pixel charges into a digital format in the sensor's peripheral area using an Analog to Digital Converter, (ADC), while Canon chooses to do this later, in the ISP. Both techniques have their merits and demerits, but it is generally accepted that digitizing the signal as close to the pixel array as possible maintains superior fidelity.  The key points are to transfer the data rapidly and without distorting the signal parameters before they enter the ISP.
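Wherever the ADC sits, its job is the same: map the analog pixel voltage onto one of 2^N digital codes. A minimal sketch of an idealized 14-bit quantizer (real camera ADCs are pipelined or column-parallel designs and far more subtle):

```python
# Idealized N-bit analog-to-digital conversion of a pixel voltage.
def quantize(voltage, v_full_scale, bits=14):
    """Map an analog voltage onto an N-bit digital code, clamped to range."""
    levels = 2 ** bits                        # 16384 codes at 14 bits
    code = int(voltage / v_full_scale * (levels - 1))
    return max(0, min(levels - 1, code))      # clamp to 0 .. levels-1

print(quantize(0.5, 1.0))   # mid-scale  -> 8191
print(quantize(1.0, 1.0))   # full scale -> 16383
```

The finer the quantization (more bits), the smaller the rounding error added to the signal, which is one reason bit depth matters for dynamic range.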

The ISP in detail

So – what does the ISP do? That depends somewhat on you and your preference for file type. The main operations an ISP performs are manipulations of the "image signals" into the format, with the specific coefficients, that you select.  There is some input buffer memory to allow a high frame capture rate without bottlenecking the data as it moves into the ISP. One of its functions is to perform pixel interpolation to achieve full pixel resolution.

Nikon sensors employ a Bayer-type Color Filter pattern with a "Red / Green / Blue / Green" repeating square pattern.  In a Bayer pattern sensor, ¼ of the pixels are Red, ¼ are Blue, and ½ are Green.  By comparing neighboring pixel measurements, each pixel is then represented as a blend of the pixel in question and components of its nearest neighbors.  This gives you a full-resolution sensor with the color spectrum defined by a bit count. Some other signal adjustments are also made.
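The simplest form of this interpolation is bilinear demosaicing: a missing color at a pixel site is estimated by averaging the nearest neighbors that did measure it. The sketch below estimates green at a red site in an RGGB mosaic (production ISPs use far more sophisticated, edge-aware algorithms, and the sample values here are made up):

```python
# Minimal bilinear demosaic step: estimate the missing green value at a
# red pixel site by averaging its four green neighbors.
# (RGGB Bayer layout assumed; real ISPs use edge-aware algorithms.)
def green_at_red(bayer, r, c):
    return (bayer[r - 1][c] + bayer[r + 1][c] +
            bayer[r][c - 1] + bayer[r][c + 1]) / 4.0

# Tiny synthetic raw mosaic (values are made-up sensor counts).
# In an RGGB layout, even-row / even-column sites such as (2, 2) are red.
mosaic = [[10, 20, 10, 20],
          [30, 40, 30, 40],
          [10, 20, 10, 20],
          [30, 40, 30, 40]]

print(green_at_red(mosaic, 2, 2))   # (30 + 30 + 20 + 20) / 4 = 25.0
```

Repeating this for every missing color at every site is what turns the single-color mosaic into a full-resolution RGB image.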

If you photograph in RAW image format, the output is then quantized into a writeable RAW format and sent to the camera's memory card for storage along with the EXIF data.  If you photograph in *.jpg format, then many other transform functions are performed as well, based on your desired *.jpg in-camera settings, (contrast, saturation, etc.) along with firmware-coded transforms from the camera manufacturer. After these image manipulations are completed, the output file is again written to the camera's memory card.

Digital data exiting the ISP in RAW, *.jpg or other formats is written to ultra-high write speed non-volatile memory cards, (which will have transfer speeds of > 500MB/sec in the year 2022).  All of this takes a lot of power at high pixel counts and high frame rates. Therefore, the ISPs will be fabricated at 7nm to 16nm processing nodes, (Trigate / FinFET architectures use transistors approximately 10,000 times thinner than a human hair) with a balance between low leakage and high performance.
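A quick budget shows why card write speed matters at these resolutions. The sketch below estimates the sustained frame rate a 500MB/sec card could absorb for RAW files; the 2:1 lossless compression ratio is purely an assumption for illustration:

```python
# Sustained frame rate a memory card can absorb, ignoring in-camera buffering.
def sustained_fps(megapixels, bits=14, card_mb_s=500, compression=0.5):
    """Assumes 14-bit RAW and an (assumed) 2:1 lossless compression ratio."""
    frame_mb = megapixels * 1e6 * bits / 8 / 1e6 * compression  # MB per frame
    return card_mb_s / frame_mb

# A ~46 MP camera writing RAW to a 500 MB/s card:
print(f"{sustained_fps(46):.1f} fps sustained")   # roughly 12 fps
```

Burst rates above this level rely on the in-camera buffer, which is exactly why the stacked-sensor DRAM layer described earlier is needed.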

For reference, today’s ISPs are processed at about 28nm technology.  Color depth will most likely remain at 14 bit in 2022, (14 bit means 2 raised to the 14th power = 16,384 tonal levels per color channel).   It will likely not increase more than 1 or 2 bits from today’s levels, because each additional bit doubles the number of levels and increases the data handling requirements.
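The arithmetic behind bit depth is easy to tabulate: each additional bit doubles the number of tonal levels per channel, and the total representable color count is that number cubed across the R, G and B channels:

```python
# Tonal levels per channel (2^bits) and total RGB colors (levels^3).
for bits in (12, 14, 16):
    levels = 2 ** bits
    colors = levels ** 3           # one value per R, G and B channel
    print(f"{bits}-bit: {levels:,} levels/channel, {colors:.2e} colors")
```

At 14 bits that is 16,384 levels per channel, and the jump from 14 to 16 bits quadruples the level count, which is where the extra data-handling cost comes from.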

Power Supply

On a related note, Lithium Ion batteries will employ higher power storage density techniques using more exotic dopants, (small quantities of different elements such as Manganese or other transition metals on the periodic table).  These will enhance the overall charge-holding or charge transfer capability, keep electrodes from being coated with undesirable material that diminishes battery efficiency, and / or enhance the quick-charge characteristics of depleted batteries.  This will increase the number of shots per battery charge by about 25% to 30% over current performance at today's frame rates and data volumes, and will decrease charge times by about the same percentage or even more, (perhaps 35% to 45%).

However, almost all gains in battery shot count between charges will be erased by higher frame rates, larger data transfer volumes, always-on sensors (mirrorless) and the associated higher power requirements.  For clarity: with properly defined battery manufacturing tolerance specifications and a rigorous outgoing quality inspection process, there should be essentially no danger of fire from these enhanced batteries if they are used properly.

Finally, Hydrogen Peroxide-based or similar chemical fuel cell energy storage devices will probably not be available for the next 8 to 10 years.


Lenses

Lenses are the eyes of the camera.  Lens technology will continue to improve as newer low-dispersion and higher refractive index glass formulations are developed.  This has already been realized with the Zeiss announcement of Aluminum Oxynitride lens elements several months ago.

Lenses will get lighter, but this will be offset by more complex / highly corrected optical designs. Fresnel lenses, (lenses with circumferential micro grooves) for one or more of the elements will become more commonplace.

More complex and effective multi-layer optical coatings will be employed to reduce optical losses in the image path.  Perhaps a “metamaterial” lens element will be introduced into lenses.  These metamaterial elements can provide a negative index of refraction, which will significantly enhance the optical properties of the lens cell at very low weight, by means of semiconductor-type 3D sub-micron “features” etched onto a planar glass blank.

Many challenges remain in fabricating apochromatic metamaterial lenses, and the introduction of these semiconductor-processed patterned electromagnetic waveguide elements may not come until a few years after 2022. Of course, there will be less brass employed in the lens cell, and more high-density plastics used to reduce weight.


Displays

Currently the display technology is ~2.3M “dots” for a 3.2 inch (82mm) diagonal display, (W x H: 2.7 x 1.8 inch, 68.6mm x 45.8mm).  In 2022 this same sized display will have approximately 30M dots, or roughly 13 times the density / resolution.

Dot sizes will be down in the 7 to 12 micron range.  This is the maximum “young eye” resolution limit at normal viewing distances. The display will use silicon instead of glass as the backplane.  RGB subpixels will be OLED or μLED, (more likely OLED in 2022).
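Those dot sizes can be checked from the panel geometry: dividing the display area by the dot count and taking the square root gives the approximate dot pitch. A quick sketch using the dimensions quoted above:

```python
import math

# Approximate dot (subpixel) pitch for a rear camera display panel.
def dot_pitch_um(width_mm, height_mm, dots):
    area_um2 = width_mm * height_mm * 1e6   # panel area in square microns
    return math.sqrt(area_um2 / dots)

# 68.6 x 45.8 mm panel with 30 million dots:
print(f"{dot_pitch_um(68.6, 45.8, 30e6):.1f} um")   # ~10.2 um
```

The result lands squarely inside the 7 to 12 micron range, right at the eye's resolution limit.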

Well there you have it.  Will all of the above come true by 2022?  Probably not, but much of it will.  There are many very talented engineers around the world working on commercializing all the above-mentioned technologies. 

Editor's Note: Thank you very much Guy for giving us some further insight into what is going on in the imaging industry from the engineering perspective. Interesting!


Originally written on December 4, 2018

Last updated on December 4, 2018


Steven Schiff (theropod) on March 8, 2019

An unusually informative and well-written article. Thanks!

Dan Victory (Tekkie) on December 31, 2018

A really terrific read. Thank you. It looks like technology will make things easier for novices but more challenging for pros. Upgrade this, tweak that, compatibility with this, dump that........and upgrade that computer while you are at it. A bit like the printing industry I am from. The long and winding technology road. ;) The price of better imaging though.

John D. Roach (jdroach) on December 29, 2018


Very interesting article. Thanks for sharing this with us.

guy eristoff (geristoff) on December 11, 2018

Hi Folks, I have been answering multiple questions / comments individually, but the comments by David Goldstein and Gary Harvey should be addressed in a more open format, as they are general comments that affect all of us. I will attempt to lay out my views, (which I think are relatively well aligned with Dave's and Gary's) on cellular cameras and other ILCs, (DSLRs & Mirrorless).

First - please remember that the D750 mentioned below is a "modern DSLR" with a sensor made by a very reputable team. It is not that old, and the sensor specifications are quite good. In fact I would buy a D750 now for the right price, but am instead planning to hold out and pick up a D760 next year.

Secondly, there are fundamental limitations to cellular cameras. The most critical one from my perspective is the tradeoff of pixel pitch (or resolution) vs dynamic range, constrained by a lens optical stack z-height that conforms to the phone supplier's form factor. For "fair weather" and "sunny 16" photographers, a modern cellular camera with a ~20 MP sensor and 1.45μm pixels will deliver a nice, very usable photograph with good dynamic range. But the same sensor in a low-illumination situation is something very, very different. Sure, with 60 or more FPS, (120, 240, etc.), frame stacking is a viable technology, but the ISP and data buffers will require a lot of battery power. Also, stacking must be done at much higher speeds than 60 FPS to achieve crisp images of even moderately moving subjects. Therefore, for any kind of sub-optimal lighting, the pixel pitch needs to be larger, and hence the resolution lower, to achieve a nice dynamic range in a cellular phone.

Third - from a lens maker's perspective, it is impossible to get the level of aberration reduction from a small cellular lens stack that you can get from a large prime lens for an ILC.

I agree that mirrorless ILCs are a step towards the "minimalist, full feature camera" that many of us, including me, are looking for. But I suggest that FF mirrorless ILCs need to go through one more generational release before they are really ready for prime time. I think the second crop of mirrorless cameras coming out in the next wave, (2021) will be real enablers for folks such as myself. Finally, please remember that the "advanced offerings" I talk about will be commonplace in several years. My personal gear philosophy is to buy the best lenses I can afford, (mainly primes, except at the low and high focal length extremes) and use them "forever". The body can be upgraded every 6 or so years, (2-3 generations) to stay on the technology curve. Cheers, guy

User on December 10, 2018

Thank you for the great article Guy so very informative .. Myself being in the electronic service business for over 40 years really enjoy reading articles like what you have posted. Cheers Kip..

User on December 10, 2018

What a fascinating read! Thanks so much for sharing that...

Ravi Subrahmanyan (nikonzen) on December 10, 2018

Thank you for the great article Guy. It is rare to be able to get such a glimpse into the future.

Gary Harvey (pocketchange) on December 9, 2018

Thank You for this article. Your vision for the evolution of the sensor is interesting, but I stand with Mr. Goldstein. After a number of my colleagues were decent enough to loan me a variety of the newer FX DSLRs, and having compared them to my D750, I'm hard pressed to come up with a need to acquire any of these advanced offerings. Shooting RAW with my old Nikon has better results. Watching the numbers game with sensor technology will be interesting. Results from my iPhone 6s were only improved by adding a gimbal. Hard to believe for this model of Apple iPhone. I'm sure I'll be in the market for a new piece of gear, but that time is a long, long way in the future (maybe in my next life.)

David Goldstein (dagoldst) on December 8, 2018

I think the focus on sensors is actually going to be over very soon. I think computational photography will wind up more and more in the cameras that we use every day, just like what's happening in cell phone cameras from Pixel and iPhone. The ability to generate better images by stacking inside the camera is perfectly feasible, especially with the mirror assembly being removed from cameras today. I think we should think about photography in a different way. Just a different opinion.

Mike Kirtley (mzkirtley) on December 7, 2018

A really interesting article, thanks for posting it. Makes me want to hang on to my current camera for another 5 years until the new technology becomes available :-) One interesting point is using OLED displays. I have a Pure Evoke DAB radio which is about 6-7 years old and the OLED display on that has become completely dim and unreadable. After a web search I found this was a common problem with those radios and was caused by the deterioration of the organic element of the display. At least, that's the only reference to the cause I've found so far. It is possible to replace the display so I'm doing that. However, does anyone know if this a general problem with OLED displays or just the Pure Evoke versions ? If a general problem with OLED there will be a lot of people very upset 6-7 years after buying something with that type of display and it suddenly stops being usable. In the UK they are advertising OLED TVs and I bet they don't say anything about potential screen life !

Phil Harvey (boardhead) on December 6, 2018

Just one comment. You said "It[sic] you photograph in *.jpg format, then many other transform functions are performed as well", but these transformations are also necessary when writing RAW formats because most RAW formats (including CR2 and NEF) contain embedded JPEG-format preview images.

John Hernlund (Tokyo_John) on December 6, 2018

Great post Guy, thanks for taking the time to share it with us! I'm curious to know what is the primary improvement to achieve ~2X gains in noise? Is it better sensor micro-arrays? Also, do you know what is the theoretical limit (i.e., purely photon shot noise) relative to present technology?

Gary Gant (MDGx) on December 6, 2018

This just in... Meta-surface corrects for chromatic aberrations across all kinds of lenses: Many thanks for your excellent in-depth article.

David Hyman (Ninereight) on December 4, 2018

Super article. When will Nikon ever include GPS metadata in their products?