I see that the sensors in all 3 of these cameras are different, with the biggest difference seemingly coming from the D90 to the D7000.
It looks as if the sensor in the D7000 can handle almost 3 times the number of colors that the D90 and D40 can. I just wanted to see if anyone had any idea how this translates to real-world picture taking / image quality.
#2. "RE: d40 vs d90 vs d7000 sensor question" In response to Reply # 0
Port Charlotte, US
I believe you're referring to the 14-bit versus 12-bit color depth of the D7000 (and by the way, it's a whole lot more than 3 times the color). Here's an article that discusses and shows 12-bit and 14-bit images side by side. Google "14-bit color"; there are other excellent articles on it as well.
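To put some numbers on "a whole lot more than 3 times," here's a quick Python sketch of the raw bit math (nothing camera-specific, just counting levels):

```python
# Each extra bit doubles the number of tonal levels per channel,
# so 14-bit gives 4x the levels of 12-bit, not 3x.
levels_12 = 2 ** 12   # 4096 levels per channel
levels_14 = 2 ** 14   # 16384 levels per channel
print(levels_12, levels_14, levels_14 // levels_12)  # 4096 16384 4

# Counting total distinct RGB colors (three channels), the gap
# is much larger: 4x per channel compounds to 64x overall.
print(levels_12 ** 3)  # 68719476736 (~69 billion)
print(levels_14 ** 3)  # 4398046511104 (~4.4 trillion), 64x more
```

So per channel it's 4x the tonal levels, and in terms of total representable colors it's 64x.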
#3. "RE: d40 vs d90 vs d7000 sensor question" In response to Reply # 0
I'm not sure that I can explain this well, or that I should even try, but... First, imagine going in the opposite direction: fewer bits rather than more. Take it to an extreme and imagine that you have only 2 bits to play with. What would your image look like? (You can try this in most software packages.) Not very impressive, is it? So I think we can agree that more bits are better than fewer.
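If you don't have an editing package handy, the 2-bit thought experiment is easy to fake in a few lines of Python with NumPy (a hypothetical sketch, not any camera's actual pipeline):

```python
import numpy as np

# Take a smooth 8-bit gradient (256 distinct levels) and throw away
# everything but the top 2 bits of each value.
gradient = np.arange(256, dtype=np.uint8)   # smooth ramp, 0..255
two_bit = (gradient >> 6) << 6              # keep only the top 2 bits

# The entire ramp collapses to just 4 distinct levels: 0, 64, 128, 192.
print(np.unique(two_bit))
```

The smooth ramp turns into four flat bands, which is exactly the posterized, "not very impressive" look you'd see in an image.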
It is also important to note that the human eye has limited capability to distinguish between differing intensities and colors. Most cameras can capture way more information than the human eye can "see". So, you don't need to display all of the information you have because your viewer can't see it anyway. So why more bits?
Now the question becomes, is there a point at which adding more bits does not improve my image? My answer is that it depends! No surprise there as that seems to be the answer to a lot of questions.
If you are always taking photos of brightly lit scenes, you may never see the difference. If there is always a lot of contrast in your images, you may never see the difference. But suppose you photograph a subject that has very little contrast. A histogram would show a flat line with a narrow spike, and that spike represents all of the information in your image. With low tonal resolution (fewer bits), your entire low-contrast image may be represented by a single value: a spike in the histogram that has no slope or bell curve on its edges.

Now increase the tonal resolution (more bits) and take the same image. Without post-processing it may look the same. But stretch the histogram and you will find that your narrow spike now has some width to it; it may actually show some shape other than just a sharp vertical spike. With the extra bits you have been able to capture the subtle differences between very nearly identical colors or intensities, and by spreading the histogram you can display those subtle differences where the human eye can see them.
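The low-contrast argument above can be sketched numerically. This is an illustrative toy (a made-up scene and idealized quantization, not a real raw pipeline), but it shows the narrow spike gaining width as bits are added:

```python
import numpy as np

# Simulate a very low-contrast scene: all values packed into a
# narrow band around mid-grey.
rng = np.random.default_rng(0)
scene = 0.5 + 0.001 * rng.random(100_000)

# Idealized quantization at 12-bit and 14-bit tonal resolution.
q12 = np.round(scene * (2 ** 12 - 1)) / (2 ** 12 - 1)
q14 = np.round(scene * (2 ** 14 - 1)) / (2 ** 14 - 1)

# The 14-bit version resolves several times more distinct levels
# inside the same narrow band - the histogram spike has more width
# to stretch in post-processing.
print(len(np.unique(q12)))   # only a handful of distinct levels
print(len(np.unique(q14)))   # noticeably more distinct levels
```

Stretching the 12-bit histogram just spreads a few flat bands apart, while the 14-bit version has real gradation to reveal.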
You might ask where this becomes important. I submit that any time you are photographing a subject where the information is concentrated in a very narrow range of values, such as an image of a distant galaxy, this becomes very important.
Perhaps someone can explain this better than I can. If I have made wrong assumptions (very possible) please point them out.
As to the link to the article that showed photos of the egg and golf ball, both images were showing a very wide range from black to white. If you represented all black through medium grey by one bit and used the remaining bits to display the upper half of the histogram, I think you might begin to see the difference between the two sensors.
#4. "RE: d40 vs d90 vs d7000 sensor question" In response to Reply # 3 Tue 21-Jun-11 06:31 PM by Crowndog
If I may chime in. In a perfect world, if we had a perfect lens with a transfer characteristic where "what went in came out," we might have actual 14-bit information depth. This would be important in cases where we play with color temperature and such, editing away at the color saturation and hue levels with our software; in other words, we would need the data in order to explore those realms. However, in the REAL world our lenses are far from perfect, though they are getting better all the time. So we might imagine that, behold, Nikon comes out with an AF-RDCTP (Real Damn Close To Perfect) transfer function. With all that being said, these camera bodies are not just used with our lenses. In medical and laboratory use (not always with lenses, by the way) and (ssshhh, D.O.D.) filtering and analysis software will use this info, trust me. Like pulling information from beneath cloud cover in a photo taken from far, far away. To sum up, there are those who would wish for 16- or 18-bit depth.