I personally agree that larger pixels are usually better.
But, try to convince most D1X owners. They seem to think that the D1X is many times better than the D1 in picture quality.
DPReview and many others called it interpolation.
"The D1X's pixel grid layout is rectangular rather than square (though still uses the Bayer GRGB colour filter array), in camera processing turns the 4028 x 1324 raw pixels (5.33 megapixel) into a 3008 x 1960 pixel image (5.9 megapixel). While it's clear that some interpolation is being carried out in the vertical direction (to get from 1324 rows to 1960 rows) there is also compression in the horizontal direction (reducing from 4028 to 3008 columns), this compression is used to add detail to the vertical data. Nikon argue that because the input and output resolution are almost identical no image degradation will be visible."
Yes, actually, both. The sensor is pixel binned; two photosites are binned together. The resulting data is then interpolated up in one direction and downsampled in the other; probably best to call that "resampled".
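The geometry in the DPReview quote can be illustrated with a toy NumPy sketch. This is just separable linear resampling on random data to show the shapes and pixel counts; the actual in-camera algorithm is Nikon's own and surely more sophisticated:

```python
import numpy as np

def resample_1d(arr, new_len, axis):
    """Linearly resample `arr` to `new_len` samples along `axis`."""
    old_len = arr.shape[axis]
    old_x = np.linspace(0.0, 1.0, old_len)
    new_x = np.linspace(0.0, 1.0, new_len)
    return np.apply_along_axis(lambda v: np.interp(new_x, old_x, v), axis, arr)

# Stand-in for the D1X's raw grid: 1324 rows x 4028 columns of tall pixels.
raw = np.random.rand(1324, 4028)

out = resample_1d(raw, 1960, axis=0)   # stretch vertically: 1324 -> 1960 rows
out = resample_1d(out, 3008, axis=1)   # compress horizontally: 4028 -> 3008 cols

print(raw.shape, "->", out.shape)      # (1324, 4028) -> (1960, 3008)
print(round(raw.size / 1e6, 2),        # ~5.33 MP in
      round(out.size / 1e6, 2))        # ~5.9 MP out
```

Note how the numbers from the quote fall out: 4028 x 1324 is about 5.33 megapixels in, 3008 x 1960 about 5.9 megapixels out.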
> I personally agree that larger pixels are usually better.
>
> But, try to convince most D1X owners. They seem to think that
> the D1X is many times better than the D1 in picture quality.
And they are 100% right - because the photosites of the D1/D1H are NOT bigger than those of the D1X. They are the same size (since the SAME sensor is used in all three cameras) - but they are used (in the D1/D1H) in groups of FOUR to produce a single pixel - and in groups of only TWO (with further interpolation) in the D1X...
> So, if the D1X was so great, why don't they try the same
> thing with the D2X and D3 sensors?
The early generation (D1/D1H) used a cropped (16*24 mm) sensor with 11 million SMALL elements, of which every FOUR were used to produce ONE SINGLE pixel (simply because the brains were not powerful enough to work with every photosite separately).
The next generation (D1X) still used the SAME sensor, BUT the faster brains managed to work with twice as many groups, of only TWO elements each (thus improving the image quality).
A further generation - the D2X/D2Xs - used an improved (but still cropped) sensor with 12 million small elements - and brains fast enough to need no grouping/interpolating at all, producing a pixel from each photosite.
The same-generation D2H, on the contrary, used (on the same cropped field) only 4 million, but LARGE, elements (something won, something lost).
And the current D3 uses... as many as 12 million elements (like the D2X), but large ones (like the D2H) - and on a FULL frame - thus combining the advantages of the D2X and D2H (along with its own benefits).
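The two readout modes described above - four photosites per pixel on the D1/D1H versus two per pixel on the D1X - can be sketched in a few lines of NumPy. This is a toy 8x8 grid with plain summation standing in for charge binning (the real chip does this in analog, and has ~11 million photosites):

```python
import numpy as np

# Toy sensor: an 8 x 8 grid of small photosites with known values.
sensor = np.arange(64, dtype=float).reshape(8, 8)

# D1/D1H-style readout: merge every 2x2 block of photosites into ONE pixel.
binned_4 = sensor.reshape(4, 2, 4, 2).sum(axis=(1, 3))   # -> 4 x 4 pixels

# D1X-style readout: merge only vertical PAIRS, keeping twice the pixel count
# and yielding the rectangular (non-square) grid that then gets resampled.
binned_2 = sensor.reshape(4, 2, 8).sum(axis=1)           # -> 4 x 8 pixels

print(binned_4.shape)   # (4, 4): a quarter of the photosite count
print(binned_2.shape)   # (4, 8): half the count, tall rectangular pixels
```

Same photosites, same light-gathering area per site; only the grouping differs, which is exactly the point of the reply above.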