I have found that the D800 is such a different animal from what has come before that most of the old habits and rules of thumb are worth re-examining to see whether they help or hinder. When cameras had 8-9 stops of DR on a good day, 8 bits per channel of JPG depth was enough to describe the data reasonably well. It isn't with cameras capturing a linear range of 14 stops. Experimenting and finding what works best is going to generate a new generation of rules of thumb, and we are in the initial phases of that now. By understanding what the sensors behind the indicators and displays are actually telling you, it is easier to make adjustments on the fly. For example, all the complaints of "overexposing" on some models, and the rules of thumb to dial in a universal -0.7 EC, suggest people are taking the meter reading as an absolute when it is not. The meter is referenced to an 18% mid tone, and knowing the pattern, biases and resolving power of the metering sensor lets us use the metering data in its proper context: what IT sees, not the total scene. The same misunderstanding applies to AF: the assumption is that the focusing sensor detects targets as we intend, rather than what the sensor can and can't see or resolve. Cameras are so good that we expect them to mimic how we detect focus or tonal range, which they can't, because the sensors and processes are based on different models.
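To put rough numbers on both points, here is a minimal sketch in Python. It assumes a 14-bit linear raw file and the conventional ~18% meter reference; the reflectance figures at the end are illustrative examples, not values from any camera's spec.

```python
import math

# A 14-bit linear raw has 2**14 = 16384 levels. Linear encoding means
# each stop down from clipping holds half the remaining levels: the top
# stop spans 8192 levels, the 14th spans just 1.
raw_bits = 14
for stop in range(1, raw_bits + 1):
    levels = 2 ** (raw_bits - stop)
    print(f"stop {stop:2d} below clipping: {levels:5d} raw levels")

# An 8-bit JPG offers only 256 codes per channel for the entire range,
# so the deep-shadow stops the sensor recorded end up sharing a handful
# of codes -- hence the banding when you try to push them later.

def suggested_ec(subject_reflectance, meter_reference=0.18):
    """Stops of exposure compensation so a metered area of the given
    reflectance renders at its true tone instead of as mid-grey."""
    return math.log2(subject_reflectance / meter_reference)

# The meter tries to render whatever it reads as ~18% grey, so the right
# EC depends on what the metered area actually is, not on a universal -0.7:
print(f"snow (~90% reflectance): {suggested_ec(0.90):+.1f} EV")
print(f"dark foliage (~9%):      {suggested_ec(0.09):+.1f} EV")
```

The point of the second half: a fixed -0.7 EC only happens to be right when the metered area happens to be about 0.7 stops brighter than 18% grey. For snow the same logic calls for positive compensation, which is why treating the reading as an absolute leads people astray.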
We use a lot of processing power in our brains to handle extended depth of field or tonal range, using scanning and memory to create the impression of a composite. Our sense that a scene is razor sharp from the immediate foreground to infinity and everywhere in between is a brain-generated illusion. We scan and sample at various depths and build a composite of what we believe we see. We are not resolving the far distance and the near field with full acuity at the same time; we stack "captures" and use them to build the image we "see". The same goes for wide tonal range. With all our senses, what we are conscious of is heavily processed by our brains from clues gathered by fairly low-res detectors. If we were conscious of the raw information detected by our ears, we would not understand a person talking to us 10 feet away; the information would be lost in a sea of echo and reflected sound. If we "felt" everything that was really touching us, we would be distracted into inaction, but the brain filters out 99.999% of the static information and makes us conscious only of changes in pressure from some of the skin's reporting sensors. The same goes for vision, and for how different cameras and vision are, even though cameras and eyes are pretty good analogies for each other. Eyes and vision are very different things. By thinking about how the camera sees the world, instead of just how we believe it should, we are in a much better position to make appropriate adjustments on the fly without applying inappropriate rules of thumb.