Statistical errors come in two varieties, both rooted in statistical hypothesis testing. We start our discussion with two hypotheses, denoted Ho and Ha. Ho is called the null hypothesis, and Ha is called the alternative hypothesis. Ho is the more fundamental of the two, but only because the common probability measures and significance levels are calculated assuming Ho is true. For example, a Type I error is made when the rejection rule rejects Ho even though Ho is true. The probability of a Type I error is therefore calculated assuming Ho is true, given the rejection rule. More generally, the rejection rule is fashioned to return the desired Type I error rate, and this fashioning can even become self-serving, as we will see.
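The calculation described above can be sketched by simulation. This is a minimal illustration, not anything from the original text: I assume Ho says each observation is N(0, 1), and a two-sided rejection rule at the nominal 5% level on the mean of n = 25 observations. Drawing data under Ho and counting how often the rule fires estimates the Type I error rate.

```python
import random
import statistics

# Assumed setup (mine, for illustration): Ho is N(0, 1) data; reject Ho
# when |sample mean| of n = 25 observations exceeds 1.96 / sqrt(n),
# the usual two-sided z-test cutoff at the nominal 5% level.
random.seed(0)

n, trials = 25, 20_000
crit = 1.96 / n ** 0.5
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]  # data drawn under Ho
    if abs(statistics.fmean(sample)) > crit:         # rejection rule fires
        rejections += 1

alpha_hat = rejections / trials  # estimate of the Type I error rate
print(f"estimated Type I error rate: {alpha_hat:.3f}")
```

The estimate lands near 0.05 precisely because the critical value was fashioned, assuming Ho true, to deliver that rate.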

Nevertheless, we could argue that Ho is no more fundamental than Ha, because in the real world the statistician must worry about the possibility that Ho is false and Ha is true. A Type II error is made when the rejection rule accepts Ho when in fact Ho is false and Ha is true. The calculation of the probability of a Type II error therefore depends on the distributional assumptions that come from Ha, not Ho. In general there are many ways for Ho to be false and only one way for it to be true, so the calculation of the Type II error probability is more complicated and more open-ended. Ha may hold more than one statistical model; it can be a collective that holds all possible alternatives to Ho. There are many more ways to fall into error when our assumed model is wrong than when it is right, and these errors are Type II errors. The Type II error is therefore a slippery slope, because there is no way to elevate a statistical model to a state of statistical purity in advance.
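The open-ended character of the Type II error can be made concrete: its probability changes with which member of Ha actually holds. As an illustrative sketch (the setup is my assumption, not the author's): Ho says the mean is 0, Ha is the collective of all means mu > 0, and the rule is a one-sided z-test at the 5% level with n = 25 and unit variance. One Type II error rate exists per alternative, not one overall.

```python
import random
import statistics

# Assumed setup (mine): Ho is mean 0; Ha is the collective mu > 0.
# One-sided z-test: reject Ho when the sample mean exceeds 1.645 / sqrt(n).
random.seed(0)

n, trials = 25, 10_000
crit = 1.645 / n ** 0.5
betas = []                           # one Type II error rate per alternative
for mu in (0.1, 0.3, 0.5):           # three of the many models inside Ha
    accepts = 0
    for _ in range(trials):
        sample = [random.gauss(mu, 1) for _ in range(n)]  # data under Ha
        if statistics.fmean(sample) <= crit:  # rule accepts Ho: Type II error
            accepts += 1
    betas.append(accepts / trials)
    print(f"mu = {mu}: estimated Type II error = {betas[-1]:.3f}")
```

The estimated error rate falls as mu moves farther from Ho: no single number summarizes the Type II error across the whole collective.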

Now here is the interesting observation: the statistical model (or theory) is better described as someone's pet theory, carrying its own emotional attachment. The proclivity to fall into Type II error is therefore partly emotional, and as such the tendency to err cannot be quantified by probability alone. The scientist loves to calculate the probability of Type I error (i.e., to fashion his rejection rule, which may be self-serving), but the chore of calculating the probability of Type II error may meet resistance. The calibration of fit that happily protects against Type I error (by building the rejection rule) need not be quantitative; the happy calibration may be qualitative and give itself over to hand waving. Only when the topic turns to Type II error is the happiness replaced by possible anxiety. In the worst case, flipping a coin on the side may become the rejection rule with the sought probability of Type I error, though this exercise is unrelated to the merits of Ho.
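The coin-flip pathology above is easy to demonstrate. The sketch below, under my own assumed parameters, rejects Ho whenever a biased coin with P(heads) = 0.05 lands heads, ignoring the data entirely. The rule attains the sought 5% Type I error rate by construction, which shows that hitting the nominal rate says nothing by itself about the merits of Ho.

```python
import random

# A rejection rule that never looks at the data: reject Ho on heads,
# where the coin is biased so that P(heads) = 0.05 (my assumed level).
random.seed(1)

trials = 50_000
rejections = sum(random.random() < 0.05 for _ in range(trials))
alpha_hat = rejections / trials  # near 0.05 regardless of any evidence
print(f"coin-flip rule Type I error: {alpha_hat:.3f}")
```

The same coin delivers the same rate whether Ho is true or wildly false, so its Type II error against any alternative is a fixed 0.95 of acceptances: protection against Type I error has been purchased with no discrimination at all.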

What better example of the emotional attachment to theory than the Darwinian belief in evolution by natural selection? The afflicted are adept at fitting the fossil record to their pet theory. The Type I error of rejecting the theory, when it is assumed true, is almost always evaluated by way of rationalizations that show the effectiveness of a hypothetical cumulative selection in its navigation of the fitness landscape, ending in adapted life in all its complex forms. But nowhere is the Type II error of accepting the Darwinian model, when it is false, evaluated with such zeal. Only away from the reach of scientism, among those eager to study intelligent design, is the possibility considered.

The tendency to look at Type I error, and only Type I error, becomes one-sided ideology as it departs further from the considerations of statistical hypothesis testing. Statistics has always had room to consider Type II error, but scientism has none. In the extreme case, scientism is reduced to flipping a coin on the side and never looking critically at the fit to real offerings of evidence.

http://www.youtube.com/watch?v=js8YE7uZFUY