Informative hypotheses: How to move beyond classical null hypothesis testing
Almost all researchers in psychology have specific expectations about their theories in the form of hypothesized order constraints between statistical parameters, for example: the mean of group 1 is larger than the mean of group 2, which in turn is larger than the mean of group 3. We call this an informative hypothesis, because it contains information about the ordering of the means. Many researchers use traditional null-hypothesis significance testing (NHST) to evaluate informative hypotheses, or they use model selection tools such as the AIC and BIC to select the best model. Few, however, acknowledge that complex informative hypotheses cannot be properly evaluated by means of NHST or by standard model selection tools. In this project we demonstrate what goes wrong from a statistical and philosophical point of view, and we offer innovative solutions to these problems based on Bayesian model selection, a parametric bootstrap method in Mplus, and a revision of some model selection criteria (e.g. the prior predictive DIC). The methods are introduced in non-statistical terms, and their utility is illustrated by applying them to examples from psychology.
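To make the idea concrete, the following is a minimal sketch of how an order-constrained (informative) hypothesis can be evaluated with a Bayes factor. It uses synthetic data, a normal approximation to the posterior of each group mean, and the encompassing-prior approach (the Bayes factor as the posterior proportion of draws satisfying the constraint divided by the prior proportion); this is one common way to evaluate an order constraint, not necessarily the exact method used in the project:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data for three groups (hypothetical example)
data = {
    "g1": rng.normal(5.0, 2.0, 40),
    "g2": rng.normal(4.0, 2.0, 40),
    "g3": rng.normal(3.0, 2.0, 40),
}

n_draws = 100_000
# Approximate the posterior of each group mean as
# Normal(sample mean, standard error) -- a rough normal approximation
post = {
    g: rng.normal(x.mean(), x.std(ddof=1) / np.sqrt(len(x)), n_draws)
    for g, x in data.items()
}

# Posterior proportion of draws satisfying the order constraint m1 > m2 > m3
f_post = np.mean((post["g1"] > post["g2"]) & (post["g2"] > post["g3"]))

# Under an exchangeable encompassing prior, each of the 3! = 6 orderings
# of the three means is equally likely a priori
f_prior = 1 / 6

# Bayes factor of the informative hypothesis against the encompassing model
bayes_factor = f_post / f_prior
print(f"P(m1 > m2 > m3 | data) = {f_post:.3f}, Bayes factor = {bayes_factor:.2f}")
```

A Bayes factor well above 1 indicates that the data support the hypothesized ordering relative to the unconstrained model, which is the kind of direct evidential statement that NHST cannot provide for such a hypothesis.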