Over a decade of research on informative hypotheses has resulted in a range of methodological, tutorial, and applied papers, books (e.g., Hoijtink, 2012), and several software packages (see informativehypotheses.sites.uu.nl). Informative hypotheses with simple order constraints, such as Hi: µ1 > µ2 > µ3 or Hi: µ1 > (µ2, µ3), have proven their use: not only do these hypotheses represent researchers' expectations better, they also have more statistical power than classical hypotheses (Vanbrabant et al., 2015). The Bayes Factor (BF) is a Bayesian measure of support used for model selection (Kass and Raftery, 1995). The BF can also be used to evaluate informative hypotheses: it quantifies the support in the data for an informative hypothesis (Hi) relative to the unconstrained alternative (Hu). As was shown by Klugkist et al. (2005), BF_iu = f_i / c_i, where f_i denotes the fit and c_i the complexity of the hypothesis Hi. The fit f_i is a measure of agreement between the data and Hi: the posterior probability of the hypothesis given the data. It can only be calculated after the data are observed. The complexity c_i is the proportion of the parameter space in agreement with the hypothesis by chance, that is, the a priori probability of the hypothesis. Complexity can be calculated before conducting the analysis.
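To make the notion of complexity concrete, the sketch below estimates c_i by Monte Carlo for the two example hypotheses from the text: it draws means from a symmetric (exchangeable) prior and counts the proportion of draws that satisfy the constraints by chance. This is an illustrative sketch, not the paper's implementation; the standard-normal prior and the sample size are assumptions, and the hypothetical fit value used to form a BF is made up for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def complexity_mc(constraint, n_means, n_draws=200_000):
    """Monte Carlo estimate of complexity c_i: the prior probability
    that draws from an exchangeable prior satisfy the inequality
    constraints of H_i. Under exchangeability any continuous,
    identical prior works; standard normals are used for simplicity."""
    draws = rng.standard_normal((n_draws, n_means))
    return constraint(draws).mean()

# H_i : mu1 > mu2 > mu3 -- a full ordering of 3 means holds by
# chance in 1 of 3! orderings, so the analytic complexity is 1/6.
c_order = complexity_mc(
    lambda m: (m[:, 0] > m[:, 1]) & (m[:, 1] > m[:, 2]), n_means=3)

# H_i : mu1 > (mu2, mu3) -- mu1 is the largest of 3 means with
# prior probability 1/3.
c_max = complexity_mc(
    lambda m: (m[:, 0] > m[:, 1]) & (m[:, 0] > m[:, 2]), n_means=3)

print(f"c(mu1>mu2>mu3)   ~ {c_order:.3f}")   # close to 1/6
print(f"c(mu1>(mu2,mu3)) ~ {c_max:.3f}")     # close to 1/3

# With a (hypothetical, made-up) posterior fit f_i = 0.5, the
# Bayes Factor against the unconstrained hypothesis would be:
f_hypothetical = 0.5
print(f"BF_iu ~ {f_hypothetical / c_order:.2f}")
```

Note that the complexity depends only on the prior and the constraints, which is why it can be computed before any data are collected; the fit, by contrast, requires the posterior.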

Zondervan-Zwijnenburg, M. A. J., Van de Schoot, R., & Johnson, A. R. (2017). Computing complexity for the Bayes Factor in inequality constrained hypotheses. doi: 10.17605/OSF.IO/5YT3J