Over a decade of research on informative hypotheses has produced numerous methodological, tutorial, and applied papers, books (e.g., Hoijtink, 2012), and several software packages (see informativehypotheses.sites.uu.nl). Informative hypotheses with simple order constraints, such as Hi: µ1 > µ2 > µ3 or Hi: µ1 > (µ2, µ3), have proven their use: not only do these hypotheses represent researchers' expectations better than classical hypotheses, they also have more statistical power (Vanbrabant et al., 2015).

The Bayes factor (BF) is a Bayesian measure of support used for model selection (Kass and Raftery, 1995). The BF can also be used to evaluate informative hypotheses: it quantifies the support in the data for an informative hypothesis (Hi) relative to the unconstrained alternative (Hu). As shown by Klugkist et al. (2005),

BF(Hi, Hu) = fi / ci,

where fi denotes the fit of hypothesis Hi and ci denotes its complexity. The fit fi is a measure of agreement between the data and Hi: the posterior probability that the constraints of Hi hold, given the data. Consequently, fi can only be calculated after the data are observed. The complexity ci is the proportion of the parameter space in agreement with the hypothesis by chance: the a priori probability that the constraints hold. Complexity can therefore be calculated before conducting the analysis.
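To illustrate the complexity term, a minimal Monte Carlo sketch is given below (the function name and the exchangeable standard-normal prior are illustrative assumptions, not the white paper's actual procedure): draws are taken from a prior that treats the means exchangeably, and ci is estimated as the proportion of draws satisfying the constraints. For Hi: µ1 > µ2 > µ3 this should recover ci = 1/3! = 1/6, and for Hi: µ1 > (µ2, µ3) it should recover ci = 1/3.

```python
import numpy as np

rng = np.random.default_rng(2024)

def complexity_mc(constraint, n_means, n_draws=1_000_000):
    """Estimate the complexity c_i of an inequality-constrained hypothesis
    by Monte Carlo: draw means from a vague, exchangeable prior (standard
    normal here, an illustrative choice) and return the proportion of
    draws that satisfy the constraint."""
    mu = rng.standard_normal((n_draws, n_means))
    return constraint(mu).mean()

# Hi: mu1 > mu2 > mu3 -- under exchangeability every ordering of three
# means is equally likely a priori, so c_i = 1/3! = 1/6.
c_order = complexity_mc(
    lambda m: (m[:, 0] > m[:, 1]) & (m[:, 1] > m[:, 2]), n_means=3)

# Hi: mu1 > (mu2, mu3) -- mu1 is the largest of three, so c_i = 1/3.
c_max = complexity_mc(
    lambda m: (m[:, 0] > m[:, 1]) & (m[:, 0] > m[:, 2]), n_means=3)
```

With the fit fi estimated analogously from posterior draws, BF(Hi, Hu) is then simply fi / ci.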

Zondervan-Zwijnenburg, M. A. J., van de Schoot, R., & Johnson, A. R. (2017, April 19). Complexity [Computing complexity for the Bayes Factor in inequality constrained hypotheses] (White paper). http://dx.doi.org/10.17605/OSF.IO/5YT3J

Mariëlle Zondervan-Zwijnenburg (PhD Student)

In her PhD project, Mariëlle focuses on including prior knowledge in statistical analyses (informative Bayesian research) and on confronting prior knowledge with new data.
