Tag: Informative hypotheses

Bayesian evaluation of informative hypotheses in SEM using Mplus: A black bear story

Half in jest, we use a story about a black bear to illustrate that there are discrepancies between the formal use of the p-value and the way it is often used in practice. We argue that more can be learned from data by evaluating informative hypotheses than by testing the traditional null hypothesis.

Bayesian Evaluation of Inequality-Constrained Hypotheses in SEM Models using Mplus

Researchers in the behavioral and social sciences often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model, resulting in an informative hypothesis. The questions they would like answered are “Is the hypothesis correct?” and “Is the hypothesis incorrect?”

A prior predictive loss function for the evaluation of inequality constrained hypotheses

In many types of statistical modeling, inequality constraints are imposed on the parameters of interest. As we will show in this paper, the DIC (the posterior Deviance Information Criterion, proposed as a Bayesian model selection tool by Spiegelhalter, Best, Carlin, & van der Linde, 2002) fails when comparing inequality-constrained hypotheses.

An introduction to Bayesian model selection for evaluating informative hypotheses

Most researchers have specific expectations concerning their research questions. These may be derived from theory, empirical evidence, or both. Yet despite these expectations, most investigators still use null hypothesis testing to evaluate their data; that is, when analysing their data they ignore the expectations they have.

Directly evaluating expectations or testing the null hypothesis? Null hypothesis testing versus Bayesian model selection

Researchers in psychology have specific expectations about their theories. These are called informative hypotheses because they contain information about reality. Note that these hypotheses are not necessarily the same as the traditional null and alternative hypotheses.
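To make the idea concrete, here is a minimal sketch of how an inequality-constrained informative hypothesis (e.g., H1: mu1 > mu2 > mu3 for three group means) can be evaluated with a Monte Carlo Bayes factor against an unconstrained encompassing model — the Bayes factor is the proportion of posterior draws satisfying the constraint (fit) divided by the proportion of prior draws satisfying it (complexity). This is an illustration, not the Mplus/SEM workflow of the papers above; the data, prior settings, and sample sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: three groups whose true means follow the ordering
# mu1 > mu2 > mu3, with known unit residual variance for simplicity.
n = 50
groups = [rng.normal(loc, 1.0, n) for loc in (0.6, 0.3, 0.0)]

# Encompassing (unconstrained) model: independent means with a vague
# N(0, 10^2) prior; with known data variance the posterior is conjugate normal.
prior_sd = 10.0
n_draws = 100_000
post_cols = []
for y in groups:
    post_var = 1.0 / (1.0 / prior_sd**2 + n / 1.0)
    post_mean = post_var * y.sum()            # prior mean is 0, so it drops out
    post_cols.append(rng.normal(post_mean, np.sqrt(post_var), n_draws))
post = np.column_stack(post_cols)             # draws from the posterior
prior = rng.normal(0.0, prior_sd, size=(n_draws, 3))  # draws from the prior

def prop_ordered(draws):
    """Proportion of draws satisfying the constraint mu1 > mu2 > mu3."""
    return np.mean((draws[:, 0] > draws[:, 1]) & (draws[:, 1] > draws[:, 2]))

f = prop_ordered(post)    # "fit": posterior support for the ordering
c = prop_ordered(prior)   # "complexity": prior support (about 1/6 here)
BF_1u = f / c             # Bayes factor of H1 against the encompassing model
```

Because the vague prior treats all 3! orderings as equally likely, c is roughly 1/6, so a BF_1u well above 1 indicates that the data support the hypothesized ordering over the unconstrained alternative.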

Testing informative hypotheses in SEM increases power: An illustration contrasting classical hypothesis testing with a parametric bootstrap approach

In the present paper, a parametric bootstrap procedure, as described by van de Schoot, Hoijtink, and Deković (2010), is applied to demonstrate that a direct test of an informative hypothesis offers more informative results than testing traditional null hypotheses against catch-all rivals.