Dealing with imperfect elicitation results
We provide an overview of the solutions we used for dealing with imperfect elicitation results, so that others can benefit from our experience. We present information about the nature of our project, the reasons for the imperfect results, and how we resolved these, supported by annotated R syntax.
Choice of Distance Measure Influences the Detection of Prior-Data Conflict
The present paper contrasts two related criteria for the evaluation of prior-data conflict: the Data Agreement Criterion (DAC; Bousquet, 2008) and the criterion of Nott et al. (2016). We investigate how the choice of a specific distance measure influences the detection of prior-data conflict.
Development and Evaluation of a Digital Expert Elicitation Method
The paper describes an elicitation procedure that opens up many opportunities to investigate tacit differences between “subjective” teacher judgments and “objective” data, such as test results, and to improve the diagnostic competence of teachers.
Testing Small Variance Priors Using Prior-Posterior Predictive P-values
Muthén and Asparouhov (2012) propose to evaluate model fit in structural equation models based on approximate (using small variance priors) instead of exact equality of (combinations of) parameters to zero. This is an important development that adequately addresses Cohen’s (1994) “The earth is round (p < .05)”, which stresses that point null hypotheses are so precise that small and irrelevant differences from the null hypothesis may lead to their rejection.
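To illustrate the idea of approximate equality (a minimal sketch, not taken from the paper), the R snippet below draws from a hypothetical small-variance prior N(0, 0.01) for a parameter that is held approximately, rather than exactly, equal to zero:

```r
# Minimal sketch (not the authors' code): an "approximately zero" constraint
# expressed as a small-variance normal prior, here N(0, 0.01).
set.seed(1)
prior_sd <- 0.1                       # hypothetical choice; variance = 0.01
theta    <- rnorm(1e5, 0, prior_sd)   # draws from the small-variance prior

# Nearly all prior mass lies in a narrow band around zero, so small and
# irrelevant deviations from exact zero remain compatible with the prior.
quantile(theta, c(0.025, 0.975))
mean(abs(theta) < 0.2)
```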
Using the Data Agreement Criterion to Rank Experts’ Beliefs
We evaluate priors based on expert knowledge by extending an existing prior-data (dis)agreement measure, the Data Agreement Criterion, and compare this approach with using Bayes factors to assess prior specification.
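As a rough illustration of the underlying idea (a sketch, not code from the paper), the R example below computes the DAC for normal expert prior, benchmark prior, and posterior distributions. All numerical settings are hypothetical, and the formulation DAC = KL(posterior || expert prior) / KL(posterior || benchmark prior), with values above 1 indicating prior-data conflict, is assumed.

```r
# Minimal sketch: Data Agreement Criterion (DAC; Bousquet, 2008) for normal
# distributions, assuming DAC = KL(posterior || expert prior) /
# KL(posterior || benchmark prior); DAC > 1 signals prior-data conflict.

# Kullback-Leibler divergence KL( N(m1, s1^2) || N(m2, s2^2) )
kl_normal <- function(m1, s1, m2, s2) {
  log(s2 / s1) + (s1^2 + (m1 - m2)^2) / (2 * s2^2) - 0.5
}

dac <- function(post_m, post_s, expert_m, expert_s, bench_m, bench_s) {
  kl_normal(post_m, post_s, expert_m, expert_s) /
    kl_normal(post_m, post_s, bench_m, bench_s)
}

# Hypothetical example: posterior N(0.4, 0.1^2), expert prior N(0.8, 0.2^2),
# vague benchmark prior N(0, 10^2)
dac(0.4, 0.1, 0.8, 0.2, 0, 10)
```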
Bayes with Informed Priors Based on Literature and Expert Elicitation
Bayesian Trajectory Analysis with Informed Priors Based on a Systematic Literature Search and Expert Elicitation in the Field of Post-Traumatic Stress.
A Five-Step Method to Elicit Expert Judgment
Proposal for a Five-Step Method to Elicit Expert Judgment
Bayesian PTSD-Trajectory Analysis with Informed Priors
We illustrate how to obtain background information from previous literature in the field of PTSD, based on a systematic literature search, and from expert knowledge. Finally, we show how to translate this knowledge into prior distributions and how to run a Bayesian LGMM.
Applying guidelines to construct informative priors in small sample research
The current paper demonstrates the usefulness of Bayesian estimation with small samples. In Bayesian estimation, prior information can be included …
First EDDA meeting
The very first meeting of the EDDA Working Group is organized on June 26, 2017. The aim of the Working Group Meeting is to bring together researchers working in the field of Experts Data (Dis)Agreement and to share information, learn about new developments, and discuss…
A Systematic Review of Bayesian Papers in Psychology: The Last 25 Years
Although the statistical tools most often used by researchers in the field of psychology over the last 25 years are based on frequentist statistics, it is often claimed that the alternative Bayesian approach to statistics is gaining in popularity.
The GRoLTS-Checklist: Guidelines for Reporting on Latent Trajectory Studies
Estimating models within the mixture model framework, such as latent growth mixture modeling (LGMM) or latent class growth analysis (LCGA), involves making various decisions throughout the estimation process. This has led to wide variation in how the results of latent trajectory analyses are reported.