First Bayesian Inference: RShiny

Developed by Sonja Winter (version 1), Lion Behrens (version 2) and Rens van de Schoot

This Shiny App is designed to ease its users' first contact with Bayesian statistical inference. By "pointing and clicking", the user can analyze the IQ example used in the gentle introduction to Bayesian inference of van de Schoot et al. (2014).

We are continuously improving the tutorials, so let us know if you discover mistakes or if you have additional resources we can refer to. The source code is available via GitHub. If you want to be the first to be informed about updates, follow me on Twitter.

 

New in Version 2

With many thanks to Lion and Sonja, an updated version has been uploaded to our OSF page. New features include:

  • No need to upload data anymore; data generation is done within the app. The user can specify the population values based on the IQ example, and data will be generated following the restrictions of the IQ scale.
  • Estimation can be done with sigma known (analytic results) or unknown (using MCMC sampling in JAGS).
  • More summary statistics are provided.
  • Error messages have been translated into understandable language (if you encounter any unclear message, let us know!).

The App

Click on the preview to open the interface!

Step 1: choose a type of distribution (e.g., uniform, truncated normal) for the prior and fill in values for the hyperparameters.

Step 2: generate data following the IQ-example.

Step 3: let the software (analytically or via sampling using RJags) generate the posterior distribution.
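When sigma is treated as known, the analytic posterior in Step 3 follows the standard normal-normal conjugate update: posterior precision is the sum of prior precision and data precision, and the posterior mean is the precision-weighted average of prior mean and sample mean. A minimal sketch of that calculation (in Python with illustrative values; the app itself runs in R, and these settings are assumptions, not the app's code):

```python
import random
import statistics

random.seed(2013)

# Hypothetical settings mirroring the IQ example (not the app's actual code)
sigma = 15.0                 # assumed-known standard deviation of IQ
mu0, tau0_sq = 100.0, 10.0   # prior mean and prior variance
n = 22

# Step 2: generate data following the IQ example
y = [random.gauss(100, sigma) for _ in range(n)]
ybar = statistics.mean(y)

# Step 3 (analytic): conjugate normal-normal update for the mean
prior_prec = 1 / tau0_sq         # precision = 1 / variance
data_prec = n / sigma**2
post_var = 1 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * mu0 + data_prec * ybar)

print(f"posterior mean = {post_mean:.2f}, posterior variance = {post_var:.2f}")
```

Note that the posterior mean always falls between the prior mean and the sample mean, and the posterior variance is smaller than both the prior variance and the sampling variance of the mean.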

Exercise

The goal of the exercise is to play around with data and priors to see how these influence the posterior.

a. Pretend you know nothing about IQ except that it cannot be smaller than 40 and that values larger than 180 are impossible. Which prior will you choose?

b. Generate data for 22 individuals and run the Bayesian model (default option). Copy-paste your model specifications and plot to the table attached below.

c. Change the prior to a distribution that would make more sense for IQ: we know it cannot be smaller than 40 or larger than 180 AND it is expected to be normally distributed around 100 (= prior mean). But how sure are you? Try values for the prior variance of 10 and 1. Notice that the prior becomes more peaked the smaller your prior variance (higher certainty/precision). Run the two models and write down the results. How would you describe the relation between your level of uncertainty and the posterior variance?
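The pattern you should see in exercise 'c' can be checked directly from the conjugate update. A hedged sketch (in Python; the sample mean of 102 is an illustrative value, not output from the app):

```python
# How the prior variance drives the posterior (sigma treated as known)
sigma, n, ybar = 15.0, 22, 102.0   # assumed-known sd, sample size, illustrative sample mean

def posterior(mu0, tau0_sq):
    """Normal-normal conjugate update: returns (posterior mean, posterior variance)."""
    prior_prec, data_prec = 1 / tau0_sq, n / sigma**2
    post_var = 1 / (prior_prec + data_prec)
    return post_var * (prior_prec * mu0 + data_prec * ybar), post_var

for tau0_sq in (10.0, 1.0):
    mean, var = posterior(100.0, tau0_sq)
    print(f"prior variance {tau0_sq:>4}: posterior mean {mean:.2f}, variance {var:.3f}")
```

The smaller prior variance (higher certainty) yields a smaller posterior variance and pulls the posterior mean closer to the prior mean of 100.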

d. Now, re-run the model with a larger sample size (n=100). Copy-paste the results. How are the current results different from the results under ‘c’?
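The sample-size effect in exercise 'd' can be sketched the same way: more data means higher data precision, so the likelihood dominates the prior. Again a rough illustration with assumed values, not app output:

```python
# Effect of sample size on the posterior (sigma treated as known)
sigma, mu0, tau0_sq, ybar = 15.0, 100.0, 10.0, 102.0  # illustrative values

for n in (22, 100):
    prior_prec, data_prec = 1 / tau0_sq, n / sigma**2
    post_var = 1 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * mu0 + data_prec * ybar)
    print(f"n = {n:>3}: posterior mean {post_mean:.2f}, variance {post_var:.3f}")
```

With n = 100 the posterior variance shrinks and the posterior mean moves closer to the sample mean; the prior has less influence.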

e. Repeat steps 'c' and 'd' but now for a different prior mean (assuming your prior knowledge conflicts with the data; let's say 90, and use n=22). Copy-paste your results. How do the new results differ from the results with a 'correct' prior mean?

f. What happens if your prior mean is really far away from the data, let's say 70 with n=22? Note that this situation is really extreme; in practice the prior is often much closer to the data.

g. So far, we assumed the variance of IQ to be known, which is not realistic. Re-run models 'e' and 'f' using the option 'Run with Sigma Unknown'. In the background the software JAGS estimates the mean of IQ and its variance using MCMC sampling. The posterior is no longer obtained analytically but is approximated. As a result, the posterior distribution looks wobbly, and every time you hit the 'run' button it will look a bit different (give it a try). Can you describe the difference between the previous results under 'f' and the current results? If you run the model again but with n=50, the posterior is again a compromise between the prior and the data. Can you explain?
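To give a feel for what JAGS does under the hood when sigma is unknown, here is a minimal Gibbs sampler (a sketch in Python, under assumed priors: a normal prior on the mean and an inverse-gamma prior on the variance; this is not the app's exact JAGS model):

```python
import random
import statistics

random.seed(1)

# Simulated IQ data (illustrative; the app generates data for you)
n = 22
y = [random.gauss(100, 15) for _ in range(n)]
ybar = statistics.mean(y)

# Assumed priors (not the app's exact specification):
# mu ~ Normal(mu0, tau0_sq), sigma^2 ~ Inverse-Gamma(a0, b0)
mu0, tau0_sq = 90.0, 10.0
a0, b0 = 2.0, 225.0

mu, sigma2 = ybar, statistics.variance(y)  # starting values
draws = []
for it in range(5000):
    # Draw mu | sigma^2, y : conjugate normal update
    prior_prec, data_prec = 1 / tau0_sq, n / sigma2
    v = 1 / (prior_prec + data_prec)
    m = v * (prior_prec * mu0 + data_prec * ybar)
    mu = random.gauss(m, v ** 0.5)
    # Draw sigma^2 | mu, y : inverse-gamma update (sampled as 1 / Gamma)
    a_post = a0 + n / 2
    b_post = b0 + 0.5 * sum((yi - mu) ** 2 for yi in y)
    sigma2 = 1 / random.gammavariate(a_post, 1 / b_post)
    if it >= 1000:            # discard burn-in
        draws.append(mu)

print(f"posterior mean of mu ≈ {statistics.mean(draws):.2f}")
```

Because the posterior is approximated by random draws, a histogram of `draws` looks wobbly and changes slightly with every run, exactly as you see in the app; the approximated posterior mean still lands between the prior mean and the sample mean.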

 

Table: Comparing your Bayesian models

Reference

 

Winter, S. D., Behrens, L., & van de Schoot, R. (2018, June 27). First Bayesian Inference Shiny App [version 2.0]. Retrieved from osf.io/vg6bw

van de Schoot, R., Kaplan, D., Denissen, J., Asendorpf, J. B., Neyer, F. J., & van Aken, M. A. G. (2014). A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research. Child Development, 85, 842–860. doi:10.1111/cdev.12169