Solved – Use of prior and posterior predictive distributions

I understand the prior and posterior distributions and I have read what the prior and posterior predictive distributions are.

However, I don't really see the point of knowing them.

Knowing more never hurts, but I'd like to understand why these two distributions in particular are worth knowing.

Some uses of the posterior predictive:

  • Simulating future data based on your model assumptions and data observed to this point. This is useful for predictions, forecasting, etc.
  • Model checking via posterior predictive checking. Some comments have directed you to Bayesian Data Analysis, and its author has made a relevant chapter available. Tim's answer to this question should also prove helpful.
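Both uses above can be sketched in a few lines. Below is a minimal, hypothetical example with a conjugate Beta-Binomial model (the data, the Beta(1, 1) prior, and the tail-area test statistic are all illustrative choices, not prescribed by the answer): draw parameters from the posterior, simulate replicated data from those draws, and see where the observed data fall among the replicates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 7 successes in 20 Bernoulli trials
n, s = 20, 7

# Beta(1, 1) prior -> conjugate posterior Beta(1 + s, 1 + n - s)
a_post, b_post = 1 + s, 1 + n - s

# Posterior predictive: draw theta from the posterior,
# then simulate a replicated dataset (here, a success count) per draw
theta = rng.beta(a_post, b_post, size=10_000)
y_rep = rng.binomial(n, theta)

# Posterior predictive check: where does the observed count sit
# among the replicates? Extreme tail probabilities flag misfit.
p_value = np.mean(y_rep >= s)
print(f"observed: {s}/{n}, posterior predictive p-value ~ {p_value:.2f}")
```

The same `y_rep` draws serve double duty: they are forecasts of future data, and their comparison to the observed data is the posterior predictive check.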

I have less to offer on the prior predictive. I've found it useful as a check on my combined priors: it summarizes, in terms of expected data, what your prior assumptions jointly imply.

In a similar vein, some view it as a tool to arrive at informative priors. Consider this correspondence shared on Andrew Gelman's blog:

I don’t ever see parameters. Some models have few and some have hundreds. Instead, I see data. So I don’t know how to have an opinion on parameters themselves. Rather I think it far more natural to have opinions on the behavior of models. The prior predictive density is a good and sensible notion.

A further post continues:

The goal is to use the “black box” of the prior predictive density and the prior conditional density (the conditional in particular since you can look at model behaviour in a dynamic, scenario based setting) to inform us about how the informative priors should be constrained.

Put another way, if you're struggling to set prior parameters, you may find it sensible to examine those parameters' consequences on expected data. Doing so requires the prior predictive.
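As a concrete sketch of that workflow, suppose you are modeling adult heights in cm as Normal(mu, sigma) and are unsure how to set the priors. The model, the candidate priors, and the plausibility thresholds below are all hypothetical, chosen only to illustrate the idea: simulate data from the priors alone and ask whether the implied data look remotely sensible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model for adult heights (cm): y ~ Normal(mu, sigma)
# Candidate priors: mu ~ Normal(170, 20), sigma ~ Exponential(mean 10)
mu = rng.normal(170, 20, size=10_000)
sigma = rng.exponential(10, size=10_000)

# Prior predictive: one simulated observation per joint prior draw
y_sim = rng.normal(mu, sigma)

# Summarize the implied data: how much mass lands on absurd heights?
lo, hi = np.percentile(y_sim, [1, 99])
frac_absurd = np.mean((y_sim < 100) | (y_sim > 250))
print(f"1%-99% prior predictive interval: [{lo:.0f}, {hi:.0f}] cm")
print(f"fraction of implausible heights: {frac_absurd:.3f}")
```

If the simulated heights routinely fall below 100 cm or above 250 cm, the priors are putting weight on data you don't believe in, and you would tighten them and repeat, which is exactly the "constrain the informative priors via the prior predictive" idea from the quoted correspondence.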
