Or what conditions guarantee that?
In general (and not only for normal and binomial models), I suppose the main reason this claim can fail is an inconsistency between the sampling model and the prior, but what else?
I'm just starting with this topic, so I would really appreciate simple examples.
Best Answer
Since the posterior and prior variances of $\theta$ satisfy the law of total variance (with $X$ denoting the sample) $$\text{var}(\theta) = \mathbb{E}[\text{var}(\theta\mid X)]+\text{var}(\mathbb{E}[\theta\mid X])\,,$$ assuming all quantities exist, you can expect the posterior variance to be smaller on average (in $X$). This is in particular the case when the posterior variance is constant in $X$. But, as shown by the other answer, there may be realisations of the posterior variance that are larger, since the result only holds in expectation.
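Since the thread does not fix a model, here is a minimal Monte Carlo sketch of this decomposition for a beta-binomial pair (my choice of conjugate example, not from the answer), where the posterior variance is available in closed form: draw $\theta$ from the prior, draw $X$ given $\theta$, and compare the two sides of the identity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative conjugate pair: theta ~ Beta(a, b), X | theta ~ Binomial(n, theta).
a, b, n = 2.0, 3.0, 10
N = 1_000_000

theta = rng.beta(a, b, size=N)   # draws from the prior
x = rng.binomial(n, theta)       # one binomial sample per theta

# Conjugacy: theta | X = x  ~  Beta(a + x, b + n - x), so both posterior
# moments are available in closed form for every simulated x.
pa, pb = a + x, b + n - x
post_mean = pa / (pa + pb)
post_var = pa * pb / ((pa + pb) ** 2 * (pa + pb + 1))

prior_var = a * b / ((a + b) ** 2 * (a + b + 1))

print("var(theta)                  :", prior_var)
print("E[var(th|X)] + var(E[th|X]) :", post_var.mean() + post_mean.var())
print("E[var(theta|X)]             :", post_var.mean())  # smaller on average
```

The second printed line should match the prior variance up to Monte Carlo error, and the third should come out strictly smaller; in this binomial case every single realisation of the posterior variance is below the prior variance, in line with the quote below.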
To quote from Andrew Gelman:

> We consider this in chapter 2 in Bayesian Data Analysis, I think in a couple of the homework problems. The short answer is that, in expectation, the posterior variance decreases as you get more information, but, depending on the model, in particular cases the variance can increase. For some models such as the normal and binomial, the posterior variance can only decrease. But consider the $t$ model with low degrees of freedom (which can be interpreted as a mixture of normals with common mean and different variances). If you observe an extreme value, that’s evidence that the variance is high, and indeed your posterior variance can go up.
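To make the quoted $t$ behaviour concrete, here is a grid-approximation sketch; the $\mathcal{N}(0,1)$ prior, the $t_3$ likelihood, and the observed values $x=1$ and $x=8$ are my own illustrative choices, not from the quote.

```python
import numpy as np
from scipy.stats import norm, t

# Illustrative setup: theta ~ N(0, 1) (prior variance 1), X | theta ~ t_3 centred at theta.
grid = np.linspace(-15, 25, 40_001)
prior = norm.pdf(grid)

def posterior_variance(x, df=3):
    """Posterior variance of theta after one observation x, by grid approximation."""
    w = prior * t.pdf(x - grid, df)  # unnormalized posterior density on the grid
    w /= w.sum()                     # normalize (the grid spacing cancels)
    mean = (grid * w).sum()
    return ((grid - mean) ** 2 * w).sum()

print("moderate x = 1:", posterior_variance(1.0))  # below the prior variance of 1
print("extreme  x = 8:", posterior_variance(8.0))  # above 1: the variance went up
```

With the heavy-tailed $t_3$ likelihood an extreme observation is largely explained away as an outlier, yet it tilts and stretches the posterior enough that its variance exceeds the prior's; with a normal likelihood and the same normal prior, the posterior variance would instead be the same constant (below the prior variance) for every $x$.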
Similar Posts:
- Solved – Different notation for Bayes’ prior and posterior distributions
- Solved – Can a posterior expectation be used as an approximation for the true (prior) expectation
- Solved – Is the mean (Bayesian) posterior estimate of $\theta$ a (Frequentist) unbiased estimator of $\theta$
- Solved – Variance of Bayesian posterior