Solved – How/Why does resampling from “any” distribution lead to a normal distribution

I was running some Monte Carlo simulations on historical data, and irrespective of the distribution of the data I would always get a normal distribution when resampling with replacement. That made it easy for me to state, with 95% confidence, an interval for the expected value of that 'variable'.

So far so good, and so cool! No matter what the historical distribution of the variable looked like, resampling and estimating the future probability of occurrence always seemed to follow a normal distribution. Now, the normal distribution is not so normal in practice. So what is the phenomenon that leads to a normal distribution? Is there a mathematical proof for it? I'm sure it has something to do with the central limit theorem, but I'm quite baffled and intrigued by the beauty of producing a normal distribution when resampling with replacement.

I may be incorrect, but is this true in general? Irrespective of my historical distribution (beta, Poisson, binomial, etc.), I keep getting a normal distribution on resampling. Any help with the mathematics underpinning this phenomenon would be appreciated.
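As an illustration of what the question seems to describe (an assumption on my part: that each resample is being summarized by its mean), here is a minimal sketch using only the standard library. The "historical" data are hypothetical exponential draws, chosen because they are strongly skewed, yet the bootstrap means come out symmetric and bell-shaped:

```python
import random
import statistics

random.seed(0)

# Hypothetical skewed "historical" data: exponential draws, far from normal.
data = [random.expovariate(1.0) for _ in range(1000)]

# Resample with replacement many times, recording each resample's MEAN.
boot_means = [
    statistics.mean(random.choices(data, k=len(data)))
    for _ in range(2000)
]

# The means cluster tightly and symmetrically around the data's own mean,
# even though the data itself is heavily right-skewed.
print(round(statistics.mean(data), 3))
print(round(statistics.mean(boot_means), 3))
print(round(statistics.stdev(boot_means), 3))
```

Plotting a histogram of `boot_means` would show the familiar bell curve; the spread of the bootstrap means is roughly the data's standard deviation divided by the square root of the sample size.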

The normal distribution comes up as the approximate distribution for averages and weighted averages; that is the central limit theorem at work. Note the distinction: if you have a large sample from some distribution, resampling individual values with replacement from that sample should simply give back (approximately) the original distribution. So if you did not start with a normal distribution, you shouldn't be getting one back. What does look normal is the distribution of a summary statistic such as the mean of each resample: the average of many independent draws is approximately normal whatever the parent distribution, which is presumably what your Monte Carlo procedure is recording.
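The answer's point about raw resampling can be checked directly. A sketch (again with illustrative exponential data): a single large resample with replacement keeps the original sample's shape, including its right skew, rather than turning normal:

```python
import random
import statistics

random.seed(1)

# Skewed original sample (illustrative exponential data).
data = [random.expovariate(1.0) for _ in range(5000)]

# One large resample WITH replacement, keeping the individual values
# (no averaging) -- this just samples from the empirical distribution.
resample = random.choices(data, k=5000)

# The resample mirrors the original: similar mean and, crucially, the
# same right skew (mean well above median), so it is not normal.
print(round(statistics.mean(data), 3), round(statistics.mean(resample), 3))
print(round(statistics.median(data), 3), round(statistics.median(resample), 3))
```

Only when each resample is collapsed to an average does the central limit theorem take over and produce the bell shape described in the question.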
