I was running some Monte Carlo simulations on historical data, and irrespective of the distribution of the data I would always get a normal distribution, owing to resampling with replacement. That made it easy for me to predict, with 95% confidence, what the expected value of that variable would be.

So far so good, and so cool! No matter what the historical distribution of the variable looked like, resampling and estimating the future probability of occurrence always seemed to produce a normal distribution. Yet the normal distribution is not actually so common in practice. So what is the phenomenon that leads to a normal distribution here? Is there a mathematical proof of it? I'm sure it has something to do with the central limit theorem, but I'm baffled and intrigued by the beauty of producing a normal distribution when resampling with replacement.

I may be incorrect, but is this true in general? Irrespective of my historical distribution (beta, Poisson, binomial, etc.), I keep getting a normal distribution on resampling. Any help on the mathematics underpinning this phenomenon would be appreciated.


#### Best Answer

The normal distribution comes up as the approximate distribution of averages and weighted averages. If you have a large sample from some distribution, sampling with replacement from that sample should only give back (approximately) the original distribution. So if you did not start with a normal distribution, you shouldn't be getting one back. What *will* look approximately normal, by the central limit theorem, is the distribution of the means of your resamples, which is likely what you are plotting.
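A minimal sketch of the distinction, using a hypothetical skewed "historical" sample (exponential here purely for illustration): resampling individual values with replacement reproduces the skewed original, while the means of many resamples cluster into an approximately normal shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-normal "historical" data: an exponential sample.
data = rng.exponential(scale=1.0, size=10_000)

# Resampling individual values with replacement just gives back
# (approximately) the original skewed distribution.
resample = rng.choice(data, size=10_000, replace=True)

# But the *means* of repeated resamples are approximately normal:
# this is the central limit theorem at work.
boot_means = np.array([
    rng.choice(data, size=10_000, replace=True).mean()
    for _ in range(2_000)
])

def skewness(x):
    """Sample skewness: ~2 for exponential data, ~0 for normal data."""
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

print("skew of one resample:", skewness(resample))   # strongly skewed
print("skew of boot means:  ", skewness(boot_means)) # near zero
```

The bootstrap means are also tightly centered on the sample mean, which is why a 95% confidence statement about the expected value works even though the underlying data are far from normal.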

### Similar Posts:

- Solved – what is the effect of bootstrap resampling in bagging algorithm (ensemble learning)
- Solved – Do we draw samples with or without replacement when we state the central limit theorem
- Solved – Can bootstrap resampling be used to calculate a confidence interval for the variance of a dataset
- Solved – How to calculate sample size so I can be confident that the sample mean approximates the population mean