# For linear regression, what's the distribution of the error term from classical and Bayesian points of view?

I know that linear regression is based on the assumption that the errors are normally distributed (from both the classical and Bayesian views). I'm trying to verify this assumption based on the final model.

Assume I've got three normal random variables x1, x2, x3. I can regress x1 linearly on x2 and x3 and get a linear regression model of the form:

x1 = b0 + b1*x2 + b2*x3 + e.

Reorganizing, e = x1 – b0 – b1*x2 – b2*x3.

Here, if I estimate the bi's using the least squares method, the bi's are fixed numbers, so 'e' is a linear combination of normally distributed variables and is therefore itself normally distributed.
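As a quick sanity check of the classical case, here is a minimal simulation sketch (using NumPy/SciPy, with made-up coefficients b0=1, b1=2, b2=-1.5 chosen purely for illustration) that generates normal x2, x3 and a normal error, fits the model by least squares, and inspects the residuals:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000

# Simulate the setup: normal predictors and a normal error term
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)
e_true = rng.normal(scale=0.5, size=n)
x1 = 1.0 + 2.0 * x2 - 1.5 * x3 + e_true  # illustrative b0, b1, b2

# Least-squares fit of x1 on [1, x2, x3]
X = np.column_stack([np.ones(n), x2, x3])
b, *_ = np.linalg.lstsq(X, x1, rcond=None)
resid = x1 - X @ b

# With an intercept in the model, OLS residuals sum to (numerically) zero
print(abs(resid.mean()))

# Shapiro-Wilk p-value for the residuals; large values are consistent
# with normality (as expected here, since the true error is normal)
stat, p = stats.shapiro(resid)
print(p)
```

The residuals are a fixed linear transformation of the normal data, so a normality test on them should typically not reject.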

But if I estimate the bi's using a Bayesian method and assume the bi's also follow normal distributions, then 'e' is no longer a linear combination of normal random variables. Effectively, it's a sum of a normally distributed variable (b0) and product-normally distributed variables (b1*x2, b2*x3), and the product-normal distribution is not normal in general (http://math.stackexchange.com/questions/101062/is-the-product-of-two-gaussian-random-variables-also-a-gaussian).
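The non-normality of a product of normals is easy to see by simulation. This sketch (assuming the simplest case: two independent standard normals, a stand-in for a normal coefficient draw times a normal predictor) shows the product has far heavier tails than a normal distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5000  # Shapiro-Wilk supports samples up to 5000

b1 = rng.normal(size=n)  # stand-in for a normal posterior draw of b1
x2 = rng.normal(size=n)  # a normal predictor
prod = b1 * x2

# Excess kurtosis of a normal is 0; for a product of two independent
# standard normals it is 6, so the sample value lands far from 0
print(stats.kurtosis(prod))

# Shapiro-Wilk strongly rejects normality for the product
stat, p = stats.shapiro(prod)
print(p)
```

The product distribution also has a sharp peak (a logarithmic singularity) at zero, which is another visible departure from normality on a histogram or Q-Q plot.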

Is there anything improper in the above reasoning? How else can I try to validate the normality assumption for the error term?
