Does the absence of an endogeneity problem mean the absence of heteroscedasticity?
For example, if an estimator is designed to correct for endogeneity, does this mean that heteroscedasticity will be corrected for, too?
Best Answer
No, not at all. Endogeneity is a first-moment problem, while heteroskedasticity is a second-moment problem.
For example, consider a linear model
$$ y_i = x_i\beta + u_i $$ An assumption like $$E(u_i|x_i)=0$$ implies no correlation of error term and regressor, so no endogeneity. No (conditional) heteroskedasticity in turn can be written as $$\operatorname{Var}(u_i|x_i)=\sigma^2,$$ where $\sigma^2$ is a constant.
There is no reason whatsoever that $$E(u_i|x_i)=0$$ would imply $$\operatorname{Var}(u_i|x_i)=\sigma^2.$$
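To make the distinction concrete, here is a minimal simulation sketch in Python/NumPy (not from the original answer; the specific choice $\operatorname{Var}(u_i|x_i)=x_i^2$ and the variable names are illustrative assumptions). It generates data where $E(u_i|x_i)=0$ holds, so OLS is unbiased, while the conditional error variance clearly grows with $x_i$:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility
n = 100_000

# Regressor
x = rng.uniform(1, 5, size=n)

# Error term: E(u | x) = 0 (no endogeneity), but Var(u | x) = x^2
# (conditional heteroskedasticity) -- an illustrative assumption.
u = rng.normal(loc=0.0, scale=x)

beta = 2.0
y = x * beta + u

# OLS slope estimate (no intercept, matching y_i = x_i * beta + u_i)
beta_hat = (x @ y) / (x @ x)
print(f"OLS estimate of beta: {beta_hat:.3f}")  # close to 2: no endogeneity bias

# Error variance differs across the range of x despite E(u | x) = 0
low, high = x < x.mean(), x >= x.mean()
print(f"Var(u | low x):  {u[low].var():.2f}")
print(f"Var(u | high x): {u[high].var():.2f}")  # clearly larger: heteroskedastic
```

The point estimate remains consistent here because the first-moment condition holds; only the usual homoskedastic standard errors would be wrong.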