Solved – Understanding condition index used for finding multicollinearity

In linear models, in my book, the condition index is defined as $\sqrt{\frac{\lambda_{max}}{\lambda_{min}}}$, where $\lambda_{max}$ is the maximum eigenvalue of $ZZ^*$, i.e., the correlation matrix of the independent variables, and $\lambda_{min}$ is the minimum eigenvalue.

I could not figure out why the condition index helps detect multicollinearity, so I've come up with my own explanation:

In principal components analysis, each eigenvalue gives the variance of the data along its eigenvector. Thus, a large ratio of the maximum to the minimum eigenvalue means the data can be explained by fewer eigenvectors than there are independent variables, which indicates collinearity among the independent variables.
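As a quick numerical illustration of this idea (not from the original post; made-up data with numpy), the condition index stays near 1 for independent predictors and jumps once a near-copy of a predictor is added:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.01 * rng.normal(size=200)   # nearly a copy of x1

def condition_index(X):
    # correlation matrix of the predictors (columns of X)
    R = np.corrcoef(X, rowvar=False)
    eig = np.linalg.eigvalsh(R)          # eigenvalues in ascending order
    return np.sqrt(eig[-1] / eig[0])

print(condition_index(np.column_stack([x1, x2])))      # near 1: no collinearity
print(condition_index(np.column_stack([x1, x2, x3])))  # large: x3 is almost x1
```

A common rule of thumb is that condition indices above roughly 30 signal serious multicollinearity; the near-duplicate predictor here pushes the index well past that.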

Given these, here are my questions.

  1. Is my own explanation correct?

  2. What's a better explanation? (Please understand that I do not have strong linear algebra knowledge. So, please explain in very easy words.)

Your thinking is basically correct.

Let $Z$ be an $M \times N$ matrix, i.e., $N$ observations of $M$ random variables (or features).

A condition number that "equals infinity" implies that one of the $M$ variables can be written as the same weighted sum of the other $(M-1)$ variables in every one of the $N$ observations. That defines exact multicollinearity.

Appendix: $\lambda_{min} = 0$ implies that there exists a nonzero (unit-norm) eigenvector $q$ such that

$$ZZ^Tq = \lambda_{min}q = 0$$

$$\Rightarrow q^TZZ^Tq = \lambda_{min}q^Tq = \lambda_{min}\cdot 1 = 0.$$

Since $0 = q^T ZZ^T q = \|Z^T q\|^2 \geq 0$,

$$Z^T q = 0$$

which implies that the nullspace of $Z^T$ is non-trivial, i.e., the rows of $Z$ (the variables) are linearly dependent.

I realize this doesn't address the case when $0 < \lambda_{min} \ll \lambda_{max}$, i.e. approximate multicollinearity.
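The appendix argument can be checked numerically (a sketch on made-up data, not from the original answer): make one row of $Z$ an exact linear combination of the others, take the eigenvector of $ZZ^T$ belonging to the smallest eigenvalue, and confirm it lies in the nullspace of $Z^T$:

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.normal(size=(2, 50))              # 2 variable rows, 50 observation columns
Z = np.vstack([Z, 2 * Z[0] - Z[1]])       # third variable: exact linear combination

w, V = np.linalg.eigh(Z @ Z.T)            # eigenvalues in ascending order
q = V[:, 0]                               # eigenvector for the smallest eigenvalue
print(w[0] / w[-1])                       # ~ 0: lambda_min vanishes, condition number blows up
print(np.linalg.norm(Z.T @ q))            # ~ 0: q lies in the nullspace of Z^T
```

Both quantities are zero up to floating-point error, matching the derivation: $\lambda_{min} = 0$ exactly when some weighting $q$ of the variables is zero in every observation.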
