Solved – How to compute variance explained by a PC after oblique rotation in PCA

Let's say my PCA extracted 2 components, which explain 80% of the variance before rotation. The components were then rotated using an oblique (Direct Oblimin) rotation, so SPSS cannot report what percentage of variance each component explains. When I plot the graph, editors usually require these percentages in brackets after the component names, so I want to compute the relative percentage of variance for each component. Say the data look like this:


Before rotation:
PC1 accounts for 60% of the variance, eigenvalue 6.000;
PC2 accounts for 20% of the variance, eigenvalue 2.000;
Total: 80% of the variance, eigenvalue 8.000.

After rotation:
PC1: eigenvalue 5.000;
PC2: eigenvalue 4.000.

Can I compute these percentages as follows?
I expect that the total variance explained by both components doesn't change after rotation, so it should still be 80%. Am I right?

In the unrotated solution, you can compute the variance explained by component 2 this way: eigenvalue of component 2 / total eigenvalue × total percentage explained = 2/8 × 80% = 20%.
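This unrotated calculation can be sketched as a few lines of Python, using the numbers from the example above:

```python
# Unrotated case: a component's percentage is its share of the total
# extracted eigenvalue, scaled by the total variance explained.
ev_pc1, ev_pc2 = 6.0, 2.0      # eigenvalues from the question
total_ev = ev_pc1 + ev_pc2     # 8.0
total_pct = 80.0               # total variance explained before rotation

pct_pc2 = ev_pc2 / total_ev * total_pct
print(pct_pc2)  # 20.0
```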

We can also see that an eigenvalue of 1.000 corresponds to 10% of the variance.

Can I use this relation (eigenvalue 1.000 = 10%) to compute the variance explained by each component after rotation, so that PC1 explains 50% after rotation (eigenvalue 5, so 50%) and PC2 explains 40% (eigenvalue 4, so 40%)? Of course, in this case we cannot compute the total as PC1 + PC2, because the components are correlated; the total should presumably still be 80%.
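The proposed rescaling is just a multiplication; a minimal sketch, using the rotated eigenvalues from the question:

```python
# Rotated "eigenvalues" (SPSS: rotation sums of squared loadings).
rot_ss = {"PC1": 5.0, "PC2": 4.0}
pct_per_unit = 10.0  # from the unrotated solution: eigenvalue 1.000 = 10%

rot_pct = {pc: ss * pct_per_unit for pc, ss in rot_ss.items()}
print(rot_pct)  # {'PC1': 50.0, 'PC2': 40.0}

# Note: 50% + 40% > 80% because correlated components share variance,
# so the per-component percentages cannot simply be summed.
```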

I don't think it's that easy, otherwise SPSS would give me these numbers. So can I somehow compute, from the data above the line, what percentage of variance PC1 explains after oblique rotation?


I think you're on solid ground. Another useful thing to do is to call up a loading plot ('/plot rotation'), then to reanalyze using varimax rotation and again ask for a loading plot. But I hope you're analyzing objective data and not opinion data: using PCA on the latter is well-known to be a mistake, because it treats as part of an underlying dimension some information that should be treated as unique to particular variables, or even as measurement error.
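For completeness, one common way to obtain eigenvalue-like quantities after an oblique rotation is to take column sums of squared structure loadings, where the structure matrix is the pattern matrix post-multiplied by the factor correlation matrix. The pattern matrix `P` and correlation matrix `Phi` below are made-up illustrative numbers, not taken from the question:

```python
import numpy as np

# Hypothetical pattern matrix (4 variables x 2 components) and
# component correlation matrix from an oblique rotation.
P = np.array([[0.8, 0.1],
              [0.7, 0.0],
              [0.1, 0.9],
              [0.0, 0.8]])
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])

S = P @ Phi                       # structure matrix
ss_loadings = (S ** 2).sum(axis=0)  # one eigenvalue-like value per component
print(ss_loadings)
```

Because the components are correlated, these column sums of squares overlap and, as above, do not add up to the total variance explained.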
