Solved – Statistical significance test between 2 partial correlations

Does anyone know of an appropriate analysis for testing for a significant difference between two partial correlations? For example, say there are independent variables X and Z and a dependent variable Y. How might I test for a significant difference between the relationship of X with Y partialling out Z and that of Z with Y partialling out X? Could I just use the same formula as when comparing dependent correlations, plugging in the partial correlation values? The formula I have found for comparing dependent correlations is as follows:
$$T_{difference} = (r_{xy} - r_{zy})\frac{\sqrt{(n-3)(1+r_{xz})}}{\sqrt{2(1-r^2_{xy}-r^2_{zy}-r^2_{xz}+2r_{xy}r_{zy}r_{xz})}}$$

There are several calculators online that can perform this task for you. I would recommend the one by Preacher's group out of Vanderbilt. Note that the Steiger (1980) reference listed on the site suggests applying a Fisher z-transformation before calculating the difference for small samples, which differs from the equation above. The reason for this transformation is that correlations are not normally distributed statistics; as a result, the $T_{diff}$ approach above can inflate the Type I error rate when $N$ is smallish.
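As a concrete illustration, the $T_{diff}$ statistic can be computed directly from the three zero-order correlations. This is a minimal Python sketch; the correlation values and sample size are invented for demonstration, and the result would be referred to a t distribution with $n-3$ degrees of freedom:

```python
from math import sqrt

def t_diff(r_xy, r_zy, r_xz, n):
    """t statistic for comparing two dependent correlations that share y.

    Note: for small samples, Steiger (1980) suggests z-transforming the
    correlations first rather than using this raw-correlation formula.
    """
    # |R| is the determinant of the 3x3 correlation matrix
    det = 1 - r_xy**2 - r_zy**2 - r_xz**2 + 2 * r_xy * r_zy * r_xz
    return (r_xy - r_zy) * sqrt((n - 3) * (1 + r_xz)) / sqrt(2 * det)

# Hypothetical inputs: r_xy = .50, r_zy = .30, r_xz = .20, n = 103
print(round(t_diff(0.50, 0.30, 0.20, 103), 3))
```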


After some additional searching, it does not appear to be common to calculate the significance of partial correlations in the way described in the OP. I did find one reference in which the authors (Willis, Dodd, & Palermo, 2013) used Williams' $T_2$ to do so, however. The formula for the Williams' test statistic (slightly different from the one above) is:

$$T_2=(r_{xy} - r_{zy})\frac{\sqrt{(n-1)(1+r_{xz})}}{\sqrt{2\left(\frac{n-1}{n-3}\right)|R|+\bar{r}^2(1-r_{xz})^3}}$$


$$|R| = 1-r^2_{xy}-r^2_{zy}-r^2_{xz}+2r_{xy}r_{zy}r_{xz}$$


$$\bar{r} = \frac{1}{2}(r_{xy} + r_{zy})$$
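For reference, here is a minimal Python sketch of Williams' $T_2$, assembled from the three formulas above. The input correlations and $n$ are again invented for illustration; the statistic is typically referred to a t distribution with $n-3$ degrees of freedom:

```python
from math import sqrt

def williams_t2(r_xy, r_zy, r_xz, n):
    """Williams' T2 for comparing r_xy and r_zy, which share variable y."""
    # |R|: determinant of the 3x3 correlation matrix
    det_r = 1 - r_xy**2 - r_zy**2 - r_xz**2 + 2 * r_xy * r_zy * r_xz
    r_bar = (r_xy + r_zy) / 2
    num = (r_xy - r_zy) * sqrt((n - 1) * (1 + r_xz))
    den = sqrt(2 * ((n - 1) / (n - 3)) * det_r + r_bar**2 * (1 - r_xz)**3)
    return num / den

# Same hypothetical inputs as before: r_xy = .50, r_zy = .30, r_xz = .20, n = 103
print(round(williams_t2(0.50, 0.30, 0.20, 103), 3))
```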

It is important to note, of course, that based on the OP it is not really the difference between $r_{xy}$ and $r_{zy}$ that is of interest, but rather between $r_{xy.z}$ and $r_{zy.x}$. As I know of no methodological paper supporting the use of partial correlations in place of zero-order correlations in the $T_2$ formula, I am hard-pressed to advocate its usage here (even if other authors have published results doing so in a refereed journal).

Instead, I am more inclined to suggest an approach along the lines of the one @Jeremy Miles suggested in the comments above. Structural equation models can address this question in a fairly straightforward manner. For instance, using lavaan syntax in R, one could set up two regression models, one with the paths x –> y and z –> y constrained to be equal, and one in which the paths are freely estimated:

    library(lavaan)

    # Constrained model: both paths share the label 'a', forcing them to be equal
    mod1 <- ' y ~ a*x + a*z '
    fit1 <- sem(mod1, data = your.DF)

    # Free model: distinct labels 'a' and 'b' allow the paths to differ
    mod2 <- ' y ~ a*x + b*z '
    fit2 <- sem(mod2, data = your.DF)

    # Chi-square difference test between the nested models
    anova(fit1, fit2)

If the constrained model fits no worse than the unconstrained model, the chi-square difference test returned by the anova() function will be non-significant. Practically speaking, this means that the two paths do not differ from one another. (Side note: this chi-square difference test should be the same as the model chi-square for model 1. If the model becomes more complex, this basic approach can still be used; however, the chi-square difference test and the model 1 chi-square statistic may no longer be equivalent.)

In order to appropriately test the null hypothesis $\rho_{xy.z}=\rho_{zy.x}$, you will just need to make sure that x and z are on the same scale (standardized is probably the most obvious choice) prior to running the models above.
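The standardization step itself is just z-scoring each predictor (in R, scale() does this). A small Python equivalent, using the sample standard deviation with $n-1$ in the denominator to match R's sd():

```python
from statistics import mean, stdev

def standardize(values):
    """z-score a vector: subtract the mean, divide by the sample SD (n - 1)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

z = standardize([2.0, 4.0, 6.0, 8.0])
print(round(mean(z), 10), round(stdev(z), 10))  # mean 0, SD 1
```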

Steiger, J. H. (1980). Tests for comparing elements of a correlation matrix. Psychological Bulletin, 87, 245-251.

Willis, M. L., Dodd, H. F., & Palermo, R. (2013). The relationship between anxiety and social judgments of approachability and trustworthiness. PLOS One, 8(10).
