# Solved – glm in R – is the p-value for the Intercept important? Which p-value represents the goodness of fit of the entire model?

I am running GLMs (generalised linear models) in R. I thought I understood p-values – until I saw that calling `summary()` on a glm does not give you an overall p-value for the model as a whole – at least not in the place where linear models report one.

I am wondering if this is given as the p-value for the Intercept, at the top of the table of coefficients. So in the following example, while Wind.speed..knots. and canopy_density may be significant in the model, how do we know whether the model itself is significant? How do I know whether to trust these values? Am I right to think that the Pr(>|z|) for (Intercept) represents the significance of the model? Is this model significant, folks? Thanks!

```
Call: glm(formula = Empetrum_bin ~ Wind.speed..knots. + canopy_density,
    family = binomial, data = CAIRNGORM)

Deviance Residuals:
     Min        1Q    Median        3Q       Max
 -1.2327   -0.7167   -0.4302   -0.1855    2.3194

Coefficients:
                   Estimate Std. Error z value Pr(>|z|)
(Intercept)          1.8226     1.2030   1.515   0.1298
Wind.speed..knots.  -0.5791     0.2628  -2.203   0.0276 *
canopy_density      -2.5733     1.1346  -2.268   0.0233 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 59.598  on 58  degrees of freedom
Residual deviance: 50.611  on 56  degrees of freedom
  (1 observation deleted due to missingness)
AIC: 56.611
```

The deviance is −2 times the log-likelihood. To compare two nested models you take the difference in their deviances and compare it to a chi-square distribution with degrees of freedom equal to the difference in the number of parameters between the models. If you get confused about whether to use R's `qchisq` or `pchisq`, you can always check that plugging in 3.84 and 1 df delivers an answer of 0.95.

```r
> pchisq(3.84, 1)
[1] 0.9499565
```
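For reference, `qchisq` is the inverse of `pchisq`: handing it the probability gives back the critical value, so the same sanity check works in reverse.

```r
# qchisq inverts pchisq: the 0.95 quantile of a chi-square distribution
# with 1 degree of freedom recovers the familiar 3.84 critical value.
qchisq(0.95, df = 1)
# [1] 3.841459
```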

So your models are compared with:

```r
> pchisq(59.598 - 50.611, 2)
[1] 0.9888186
```

So in the tortured mishmash of Fisherian and Neyman-Pearson methodologies that is the currently dominant way of teaching conventional frequentist statistics, the p-value is 1 − "significance", and it tests whether adding the two covariates produces a statistically significant improvement over the intercept-only model:

```r
> 1 - pchisq(59.598 - 50.611, 2)
[1] 0.01118144
```
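The same likelihood-ratio test can be run with `anova()` instead of computing it by hand. A minimal self-contained sketch – using simulated data with invented variable names, since the CAIRNGORM data set isn't available here:

```r
# Simulated stand-in for the original data -- names are hypothetical.
set.seed(1)
n  <- 59
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(0.5 - 0.6 * x1 - 0.8 * x2))

null_fit <- glm(y ~ 1,       family = binomial)  # intercept only
full_fit <- glm(y ~ x1 + x2, family = binomial)  # two covariates

# anova() with test = "Chisq" performs the deviance comparison directly
anova(null_fit, full_fit, test = "Chisq")

# ...and it matches the manual calculation:
1 - pchisq(null_fit$deviance - full_fit$deviance, df = 2)
```

Either way you get the likelihood-ratio p-value for the model as a whole, which is what `summary.glm` does not print.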

The p-value of the Intercept is basically meaningless in most situations. The Intercept is the estimated log-odds of an event when all covariates are zero, so the reported p-value tests whether the proportion of events in that "base category" is 50%. (I have no idea why Simon O'Hanlon says this model comparison is "not significant". The statement that "the model goes through zero" is gibberish in any situation.) Asking whether a single model in and of itself is significant is likewise meaningless: you need to adopt a perspective of comparing models. The p-value I derived above is the result of comparing a model with no covariates to one with two covariates.
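To see why the Intercept's test concerns a 50% base rate: with the logit link, an intercept of 0 on the log-odds scale corresponds to a probability of 0.5, and the fitted intercept maps to the predicted probability when all covariates are zero. A quick check using the estimate from the summary above:

```r
# plogis() is the inverse logit. An intercept of 0 on the log-odds
# scale corresponds to a 50% event probability...
plogis(0)
# [1] 0.5

# ...so the Intercept's z-test (estimate 1.8226, p = 0.1298) asks
# whether the probability at zero wind speed and zero canopy density
# differs from 0.5 -- rarely the question of interest.
plogis(1.8226)   # about 0.86
```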
