Use of likelihood ratio test/ANOVA for significance testing

I've read that, to determine whether a variable of interest is statistically significant, you should perform a likelihood ratio test comparing two models (one with and one without the predictor), rather than rely on the p-values for individual coefficient estimates reported by the summary() function for a linear model.

I've also read that this is only necessary when the model includes factors with more than two levels.

I am trying to verify the second statement, but I have been unable to determine whether LRT/ANOVA is necessary for models with factors containing only two levels.

You can test the nested models using either Wald or likelihood ratio testing. Wald would be the standard way to go with a linear model. The reduced model only has the continuous predictor, and then the full model has the continuous predictor plus the others. Your null is that the other predictors do not influence the outcome, and the alternative is that they do influence the outcome.
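To make the nested-model comparison concrete, here is a pure-Python sketch (no libraries) that fits both models by ordinary least squares and compares their error sums of squares. The data and the helper names `solve` and `sse` are invented for illustration; a two-level factor enters the full model as a single dummy column.

```python
def solve(A, b):
    """Solve the small linear system A x = b by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivoting
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def sse(X, y):
    """Residual sum of squares of an OLS fit via the normal equations."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)] for i in range(p)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    beta = solve(XtX, Xty)
    return sum((y[r] - sum(X[r][j] * beta[j] for j in range(p))) ** 2 for r in range(n))

# Made-up toy data: one continuous predictor plus a two-level factor (dummy-coded).
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
g = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
y = [1.1, 1.9, 3.2, 5.0, 6.1, 6.9]

X_reduced = [[1.0, xi] for xi in x]                   # intercept + continuous predictor
X_full = [[1.0, xi, gi] for xi, gi in zip(x, g)]      # ... plus the dummy

sse_reduced, sse_full = sse(X_reduced, y), sse(X_full, y)
print(sse_full <= sse_reduced)  # adding a predictor never increases the SSE
```

The printed comparison illustrates the point below: the full model's SSE can never exceed the reduced model's, which is exactly why a formal test is needed to decide whether the improvement is more than noise.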

Wald and likelihood ratio methods test these hypotheses in somewhat different ways but more-or-less aim to justify the inclusion of additional predictors. The fit never decreases when you add predictors, but is the increase in fit worth the added complexity?

Wald compares a ratio of squared errors to an $$F$$-distribution (sound familiar from ANOVA?), while the likelihood ratio test compares twice the log of the ratio of likelihoods to a $$\chi^2$$ distribution. I'm going from memory and might have missed some details, but these should look somewhat familiar.

**Wald test**

$$\dfrac{(SSE_{reduced}-SSE_{full})/(p_{full}-p_{reduced})}{SSE_{full}/(n-p_{full})} \sim F_{p_{full}-p_{reduced},\, n-p_{full}}$$
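With invented numbers (these SSEs, n, and parameter counts are hypothetical, not from the question), the statistic is simple arithmetic:

```python
# Hypothetical values: reduced model = intercept + continuous predictor (2 parameters),
# full model adds two more terms (4 parameters), fit to n = 50 observations.
sse_reduced, sse_full = 120.0, 100.0
n, p_reduced, p_full = 50, 2, 4

num = (sse_reduced - sse_full) / (p_full - p_reduced)  # improvement per extra parameter
den = sse_full / (n - p_full)                          # error variance of the full model
f_stat = num / den
print(round(f_stat, 3))  # 4.6, compared against an F with (2, 46) degrees of freedom
```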

**Likelihood-ratio test**

$$2\,[LLik_{full} - LLik_{reduced}] \sim \chi^2_{p_{full}-p_{reduced}}$$

where the degrees of freedom equal the difference in parameter counts between the nested full and reduced models.
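And a minimal numeric sketch of the likelihood-ratio version. The log-likelihoods below are invented; with a 2-parameter difference, the chi-square survival function has the closed form exp(-x/2), so the standard library suffices for this particular case:

```python
import math

ll_full, ll_reduced = -180.5, -184.7  # hypothetical maximized log-likelihoods
df = 2                                # difference in parameter counts

lr_stat = 2 * (ll_full - ll_reduced)
p_value = math.exp(-lr_stat / 2)  # chi-square(2) survival function
print(round(lr_stat, 1), round(p_value, 3))
```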
