Solved – Equivalence of AIC and p-values in model selection

In a comment on the answer to this question, it was stated that using AIC for model selection is equivalent to using a p-value threshold of 0.154.

I tried it in R, using a "backward" subset-selection algorithm to drop variables from a full specification: first, by sequentially removing the variable with the highest p-value and stopping once all p-values are below 0.154; second, by removing the variable whose deletion gives the lowest AIC, until no further improvement is possible.
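The two stopping rules described above can be sketched abstractly. This is a minimal illustration, not the R code from the question: the variable names and 1-df Wald $\chi^2$ statistics below are hypothetical, and it assumes the Wald and likelihood-ratio statistics agree, so that deleting a 1-df variable changes AIC by $\chi^2 - 2$.

```python
from math import erfc, sqrt

# Hypothetical 1-df Wald chi-square statistics for the candidate
# variables at one step of backward elimination.
chisq = {"x1": 0.4, "x2": 3.1, "x3": 1.7, "x4": 6.2}

# p-value rule: drop the variable with the largest p-value,
# i.e. the smallest chi-square on 1 df.
pvals = {v: erfc(sqrt(s / 2)) for v, s in chisq.items()}  # P(chi2_1 > s)
drop_by_p = max(pvals, key=pvals.get)

# AIC rule: deleting a 1-df variable changes AIC by chi-square - 2,
# so the best deletion again has the smallest chi-square, and it
# improves AIC exactly when chi-square < 2, i.e. when p > 0.157.
delta_aic = {v: s - 2 for v, s in chisq.items()}
drop_by_aic = min(delta_aic, key=delta_aic.get)

print(drop_by_p, drop_by_aic)  # both rules pick the same variable
```

Because both rules rank candidates by the same $\chi^2$ statistic, they remove the same variable at each step; they differ only in the stopping threshold, and AIC's threshold of $\chi^2 = 2$ corresponds to $\alpha \approx 0.157$.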

It turned out that both approaches give roughly the same results when I use a p-value of 0.154 as the threshold.

Is this actually true? If so, does anyone know why, or can point to a source that explains it?

P.S. I couldn't ask the person commenting or write a comment myself, because I just signed up. I am aware that this is not the most suitable approach to model selection, inference, etc.

Variable selection done using statistical testing or AIC is highly problematic. If using $\chi^2$ tests, AIC uses a cutoff of $\chi^2 = 2.0$ on 1 d.f., which corresponds to $\alpha = 0.157$. AIC, when used on individual variables, does nothing new; it just uses a more reasonable $\alpha$ than 0.05. A more reasonable (less inference-disturbing) $\alpha$ is 0.5.
