In a comment to the answer of this question, it was stated that model selection with AIC is equivalent to using a p-value threshold of 0.154.

I tried it in R, using a "backward" subset-selection algorithm to drop variables from a full specification: first, by sequentially removing the variable with the highest p-value and stopping once all p-values are below 0.154; and second, by dropping whichever variable yields the lowest AIC when removed, until no further improvement is possible.
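The AIC-based loop can be sketched as follows (a minimal sketch in Python rather than R; `aic_of` is a hypothetical stand-in for refitting the model on a variable set and returning its AIC, stubbed here with a made-up table so the control flow is runnable):

```python
# Toy AIC values for illustration only -- not real model fits.
TOY_AIC = {
    frozenset({"x1", "x2", "x3"}): 100.0,
    frozenset({"x1", "x2"}): 98.0,   # dropping x3 improves AIC
    frozenset({"x1", "x3"}): 103.0,
    frozenset({"x2", "x3"}): 105.0,
    frozenset({"x1"}): 99.5,         # dropping x2 now hurts
    frozenset({"x2"}): 110.0,
}

def aic_of(variables):
    """Hypothetical: would refit the model and return its AIC."""
    return TOY_AIC[frozenset(variables)]

def backward_aic(variables):
    current_vars = set(variables)
    current_aic = aic_of(current_vars)
    while len(current_vars) > 1:
        # AIC after removing each remaining variable in turn
        candidates = {v: aic_of(current_vars - {v}) for v in current_vars}
        best = min(candidates, key=candidates.get)
        if candidates[best] >= current_aic:  # no improvement -> stop
            break
        current_vars.discard(best)
        current_aic = candidates[best]
    return current_vars

print(sorted(backward_aic({"x1", "x2", "x3"})))  # ['x1', 'x2']
```

The p-value variant has the same structure, except the drop criterion is "largest p-value above the threshold" instead of "largest AIC improvement".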

It turned out that both procedures give roughly the same results when I use a p-value threshold of 0.154.

Is this actually true? If so, does anyone know why or can refer to a source which explains it?

P.S. I couldn't ask the commenter directly or leave a comment because I have only just signed up. I am aware that this is not the most suitable approach to model selection and inference.


#### Best Answer

Variable selection done using statistical testing or AIC is highly problematic. For a single parameter, AIC amounts to a likelihood-ratio $\chi^2$ test with a cutoff of $\chi^2 = 2.0$: since $\mathrm{AIC} = -2\log L + 2k$, removing one parameter lowers AIC exactly when the likelihood-ratio $\chi^2$ statistic is below 2, and with one degree of freedom $P(\chi^2_1 > 2)$ corresponds to $\alpha = 0.157$. So AIC applied to individual variables does nothing new; it just uses a more reasonable $\alpha$ than 0.05. A more reasonable (less inference-disturbing) $\alpha$ is 0.5.
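The implied $\alpha$ is easy to verify numerically. For one degree of freedom, the chi-square tail probability reduces to $P(\chi^2_1 > x) = \operatorname{erfc}(\sqrt{x/2})$ (via the square of a standard normal), so the standard library suffices:

```python
import math

# AIC's implicit cutoff for a single parameter is chi^2 = 2.
# With 1 df, P(chi^2_1 > x) = erfc(sqrt(x/2)), so the implied
# significance level is erfc(sqrt(2/2)) = erfc(1).
alpha = math.erfc(math.sqrt(2 / 2))
print(round(alpha, 3))  # 0.157
```

This matches the 0.154–0.157 figure quoted in the question (the small discrepancy in the comment is presumably rounding).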
