Came across a few plots in Chapter 2 of Introduction to Statistical Learning and saw that the x-axis on some of them measures model flexibility. However, the book doesn't seem to mention how model flexibility is actually measured, or what units it is measured in.
Best Answer
Many learning algorithms have hyperparameters that control what could be described as model flexibility or complexity. The purpose of these hyperparameters is to control the bias/variance tradeoff (which that section of ISL explains). The x-axis of the figure you posted is probably labeled "flexibility" because the figure is meant to illustrate the general phenomenon of how such hyperparameters affect bias, variance, and generalization performance, rather than being tied to the hyperparameters of a particular model or to a particular definition of complexity.
Greater flexibility corresponds to lower bias but higher variance. It allows fitting a wider variety of functions, but increases the risk of overfitting. Achieving good generalization performance requires finding hyperparameter values that achieve a good balance between bias and variance.
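As a concrete (hypothetical) illustration of this tradeoff, here is a small NumPy sketch in which polynomial degree plays the role of the "flexibility" hyperparameter: training error keeps falling as the degree grows, while test error eventually rises as the model starts fitting noise. The data-generating function, noise level, and degrees chosen here are all illustrative assumptions, not anything from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: noisy samples of a smooth underlying function.
def f(x):
    return np.sin(2 * np.pi * x)

x_train = rng.uniform(0, 1, 30)
y_train = f(x_train) + rng.normal(0, 0.3, 30)
x_test = rng.uniform(0, 1, 200)
y_test = f(x_test) + rng.normal(0, 0.3, 200)

# Polynomial degree acts as the flexibility hyperparameter: higher degree
# lowers bias (more functions can be fit) but raises variance (overfitting).
train_mse, test_mse = {}, {}
for degree in (1, 3, 10, 20):
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse[degree] = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse[degree] = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree={degree:2d}  train MSE={train_mse[degree]:.3f}  "
          f"test MSE={test_mse[degree]:.3f}")
```

Because the higher-degree polynomial family nests the lower-degree ones, training MSE can only decrease as the degree grows; test MSE typically traces the U-shape shown in the ISL figures.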