For tree-based models, I've used varImp in caret to extract feature importances; however, this doesn't work with KNN. Can someone explain why this is the case, or whether it's possible at all? Thanks!
Best Answer
I don't know of a canned command, but you could always measure how much the mean-squared error (or misclassification rate) increases when a variable is either removed or permuted.
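Here is a minimal sketch of that permutation approach with a caret KNN model. The dataset (iris), the single shuffle per feature, and scoring on the training data are illustrative assumptions; in practice you'd repeat the shuffle several times, average the drops, and ideally score on held-out data.

```r
library(caret)

set.seed(42)
data(iris)

# Fit a KNN classifier with caret (5-fold CV to pick k)
fit <- train(Species ~ ., data = iris, method = "knn",
             trControl = trainControl(method = "cv", number = 5))

# Baseline accuracy (here on the training data for simplicity;
# a held-out set gives a less optimistic picture)
baseline <- mean(predict(fit, iris) == iris$Species)

# Permutation importance: shuffle one predictor at a time and
# record how much the accuracy drops
predictors <- setdiff(names(iris), "Species")
importance <- sapply(predictors, function(p) {
  shuffled <- iris
  shuffled[[p]] <- sample(shuffled[[p]])
  permuted_acc <- mean(predict(fit, shuffled) == iris$Species)
  baseline - permuted_acc  # larger drop = more important feature
})

sort(importance, decreasing = TRUE)
```

For a regression KNN, replace the accuracy comparison with the increase in mean-squared error after shuffling. The same loop works for any fitted model, which is exactly why it doesn't depend on the model exposing importances the way trees do.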