Solved – Multiclass Summary Metrics in R’s Caret when predicting Probabilities

It's not clear to me how the different summary metrics for caret's train function are defined when I predict class probabilities for a multiclass problem.

Without loss of generality, assume that I use a random forest to produce the probabilities:

library(caret)
data(iris)

control = trainControl(method="cv", number=5, verboseIter=TRUE, classProbs=TRUE)
# tuning parameter grid
grid = expand.grid(mtry = 1:3)
rf_gridsearch = train(y=iris[,5], x=iris[-5], method="ranger",
                      num.trees=2000, tuneGrid=grid, trControl=control)
rf_gridsearch

# Output:
# ...
#   mtry  Accuracy   Kappa
#   1     0.9600000  0.94
#   2     0.9666667  0.95
#   3     0.9666667  0.95
#
# Accuracy was used to select the optimal model using the largest value.
# The final value used for the model was mtry = 2.

For the binary case I know how to compute the AUC.
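For concreteness, the binary AUC can be sketched in base R via the rank-sum (Mann-Whitney) identity; `auc_binary` is just an illustrative helper name, not part of caret:

```r
# Sketch: AUC for a binary problem from predicted probabilities,
# using the rank-sum (Mann-Whitney) identity -- base R only.
auc_binary = function(prob_pos, truth_pos) {
  # truth_pos: logical vector, TRUE for the positive class
  r = rank(prob_pos)                    # ranks of all predicted probabilities
  n_pos = sum(truth_pos)
  n_neg = sum(!truth_pos)
  (sum(r[truth_pos]) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
}

truth = c(rep(TRUE, 5), rep(FALSE, 5))
prob  = c(0.9, 0.8, 0.7, 0.6, 0.4, 0.5, 0.3, 0.2, 0.1, 0.05)
auc_binary(prob, truth)  # 0.96: exactly one of the 25 pairs is discordant
```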

So my questions are:

  1. How is the Accuracy computed for multiclass prediction? (How is it condensed to a single value?)

  2. Why is mtry=2 chosen, even though mtry=3 is equivalent according to the Accuracy?

  3. Why isn't it possible to compute the RMSE for a classification problem? (Setting metric="RMSE" throws the error "Error: Metric RMSE not applicable for classification models". Moreover, for multiclass probability predictions the RMSE is still well defined, unlike the AUC.)
    Since the Brier score is simply the MSE (for 2 classes), why not allow the RMSE for probability predictions?

Many thanks in advance!

The optimal model uses mtry=2 because it is the first value in the grid to reach the maximum Accuracy (0.9666667): when several candidates tie on the selection metric, caret keeps the first one, which here is also the simpler model, so mtry=2 is preferred over the equally accurate mtry=3.
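As for how multiclass Accuracy is condensed to a single value: the predicted class is taken as the one with the highest probability (argmax per row), and Accuracy is the fraction of samples where that matches the true label. A base-R sketch with made-up probabilities:

```r
# Sketch: multiclass Accuracy from a probability matrix (base R).
# Predicted class = column with the highest probability per row;
# Accuracy = fraction of rows where that matches the true label.
probs = matrix(c(0.7, 0.2, 0.1,
                 0.1, 0.6, 0.3,
                 0.2, 0.3, 0.5,
                 0.5, 0.4, 0.1), ncol = 3, byrow = TRUE,
               dimnames = list(NULL, c("setosa", "versicolor", "virginica")))
truth = c("setosa", "versicolor", "virginica", "versicolor")

pred = colnames(probs)[max.col(probs)]  # argmax per row
mean(pred == truth)                     # 0.75: 3 of 4 rows are correct
```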

Also, you cannot compute Root Mean Squared Error on classification problems, because it doesn't make sense to do so on class labels. RMSE is defined for regression problems: they produce numerical predictions with a distance metric, so you can compute the error between a target value and its prediction. No distance metric, no RMSE (unless you define an error metric for your classification problem yourself, in which case it is no longer a standard RMSE).
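That said, the question's point about the Brier score stands: if you one-hot encode the labels, the probability predictions do have a distance metric, and the multiclass Brier score is just the MSE over that encoding. A base-R sketch with made-up probabilities:

```r
# Sketch: multiclass Brier score = MSE between the probability matrix
# and one-hot encoded labels -- the quantity that could back an "RMSE"
# for probability predictions.
probs = matrix(c(0.7, 0.2, 0.1,
                 0.1, 0.6, 0.3), ncol = 3, byrow = TRUE,
               dimnames = list(NULL, c("a", "b", "c")))
truth = c("a", "c")

onehot = outer(truth, colnames(probs), "==") * 1  # 0/1 indicator matrix
brier  = mean(rowSums((probs - onehot)^2))        # mean per-sample squared error
sqrt(brier)                                       # an "RMSE" of the probabilities
```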
