Solved – Binary classifiers with accuracy < 50% in AdaBoost

For a balanced binary training dataset, i.e., one where the number of data points with class +1 equals the number of data points with class -1, what happens if we use weak binary classifiers whose classification accuracy is less than 50%, say 30%? Will the performance of the combined classifier improve with more iterations?

I am aware that binary classifiers that perform worse than 50% still contribute to the final prediction, but will the performance of the combined classifier improve with more iterations?

AdaBoost automatically adapts to a classifier whose accuracy is below 50% by flipping its predictions: a below-50% weak classifier becomes an above-50% weak classifier once every prediction is negated.
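The flip rests on a simple identity for $\pm 1$ labels: negating every prediction turns accuracy $a$ into $1 - a$. A quick illustrative check (the label arrays below are made up for the example):

```python
import numpy as np

y_true = np.array([+1, -1, +1, +1, -1, +1, -1, -1, +1, -1])
y_pred = np.array([-1, -1, +1, +1, +1, -1, +1, +1, -1, +1])  # mostly wrong

print(np.mean(y_pred == y_true))    # 0.3 -- a 30%-accurate classifier
print(np.mean(-y_pred == y_true))   # 0.7 -- flipping every prediction gives 70%
```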

The weight formula is where this flipping happens:

$$w_t = \frac{1}{2}\log\left(\frac{1}{\epsilon_t} - 1\right)$$

where $t$ is the current iteration and $\epsilon_t$ is the weighted training error of the classifier chosen at iteration $t$. For $0.5 < \epsilon_t < 1$, the weight $w_t$ is negative, and it is later multiplied by each predicted label $h_t(x_i) \in \{+1, -1\}$, which flips the prediction.
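A quick numeric check of the formula (a minimal sketch; the three error values are only illustrative):

```python
import numpy as np

def classifier_weight(eps_t):
    """w_t = 0.5 * log(1/eps_t - 1) from the formula above."""
    return 0.5 * np.log(1.0 / eps_t - 1.0)

for eps_t in (0.3, 0.5, 0.7):
    print(f"eps_t = {eps_t}: w_t = {classifier_weight(eps_t):+.4f}")
# eps_t = 0.3: w_t = +0.4236  (predictions kept as-is)
# eps_t = 0.5: w_t = +0.0000  (random guessing contributes nothing)
# eps_t = 0.7: w_t = -0.4236  (negative weight flips every prediction)
```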

This means that the classifier chosen at each and every iteration effectively contributes with above-50% accuracy, since it is the weighted prediction $w_t \, h_t(x_i)$ that enters the final combined prediction. So the answer is yes: as long as $\epsilon_t$ stays away from exactly 0.5, the combined classifier keeps improving on the training set with more iterations, just as it does with above-50% weak learners.
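To see this end to end, here is a minimal hand-rolled AdaBoost sketch (the decision-stump weak learner and the toy dataset are my own illustrative choices, not from the original post). Each round deliberately picks the stump with the *highest* weighted error, so every $\epsilon_t$ is above 0.5 and every $w_t$ is negative, yet the combined training accuracy should still climb toward 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy balanced binary problem with labels in {+1, -1}.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

def stump_predict(X, j, thr, sign):
    """Decision stump: predict the sign of sign * (x_j - thr)."""
    return np.where(sign * (X[:, j] - thr) > 0, 1, -1)

def worst_stump(X, y, D):
    """Pick the stump with the HIGHEST weighted error under the sample
    distribution D, i.e. a deliberately worse-than-random weak learner."""
    worst = None
    for j in range(X.shape[1]):
        for thr in X[:, j]:
            for sign in (1, -1):
                eps = np.sum(D * (stump_predict(X, j, thr, sign) != y))
                if worst is None or eps > worst[0]:
                    worst = (eps, j, thr, sign)
    return worst

n = len(y)
D = np.full(n, 1.0 / n)   # uniform initial sample weights
F = np.zeros(n)           # running combined score, sum_t w_t * h_t(x)

for t in range(15):
    eps, j, thr, sign = worst_stump(X, y, D)
    eps = np.clip(eps, 1e-10, 1 - 1e-10)      # numerical safety
    w_t = 0.5 * np.log(1.0 / eps - 1.0)       # negative, since eps > 0.5
    h = stump_predict(X, j, thr, sign)
    F += w_t * h                              # the negative weight flips the vote
    D *= np.exp(-w_t * y * h)                 # standard AdaBoost reweighting
    D /= D.sum()
    print(f"round {t+1:2d}: eps_t = {eps:.3f}, w_t = {w_t:+.3f}, "
          f"train accuracy = {np.mean(np.sign(F) == y):.3f}")
```

Because each $w_t$ is negative, every vote is effectively inverted before it is summed, so the ensemble behaves exactly as if it had been handed the complementary above-50% stumps.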
