# How to calculate the F-Measure from a Precision-Recall Curve

I have precision-recall curves for two separate algorithms. If I want to calculate the F-measure, I have to use the precision and recall values at a particular point on each curve.

How is this point decided? For example, on curve one there is a point where recall is 0.9 and precision is 0.87, while on the other curve there is a point where recall is 0.95 and precision is 0.84.

Alternatively, should I plot an F-measure value for every precision-recall pair?


Precision-recall curves and ROC curves (two closely related views of the same classifier) are used to give you a sense of the quality of a binary classifier across different values of some parameter that affects its performance, typically the decision threshold. The F1 score combines precision and recall into a single number, so you just need to select the configuration of your classifier that has the highest F score.

In your place, I would calculate the F score for each pair of precision and recall values and then pick the configuration with the highest F score.
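As a minimal sketch of that selection, the snippet below computes F1 (the harmonic mean of precision and recall) for a few hypothetical operating points, including the two from the question, and picks the best one. The list of points is illustrative, not from any real curve:

```python
def f1_score(precision, recall):
    """F1 = harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical (precision, recall) operating points read off a PR curve.
curve = [(0.87, 0.90), (0.84, 0.95), (0.91, 0.80)]

# Pick the operating point with the highest F1.
best = max(curve, key=lambda pr: f1_score(*pr))
print(best, round(f1_score(*best), 4))  # → (0.84, 0.95) 0.8916
```

Note that the (0.95 recall, 0.84 precision) point edges out the (0.9, 0.87) one here: F1 ≈ 0.892 versus ≈ 0.885.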

Now, the tricky part is which F score to use. F1 weights precision and recall equally, but sometimes recall is more important than precision (for example, you don't mind many people falsely testing positive for some cancer as long as everyone who actually has that cancer is flagged). In that case you could use the F2 measure, which weights recall more heavily.

I don't think it makes sense to sum up the F measures over all combinations of precision and recall. After all, the idea is to pick a single model out of a broader range of models, so I would pick the model with the highest F score rather than the one with the biggest sum of F scores.
