Solved – Loss function that penalizes bigger errors

I'm building a machine learning model that makes sales predictions from a set of features, but for this particular problem it is not important to have spot-on predictions.

The problem is that with the MSE loss function I get models that are spot-on for part of the validation data but have somewhat high errors on the remaining points.

So, I was wondering whether there is an established loss function that would help the algorithm prioritize models without grotesque errors.

Thanks in advance.

Often MSE actually penalizes the largest errors too much: being very wrong on a few outliers may be acceptable if the model is usually more or less right.

Since this is sales prediction, don't you already have a natural loss function, i.e. the cost to the business of a prediction error? On the face of it, that cost is more likely to scale with the absolute error than with the squared error.
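To see the difference concretely, here is a minimal sketch (with made-up residuals, the last one an outlier) showing how the squared error lets a single large miss dominate the loss, while the absolute error weights it far less heavily:

```python
# Toy residuals (prediction - actual) for a sales model; values are
# hypothetical, chosen so the last point is a clear outlier.
errors = [1.0, -2.0, 1.5, -0.5, 20.0]

mse = sum(e ** 2 for e in errors) / len(errors)   # mean squared error
mae = sum(abs(e) for e in errors) / len(errors)   # mean absolute error

# Fraction of each loss contributed by the outlier alone.
outlier_share_mse = 20.0 ** 2 / sum(e ** 2 for e in errors)
outlier_share_mae = 20.0 / sum(abs(e) for e in errors)

print(mse)  # 81.5
print(mae)  # 5.0
print(round(outlier_share_mse, 3))  # 0.982 -> outlier dominates MSE
print(round(outlier_share_mae, 3))  # 0.8   -> outlier matters less under MAE
```

Under MSE the single outlier accounts for about 98% of the loss, so the optimizer will bend the whole model to shrink that one error; under MAE it accounts for 80%, leaving more weight on getting the typical points right.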

It might also be time to think about your descriptors: no loss function can produce a good model if the input data is not predictive of the output.
