Solved – Understanding Mean Squared Error

When judging the quality of an estimator, I understand a simple metric: if the expected and predicted values are close enough, count that instance as correct, then sum up the correct instances and divide by the total. That gives a measure of accuracy.
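For concreteness, that tolerance-based accuracy might look like the sketch below (plain Python; the `tolerance` threshold of 0.5 is an arbitrary illustrative choice, not something from the question):

```python
def tolerance_accuracy(y_true, y_pred, tolerance=0.5):
    """Fraction of predictions within `tolerance` of the true value."""
    correct = sum(abs(t - p) <= tolerance for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# 3 of the 4 predictions fall within 0.5 of the target, so accuracy is 0.75
print(tolerance_accuracy([1.0, 2.0, 3.0, 4.0], [1.1, 2.4, 3.8, 4.2]))
```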

But mean squared error is harder for me to understand. What is a good value of MSE? How close to 0 does it need to be to be considered "good"? Or do only the relative values of MSE matter, for deciding whether one estimator is "better" than another?

Is there any way to normalize MSE so that it behaves more like an accuracy, i.e. a fraction of the total instances?

There is no way to interpret an MSE (mean squared error) without context. The reason is that the MSE is expressed in the squared units of the data, so a change of scale changes the MSE: rescaling the data by a factor c multiplies the MSE by c². What counts as a small, medium, or large MSE therefore depends on the research field, the units of measurement, and the goal of the analysis.
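A minimal sketch of both points, assuming made-up numbers: rescaling the data inflates the MSE by the square of the scale factor, while dividing the MSE by the variance of the targets gives a scale-free quantity (equal to 1 − R², one common normalization among several, e.g. RMSE divided by the target mean or range):

```python
def mse(y_true, y_pred):
    """Mean squared error, in the squared units of the data."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [10.0, 12.0, 14.0, 16.0]   # say, metres
y_pred = [11.0, 11.5, 14.5, 15.0]

raw = mse(y_true, y_pred)                        # in metres^2
scaled = mse([y * 100 for y in y_true],          # same data in centimetres:
             [y * 100 for y in y_pred])          # MSE grows by a factor of 100^2

# Scale-free alternative: divide by the variance of the targets.
# This equals 1 - R^2: 0 for a perfect fit, ~1 for just predicting the mean.
mean_y = sum(y_true) / len(y_true)
var_y = sum((y - mean_y) ** 2 for y in y_true) / len(y_true)
normalized = raw / var_y

print(raw, scaled, normalized)  # 0.625, 6250.0, 0.125
```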
