Solved – Neural Network Training and Testing: Validation Performance Analysis

When I train an ANN model, the training set sometimes has a high R-squared while the test set has a low R-squared. How can this situation be explained?
Is there an over-fitting problem? (Shown in the figure.)

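The symptom described in the question (near-perfect R-squared on the training set, poor R-squared on the test set) can be reproduced with a small synthetic sketch. Everything below is illustrative and assumed, not the question's own data or network: a deliberately over-sized `MLPRegressor` is fit to a dozen noisy points.

```python
# Sketch: an over-parameterized network fit to very few points memorizes
# the noise, giving high training R^2 but a lower test R^2.
# (Synthetic data; purely illustrative, not the questioner's setup.)
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, size=(12, 1))        # very small training set
y_train = np.sin(3 * X_train).ravel() + 0.3 * rng.normal(size=12)
X_test = rng.uniform(-1, 1, size=(12, 1))
y_test = np.sin(3 * X_test).ravel() + 0.3 * rng.normal(size=12)

# A large network relative to the data size, with no regularization (alpha=0),
# has enough capacity to fit the training noise almost exactly.
net = MLPRegressor(hidden_layer_sizes=(100, 100), alpha=0.0,
                   solver="lbfgs", max_iter=20000, tol=1e-7,
                   random_state=0)
net.fit(X_train, y_train)

train_r2 = r2_score(y_train, net.predict(X_train))
test_r2 = r2_score(y_test, net.predict(X_test))
print(f"train R^2: {train_r2:.3f}  test R^2: {test_r2:.3f}")
```

The noise term puts a ceiling on the achievable test R-squared, so once the network starts fitting noise the train/test gap opens up, which is exactly the pattern in the question's figure.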

In addition to what was written by @enricoanderlini, I would also point out that there do not seem to be many data points in your test set. Having only a small number of points exposes you even more to the impact of leverage / influential points (you can also read about them here). In this regard, @enricoanderlini is correct to suggest cross-validation.
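With so few points, a single train/test split can be dominated by one or two influential observations; k-fold cross-validation averages over several splits and also reports the spread. A minimal sketch (synthetic data and network size are assumptions, not the questioner's setup):

```python
# Sketch: 5-fold cross-validation gives a more stable R^2 estimate on a
# small dataset than a single train/test split. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 3))              # small sample
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=40)

model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                     max_iter=5000, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")

print("per-fold R^2:", np.round(scores, 3))
print(f"mean +/- std: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A large standard deviation across folds is itself a useful diagnostic: it suggests that individual points are driving the fit, which is the leverage problem mentioned above.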

Interestingly, while the focus is often on model capacity (degrees of freedom), your set-up highlights that calibrating an ANN in a train-validate-test pipeline is "hungrier" for data points than, say, an OLS with a simple train-test split.
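The data cost of the extra validation stage is easy to make concrete. A sketch with illustrative sizes (the 80/20 and 75/25 proportions are assumptions, not from the post):

```python
# Sketch: a train-validate-test partition leaves fewer points for the
# actual fit than a plain train-test split. Sizes are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(-1, 1)
y = np.arange(100)

# Plain train-test split: 80 points available for fitting.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)

# Train-validate-test: a validation set is carved out of the training
# portion, leaving only 60 points for the actual fit.
X_fit, X_val, y_fit, y_val = train_test_split(X_tr, y_tr, test_size=0.25,
                                              random_state=0)
print(len(X_tr), len(X_fit), len(X_val), len(X_te))   # 80 60 20 20
```

An OLS with a train-test split fits on all 80 training points, while the ANN pipeline fits on only 60 and spends 20 on early stopping / hyperparameter selection, which matters a great deal when the dataset is small to begin with.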
