Solved – Making a model to predict the error of another model

So basically I have an XGBoost model for which I want a prediction interval. Quantile regression is tricky to do with XGBoost, so I was looking for an alternative way to achieve this goal.

I came across an article that suggests estimating the standard deviation of a prediction by training a second machine learning model to predict the error of the original model. The process would be:

  1. Fit machine learning model to training data.
  2. Calculate the error for each datapoint.
  3. Fit a second machine learning model to predict the squared error of the datapoints.
  4. When making a prediction, use the initial prediction as the mean and the square root of the predicted squared error as an estimate of the standard deviation.
  5. With the mean and standard deviation, build a confidence interval for the prediction.
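As a sketch of those five steps, here is a minimal runnable version. I use ordinary least squares as a stand-in for XGBoost, and the toy data and helper names are my own for illustration, not from the article:

```python
import math

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (stand-in for an XGBoost model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

# Toy training data: y = 2x + 1 with alternating +/-0.5 noise.
xs = [float(i) for i in range(1, 21)]
ys = [2.0 * x + 1.0 + (0.5 if i % 2 else -0.5) for i, x in enumerate(xs)]

# Step 1: fit the primary model to the training data.
model = fit_linear(xs, ys)

# Step 2: calculate the (squared) error for each datapoint.
sq_errors = [(model(x) - y) ** 2 for x, y in zip(xs, ys)]

# Step 3: fit a second model to predict the squared error.
err_model = fit_linear(xs, sq_errors)

# Steps 4-5: mean + estimated std -> interval, assuming normally distributed errors.
x_new = 10.0
mean = model(x_new)
std = math.sqrt(max(err_model(x_new), 0.0))  # guard: the error model can go negative
lo, hi = mean - 1.96 * std, mean + 1.96 * std
print(lo, mean, hi)
```

Note the `max(..., 0.0)` guard: unlike the squared errors it is trained on, the second model's output is not constrained to be non-negative.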

The assumptions here are that the prediction errors are normally distributed and that the mean and standard deviation can be estimated in the way described.
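Under that normality assumption, step 5 is just the usual z-interval. A quick sketch with made-up numbers, using the standard library's `statistics.NormalDist`:

```python
from statistics import NormalDist

# Hypothetical outputs of the two models at some query point.
mean, std = 100.0, 5.0

# Two-sided 95% interval: z is the 97.5th percentile of the standard normal.
z = NormalDist().inv_cdf(0.975)  # ~1.96
lo, hi = mean - z * std, mean + z * std
print(lo, hi)
```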

My question is, does this method make sense? And is there a better way to do this?

While I cannot fully evaluate the approach cited in the blog post you pointed to, I can at least propose another way of obtaining confidence intervals, which is via bootstrapping.

Bootstrap methods are convenient mainly for two reasons: they're simple to implement, and they don't require assumptions about the distribution of the underlying statistic you are bootstrapping.

The implementation follows, in general, the following pseudocode:

```
statistics = []
for i in bootstraps:
    train, test = select_sample_with_replacement(data, size)
    model = train_model(train)
    stat = evaluate_model(model, test)
    statistics.append(stat)
```

which comes from this blog post, which has a really nice introductory presentation on how to use bootstrapping to evaluate ML models.

Also check this really nice answer explaining why bootstrap works.
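To make the pseudocode above concrete for prediction intervals specifically: resample the training data with replacement, refit the model each time, collect the prediction at the query point, and take empirical percentiles. This is a self-contained sketch, with a least-squares fit standing in for XGBoost and invented toy data:

```python
import random

random.seed(0)

# Toy data: y = 3x + Gaussian noise (stand-in for a real dataset).
data = [(x, 3.0 * x + random.gauss(0, 1)) for x in [i / 10 for i in range(100)]]

def fit_and_predict(sample, x_new):
    """Least-squares slope/intercept fit on sample, then predict at x_new."""
    n = len(sample)
    mx = sum(x for x, _ in sample) / n
    my = sum(y for _, y in sample) / n
    a = sum((x - mx) * (y - my) for x, y in sample) / sum((x - mx) ** 2 for x, _ in sample)
    b = my - a * mx
    return a * x_new + b

# Bootstrap: refit on resampled data, collect the prediction each time.
x_new = 5.0
preds = []
for _ in range(1000):
    sample = [random.choice(data) for _ in data]  # resample with replacement
    preds.append(fit_and_predict(sample, x_new))

# Empirical 2.5% / 97.5% percentiles give a 95% interval for the prediction.
preds.sort()
lo, hi = preds[25], preds[974]
print(lo, hi)
```

Note this captures uncertainty in the fitted model, not the noise on individual observations; for a full prediction interval you would also resample residuals, as the linked answer discusses.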
