Solved – Bootstrapping for neural network validation

I need to validate a specific, trained neural network for classification, and I'm planning to use bootstrapping for this purpose. My idea is to keep the trained network fixed, generate bootstrap samples only for the test set, and then obtain statistics about accuracy/false positives/false negatives for that fixed network.

Do you think this approach is adequate? I'm asking because, as far as I've seen, the typical approach is to retrain the network on each bootstrap sample (after holding out the test samples). Instead, I want to obtain statistics about classification accuracy for one specific network.
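To make the idea concrete, here is a minimal sketch of what I mean: the predictions come from one fixed network, and only the test set is resampled with replacement to get uncertainty estimates for the metrics. The data and predictions below are toy stand-ins, not my actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_metrics(y_true, y_pred, n_boot=2000):
    """Resample the test set with replacement; the classifier stays fixed,
    so its predictions y_pred are resampled alongside the labels."""
    n = len(y_true)
    accs, fprs, fnrs = [], [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # one bootstrap sample of test indices
        t, p = y_true[idx], y_pred[idx]
        accs.append(np.mean(t == p))
        # FPR among true negatives, FNR among true positives
        fprs.append(np.mean(p[t == 0] == 1) if np.any(t == 0) else np.nan)
        fnrs.append(np.mean(p[t == 1] == 0) if np.any(t == 1) else np.nan)
    return np.array(accs), np.array(fprs), np.array(fnrs)

# Toy binary test set and fixed predictions (stand-in for the trained network)
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.9, y_true, 1 - y_true)  # roughly 90% accurate

accs, fprs, fnrs = bootstrap_metrics(y_true, y_pred)
lo, hi = np.percentile(accs, [2.5, 97.5])  # percentile bootstrap interval for accuracy
```

The percentile interval `(lo, hi)` then describes the sampling variability of the accuracy of this one fixed network over test sets of this size.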

Another question: in my case, the samples are drawn uniformly from $\mathbb{R}^n$, unlike the typical case where you resample from a finite set of observations. Is it still correct to speak of "bootstrapping"?

Thank you!

The bootstrap is a statistical tool that helps us emulate the process of acquiring a new sample set, usually when it is impossible or inconvenient to do so. It resamples from an existing sample set. If you are resampling from the training set to generate the test sets, that is not adequate, since all of those test cases were already used during training. If you are instead drawing fresh samples from $\mathbb{R}^n$, you are randomly and independently generating test sets from the population, which is adequate, but it cannot be called bootstrapping.
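The distinction can be sketched as follows: when you can draw from the population itself, you repeat fresh draws rather than resampling one fixed set, which is plain Monte Carlo evaluation. The `true_label` rule and `network_predict` function below are hypothetical stand-ins for whatever labels your data and for your fixed, trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_label(x):
    """Hypothetical ground-truth rule (stand-in for the real labeling process)."""
    return (x.sum(axis=1) > 0).astype(int)

def network_predict(x):
    """Stand-in for the fixed trained network's decision function."""
    return (x[:, 0] + 0.9 * x[:, 1:].sum(axis=1) > 0.05).astype(int)

# Monte Carlo evaluation: each test set is a fresh, independent draw
# from the input domain -- no resampling of an existing sample is involved.
n_sets, n_per_set, dim = 500, 200, 5
accs = []
for _ in range(n_sets):
    x = rng.uniform(-1.0, 1.0, size=(n_per_set, dim))  # fresh uniform draw
    accs.append(np.mean(network_predict(x) == true_label(x)))
accs = np.array(accs)
```

The spread of `accs` plays the same role as a bootstrap distribution, but it comes directly from the population, so no bootstrap approximation is needed.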
