In basic machine learning we are taught the following "rules of thumb":

a) the size of your data should be at least 10 times the size of the VC dimension of your hypothesis set.

b) a neural network with N connections has a VC dimension of approximately N.

So when a deep learning neural network has say, millions of units, does this mean we should have, say, billions of data points? Can you please shed some light on this?


#### Best Answer

The rule of thumb you mention does not apply well to modern neural networks.

A neural network has two basic kinds of parameters: weights and biases. The number of weights depends on the number of connections between layers, and the number of biases depends on the number of neurons.
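To make the counting concrete, here is a minimal sketch for a fully connected network; the layer sizes are an assumed example architecture, not one from the question:

```python
# Count weights and biases in a fully connected (dense) network.
# Layer sizes are hypothetical: 784 inputs, two hidden layers, 10 outputs.
layer_sizes = [784, 128, 64, 10]

# Weights: one per connection between consecutive layers.
weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
# Biases: one per neuron in every non-input layer.
biases = sum(layer_sizes[1:])

print(weights)  # 109184
print(biases)   # 202
```

Even this small network already has over 100,000 parameters, which shows why "10 times the VC dimension" quickly becomes an impractical guide.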

The amount of data required depends heavily on:

- The type of neural network used.
- The regularization techniques used in the net.
- The learning rate used in training the net.

That said, a more reliable way to tell whether the model is overfitting is to check whether the validation error stays close to the training error. If it does, the model is generalizing well; if not, the model is most likely overfitting, and you should reduce its size or introduce regularization.
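The check above can be sketched in a few lines; the error values and the gap tolerance here are hypothetical, and in practice you would pick a tolerance appropriate to your task:

```python
# Hypothetical overfitting check: flag a model when validation error
# exceeds training error by more than an assumed tolerance.
def looks_overfit(train_err, val_err, tol=0.05):
    """Return True when the generalization gap exceeds `tol`."""
    return (val_err - train_err) > tol

print(looks_overfit(train_err=0.02, val_err=0.04))  # small gap -> False
print(looks_overfit(train_err=0.02, val_err=0.20))  # large gap -> True
```

Tracking this gap over training epochs (rather than once at the end) also tells you when to stop training early.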
