I have a neural network that I constructed in Keras
that goes from an LSTM recurrent layer > dropout > flatten > dense layer of 1 unit.
Does it make sense to have dropout regularization at this stage? Would it create sparsity in the penultimate layer, or would it even out the connections to the final prediction?
Layer (type)                 Output Shape              Param #
=================================================================
bidirectional_1 (Bidirection (None, 60, 512)           1181696
_________________________________________________________________
dropout_2 (Dropout)          (None, 60, 512)           0
_________________________________________________________________
flatten_1 (Flatten)          (None, 30720)             0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 30721
=================================================================
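For reference, here is a minimal sketch of how a stack like this could be built. The layer sizes are taken from the summary above; the input feature dimension of 320 is only inferred from the bidirectional layer's parameter count, and the dropout rate of 0.5 is an assumption (the summary does not show it).

```python
# Minimal sketch of the architecture implied by the summary above.
# Assumptions: input feature dimension 320 (inferred from the parameter
# count 2 * 4 * 256 * (320 + 256 + 1) = 1,181,696) and a dropout rate of 0.5.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Bidirectional, Dropout, Flatten, Dense

model = Sequential([
    # 256 units per direction -> 512 features per timestep;
    # return_sequences=True keeps all 60 timesteps, giving (None, 60, 512).
    Bidirectional(LSTM(256, return_sequences=True), input_shape=(60, 320)),
    # Dropout zeroes individual per-timestep activations at training time.
    Dropout(0.5),
    # Flatten 60 * 512 = 30,720 features feeding the single output unit.
    Flatten(),
    Dense(1),
])
model.summary()
```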