Solved – How to avoid local minima in a recurrent neural network

I have trained a recurrent neural network in TensorFlow, so there is no need to initialize my parameters; that is done automatically inside tf.dynamic_rnn. When I train my model, I don't get correct predictions every time. I mean that when I run my code, I sometimes get good predictions, but sometimes my model fails. I think this is because gradient descent sometimes gets stuck in a local minimum. What can I do? I used tf.train.AdamOptimizer(0.01).minimize(loss). What is the best optimizer for avoiding local minima?

When a neural network gets stuck in a local minimum, the problem is usually the activation function. Which one works best? That changes from project to project. Most of the time, we find the best activation function by trial and error… welcome to the world of machine learning!
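The "sometimes good, sometimes bad" behaviour the asker describes is exactly what happens when different random initializations land in different minima of the same loss. Here is a minimal pure-Python sketch (no TensorFlow; a made-up 1-D toy loss, not the asker's model) showing that the same optimizer with the same learning rate can end up in the global or a local minimum depending only on where it starts:

```python
def loss(x):
    # Toy double-well loss: global minimum near x ≈ -1.47,
    # shallower local minimum near x ≈ 1.35, barrier near x ≈ 0.13.
    return x**4 - 4 * x**2 + x

def grad(x):
    # Analytic gradient of the toy loss above.
    return 4 * x**3 - 8 * x + 1

def gradient_descent(x0, lr=0.01, steps=500):
    # Plain gradient descent from starting point x0.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Same optimizer, same learning rate -- only the start differs,
# yet the runs converge to different minima:
for x0 in (-2.0, 0.5, 2.0):
    x = gradient_descent(x0)
    print(f"start {x0:+.1f} -> x = {x:+.3f}, loss = {loss(x):+.3f}")
```

Starts on the left of the barrier roll into the deep (global) well; starts on the right settle in the shallow (local) well. In a real network the loss surface is high-dimensional, but the same effect explains run-to-run variance; common mitigations are running several restarts with different seeds and keeping the best model, or lowering the learning rate.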

I seriously over-complicated this in the beginning. I researched for hours which activation function is "best" and when to use each one. The answer was so simple that I couldn't believe it. The answer I received from many, many modelers, without hesitation or variation, was "whichever one works best."
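The same "try it and see" advice applies to other hyperparameters, including the learning rate the asker hard-coded as 0.01. A tiny pure-Python sweep (toy quadratic loss, arbitrary example rates; nothing here comes from the asker's code) shows how trial and error separates too slow, good, and divergent settings:

```python
def grad(x):
    # Gradient of the toy loss f(x) = x**2.
    return 2 * x

def final_x(lr, steps=50, x0=1.0):
    # Run plain gradient descent and return where it ends up.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Sweep a few candidate learning rates and inspect the results:
for lr in (0.001, 0.1, 1.5):
    print(f"lr = {lr}: final x = {final_x(lr):.6g}")
```

With lr = 0.001 the run barely moves, with lr = 0.1 it converges cleanly to the minimum at 0, and with lr = 1.5 it diverges. In practice you would run the same kind of sweep over activation functions, optimizers, and learning rates on a validation set.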

Hope this helps.
