Solved – one of the most successful applications using LSTM (Long Short-Term Memory) for a time series dataset

I have started studying LSTM, and as usual I am also very interested in the best results achieved with LSTM on time-series datasets. Knowing those gives a better sense of the motivation behind such a model. 🙂

One of the most successful applications of LSTM (Long Short-Term Memory) to time-series data is speech recognition. Over the last few years, all major speech recognition engines (Dragon Professional Individual, Amazon Alexa, Baidu speech recognition, Microsoft speech recognition, Google's, etc.) have switched to neural networks, most of them based on LSTMs or close variants.
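As a refresher on what the model itself computes, here is a minimal single-unit LSTM step in pure Python. This is only an illustrative sketch: real systems use vector-valued states and learned weights, and all the weight names and values below are made up.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step for scalar input and state (illustrative only).

    W is a dict of made-up scalar weights for the input, forget, and
    output gates and the candidate cell state.
    """
    i = sigmoid(W["wi"] * x + W["ui"] * h_prev + W["bi"])    # input gate
    f = sigmoid(W["wf"] * x + W["uf"] * h_prev + W["bf"])    # forget gate
    o = sigmoid(W["wo"] * x + W["uo"] * h_prev + W["bo"])    # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"])  # candidate state
    c = f * c_prev + i * g       # new cell state ("long-term" memory)
    h = o * math.tanh(c)         # new hidden state (the unit's output)
    return h, c

# Run a toy time series through the cell, carrying state between steps.
W = {k: 0.5 for k in ["wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg"]}
h, c = 0.0, 0.0
for x in [0.1, 0.2, 0.3]:
    h, c = lstm_step(x, h, c, W)
```

The gated cell state `c` is what lets the network retain information over long time spans, which is exactly why it suits sequential data like audio frames in speech recognition.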

Example from {1}:

When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.

Example from {2}:

As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets.


References:

{1} Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton. "Speech recognition with deep recurrent neural networks." ICASSP 2013.
{2} Amodei, Dario, et al. "Deep Speech 2: End-to-end speech recognition in English and Mandarin." ICML 2016.
