In the previous blog post, we learnt why we cannot use regular backpropagation to train a Recurrent Neural Network (RNN), and how backpropagation through time addresses this. The next step is to understand how this training actually works. Does the unrolling strategy work in practice? If we can just unroll an RNN into a feedforward neural network, then what's so special about the RNN in the first place? Let's see how we tackle these issues. Continue reading
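To make the unrolling idea concrete, here is a minimal sketch of a vanilla RNN forward pass unrolled over time, written in NumPy. The dimensions, weight names (`W_xh`, `W_hh`, `b_h`), and tanh activation are illustrative assumptions, not the blog's exact model. The key point it demonstrates: the unrolled network reuses one set of weights at every time step, which is what distinguishes it from an ordinary feedforward network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed for this sketch)
input_size, hidden_size, seq_len = 3, 4, 5

# One shared set of weights, reused at every time step --
# this weight sharing is exactly what unrolling preserves.
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
b_h = np.zeros(hidden_size)

xs = rng.standard_normal((seq_len, input_size))  # an input sequence
h = np.zeros(hidden_size)                        # initial hidden state

hidden_states = []
for t in range(seq_len):
    # Same W_xh, W_hh, b_h at each step: the unrolled RNN is a deep
    # feedforward network whose "layers" all share one set of weights.
    h = np.tanh(W_xh @ xs[t] + W_hh @ h + b_h)
    hidden_states.append(h)

print(len(hidden_states), hidden_states[-1].shape)
```

Because every unrolled step points at the same weight matrices, the gradients from all time steps accumulate into those shared weights during training, which is why the unrolled network cannot simply be treated as an ordinary deep feedforward net.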

# Tag Archives: Recurrent Neural Networks

# Deep Learning For Sequential Data – Part IV: Training Recurrent Neural Networks

In the previous blog post, we learnt how Recurrent Neural Networks (RNNs) can be used to build deep learning models for sequential data. Building a deep learning model involves many steps, and training is an important one. We need to be able to train a model in a robust way in order to use it for inference. The training process needs to be tractable, and it should converge in a reasonable amount of time. So how do we train RNNs? Can we just use the regular techniques that are used for feedforward neural networks? Continue reading

# Deep Learning For Sequential Data – Part III: What Are Recurrent Neural Networks

In the previous two blog posts, we discussed why Hidden Markov Models and Feedforward Neural Networks are restrictive. If we want to build a good sequential data model, we should give our learning model more freedom to understand the underlying patterns. This is where Recurrent Neural Networks (RNNs) come into the picture. One of the biggest restrictions of feedforward neural networks is that they force us to operate on fixed-size input data. RNNs know how to operate on sequences of data, which is exciting because it opens up a lot of possibilities! What are RNNs? How do we construct them? Continue reading
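The claim that RNNs "operate on sequences" can be illustrated with a short sketch. Assuming a vanilla RNN cell with hypothetical weight names, the same cell folds sequences of any length into one fixed-size hidden state, something a fixed-input feedforward network cannot do without padding or truncation.

```python
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size = 3, 4

# Hypothetical weights for a vanilla RNN cell (names are illustrative)
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
b_h = np.zeros(hidden_size)

def rnn_encode(sequence):
    """Fold an arbitrary-length sequence into one fixed-size hidden state."""
    h = np.zeros(hidden_size)
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h

# The same cell handles sequences of different lengths.
short_seq = rng.standard_normal((2, input_size))
long_seq = rng.standard_normal((9, input_size))
print(rnn_encode(short_seq).shape, rnn_encode(long_seq).shape)
```

Both calls return a hidden state of the same fixed size, regardless of how long the input sequence was; that summary vector is what downstream layers then consume.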