Sequences occur everywhere in our daily life. Examples include sensor data, stock market quotes, speech signals, and many more. A sequence is a collection of elements where each element is indexed. Repetitions are allowed, which means any element can reappear in a given sequence. If we look closely, we can see that sequences are rich in information. In theory, we can design sequences with amazing characteristics and study them. This allows us to approximate real-world processes using these sequences so that we can estimate what’s going to happen in the future. A Cauchy sequence is one such sequence that’s fundamental to a lot of fields. Let’s dig deeper and see why it’s relevant, shall we? Continue reading “Cauchy Sequences In The Real World”
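As a quick taste of the definition (the full post goes deeper): a sequence is Cauchy if its elements get arbitrarily close to *each other* far enough out. Here’s a minimal numerical sketch of that criterion for the sequence 1/n; the function name and the specific tolerances are just illustrative choices:

```python
# A sequence (a_n) is Cauchy if for every eps > 0 there is an N such that
# |a_m - a_n| < eps for all m, n >= N. We check that numerically for
# a_n = 1/n, which converges to 0 and is therefore Cauchy.

def a(n):
    return 1.0 / n

def is_cauchy_up_to(eps, N, horizon):
    """Check |a(m) - a(n)| < eps for all N <= m, n <= horizon."""
    return all(abs(a(m) - a(n)) < eps
               for m in range(N, horizon + 1)
               for n in range(N, horizon + 1))

# Starting too early, the terms are still far apart; far enough out,
# they huddle within eps = 0.01 of each other.
print(is_cauchy_up_to(0.01, 2, 400))    # False
print(is_cauchy_up_to(0.01, 201, 400))  # True
```

Of course, a finite check like this can only illustrate the definition, not prove it; the actual proof uses the bound |1/m − 1/n| < 1/N for m, n ≥ N.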

# Tag: Sequential Data

# Measuring The Memory Of Time Series Data

Time series data has memory. It remembers what happened in the past and avenges any wrongdoings! Can you believe it? Okay, the avenging part may not be true, but it definitely remembers the past. The “memory” refers to how strongly the past can influence the future in a given time series variable. If it has a strong memory, then we know that analyzing the past would be really useful to us because it can tell us what’s going to happen in the future. If you need a quick refresher, you can check out my blog post where I talked about memory in time series data. We have a high level understanding of how we can classify time series data into short memory and long memory, but how do we actually measure the memory? Continue reading “Measuring The Memory Of Time Series Data”
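As a first taste of measurement (the post itself may use a different estimator), one simple probe is the sample autocorrelation: it already separates a memoryless series from one with strong memory. A minimal sketch, with the series and the lag choice purely illustrative:

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of series x at a given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(2000)  # no memory: each value is independent
random_walk = np.cumsum(white_noise)     # strong memory: every step carries the past

print(autocorr(white_noise, 1))  # near 0
print(autocorr(random_walk, 1))  # near 1
```

The lag-1 autocorrelation of white noise hovers around zero, while a random walk stays close to one; more refined measures (such as how fast these correlations decay with the lag) are what distinguish short memory from long memory.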

# Deep Learning For Sequential Data – Part IV: Training Recurrent Neural Networks

In the previous blog post, we learnt how Recurrent Neural Networks (RNNs) can be used to build deep learning models for sequential data. Building a deep learning model involves many steps, and the training process is an important one. We should be able to train a model in a robust way in order to use it for inference. The training process needs to be trackable and it should converge in a reasonable amount of time. So how do we train RNNs? Can we just use the regular techniques that are used for feedforward neural networks? Continue reading “Deep Learning For Sequential Data – Part IV: Training Recurrent Neural Networks”
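The standard answer is backpropagation through time (BPTT): unroll the recurrence, then accumulate gradients for the *shared* weights at every time step. Here is a toy numpy sketch of one BPTT step on a tiny RNN; the sizes, the single scalar output, and the squared-error loss are all illustrative choices, not a full training loop:

```python
import numpy as np

rng = np.random.default_rng(1)
H, X, T = 4, 3, 5                        # hidden size, input size, sequence length
Wxh = rng.standard_normal((H, X)) * 0.1  # input -> hidden
Whh = rng.standard_normal((H, H)) * 0.1  # hidden -> hidden (the recurrence)
Why = rng.standard_normal((1, H)) * 0.1  # hidden -> output

xs = rng.standard_normal((T, X))         # one toy input sequence
target = 1.0

def forward(Wxh, Whh, Why):
    hs = [np.zeros(H)]
    for t in range(T):
        hs.append(np.tanh(Wxh @ xs[t] + Whh @ hs[-1]))
    y = (Why @ hs[-1]).item()            # read out from the final hidden state
    return hs, y, 0.5 * (y - target) ** 2

hs, y, loss0 = forward(Wxh, Whh, Why)

# Backward pass: walk the unrolled graph from t = T back to t = 1,
# adding each step's contribution to the gradients of the shared weights.
dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
dy = y - target
dWhy += dy * hs[-1][None, :]
dh = (Why.T * dy).ravel()                # gradient flowing into h_T
for t in reversed(range(T)):
    dpre = (1.0 - hs[t + 1] ** 2) * dh   # back through tanh
    dWxh += np.outer(dpre, xs[t])
    dWhh += np.outer(dpre, hs[t])
    dh = Whh.T @ dpre                    # pass the gradient back to h_{t-1}

lr = 0.1                                 # one plain gradient-descent step
Wxh -= lr * dWxh; Whh -= lr * dWhh; Why -= lr * dWhy
_, _, loss1 = forward(Wxh, Whh, Why)
print(loss1 < loss0)                     # expect a smaller loss after one step
```

The repeated multiplication by `Whh.T` inside the backward loop is exactly where the well-known vanishing and exploding gradient problems come from, which is why training RNNs needs more care than training feedforward networks.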

# Deep Learning For Sequential Data – Part III: What Are Recurrent Neural Networks

In the previous two blog posts, we discussed why Hidden Markov Models and Feedforward Neural Networks are restrictive. If we want to build a good sequential data model, we should give more freedom to our learning model to understand the underlying patterns. This is where Recurrent Neural Networks (RNNs) come into the picture. One of the biggest restrictions of feedforward neural networks is that they force us to operate on fixed-size input data. RNNs know how to operate on sequences of data, which is exciting because it opens up a lot of possibilities! What are RNNs? How do we construct them? Continue reading “Deep Learning For Sequential Data – Part III: What Are Recurrent Neural Networks”
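As a preview of the construction: an RNN applies the *same* cell at every time step, carrying a hidden state forward via h_t = tanh(W x_t + U h_{t−1} + b). Because the weights don’t depend on the step index, the same network handles sequences of any length. A minimal numpy sketch (sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, n_in = 4, 3
W = rng.standard_normal((hidden, n_in)) * 0.1    # input -> hidden
U = rng.standard_normal((hidden, hidden)) * 0.1  # hidden -> hidden (the recurrence)
b = np.zeros(hidden)

def run_rnn(xs):
    """Apply the same cell at every time step; works for any sequence length."""
    h = np.zeros(hidden)
    for x in xs:
        h = np.tanh(W @ x + U @ h + b)
    return h

short_seq = rng.standard_normal((2, n_in))
long_seq = rng.standard_normal((9, n_in))
print(run_rnn(short_seq).shape, run_rnn(long_seq).shape)  # same: (4,) (4,)
```

A 2-step sequence and a 9-step sequence both come out as the same fixed-size hidden state, with no padding and no per-length weights; that’s the freedom feedforward networks don’t give us.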

# Deep Learning For Sequential Data – Part II: Constraints Of Traditional Approaches

In the previous blog post, we discussed the nature of sequential data and why we need a robust separate modeling technique to analyze that data. Traditionally, people have been using Hidden Markov Models (HMMs) to analyze sequential data, so we will center the discussion around HMMs in this blog post. HMMs have been applied to many tasks such as speech recognition, gesture recognition, part-of-speech tagging, and so on. But HMMs place a lot of restrictions on how we can model our data. HMMs are definitely better than using classical machine learning techniques, but they don’t fully cover the needs of modern data analysis. This is because of the constraints that are used to build HMMs. What are those constraints? Continue reading “Deep Learning For Sequential Data – Part II: Constraints Of Traditional Approaches”
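To make the central constraint concrete before the full post: an HMM’s hidden state at time t depends only on the state at t−1 (the first-order Markov assumption), and each observation depends only on the current hidden state. A toy sampler makes both assumptions visible in code; the two-state transition matrix `A` and emission matrix `B` are made-up numbers for illustration:

```python
import numpy as np

A = np.array([[0.9, 0.1],   # state-transition probabilities: P(next state | state)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],   # emission probabilities: P(observation | state)
              [0.1, 0.9]])
rng = np.random.default_rng(0)

def sample_hmm(T, s0=0):
    """Sample T steps; note how little each draw is allowed to look at."""
    states, obs = [s0], []
    for _ in range(T):
        s = states[-1]
        obs.append(int(rng.choice(2, p=B[s])))      # depends only on the current state
        states.append(int(rng.choice(2, p=A[s])))   # depends only on the previous state
    return states[:-1], obs

states, obs = sample_hmm(10)
print(len(states), len(obs))  # 10 10
```

Everything before t−1 is invisible to the model by construction; that is precisely the kind of restriction this post examines.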

# Deep Learning For Sequential Data – Part I: Why Do We Need It

Most of the current research on deep learning is focused on images. Deep learning is being actively applied to many areas, but image recognition is definitely generating a lot of buzz. Deep neural networks are being used for image classification tasks and they are able to outperform all the other approaches by a big margin. The networks that are used here are traditional feedforward neural networks that learn how to classify data by generating the optimal feature representation. These neural networks are severely limited when it comes to sequential data. Time series data is perhaps the most popular form of sequential data. Why can’t we use feedforward neural networks to analyze sequential data? Continue reading “Deep Learning For Sequential Data – Part I: Why Do We Need It”
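Part of the answer is mechanical: a feedforward layer bakes the input size into its weight matrix, so a sequence of a different length literally cannot be fed in without padding or truncation. A tiny numpy sketch (the layer size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 5))     # this layer accepts exactly 5 inputs, forever

def feedforward_layer(x):
    return np.maximum(W @ x, 0.0)   # ReLU(Wx)

ok = feedforward_layer(np.ones(5))  # fine: the length matches the weights
try:
    feedforward_layer(np.ones(7))   # a longer "sequence" breaks the layer
    failed = False
except ValueError:
    failed = True
print(ok.shape, failed)  # (8,) True
```

The shape mismatch is the shallow symptom; the deeper issue, which the series goes on to discuss, is that feedforward networks have no state to carry information from one time step to the next.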