Estimating The Predictability Of Time Series Data – Part I

Time series data refers to a sequence of measurements made over time. The frequency of these measurements is usually fixed, say once every second or once every hour. We encounter time series data in a variety of real-world scenarios, including stock market data, sensor data, speech data, and so on. People like to build forecasting models for time series data, which is particularly relevant when modeling data in the world of the Internet of Things (IoT). Based on past data, they want to predict what’s going to happen in the future. One of the most important questions is whether we can predict anything in the first place. How do we determine that? How do we check if there are underlying patterns in the time series data?   Continue reading
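
As a quick taste of the idea (a sketch of my own, not necessarily the approach the full post takes): one simple sign that a series is predictable is that it stays correlated with its own past. The numpy snippet below compares the sample autocorrelation of a noisy sine wave with that of pure noise; the `autocorrelation` helper is an illustrative function, not part of any library.

```python
import numpy as np

def autocorrelation(series, max_lag=20):
    """Sample autocorrelation of a 1-D series for lags 1..max_lag."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:-lag], x[lag:]) / var for lag in range(1, max_lag + 1)])

# A noisy sine wave has visible structure; pure noise does not.
t = np.arange(500)
structured = np.sin(2 * np.pi * t / 50) + 0.3 * np.random.randn(500)
noise = np.random.randn(500)

print(np.round(autocorrelation(structured, 5), 2))  # large values at small lags
print(np.round(autocorrelation(noise, 5), 2))       # values near zero
```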

Understanding The Industrial IoT Technology Stack

The Internet of Things (IoT) has emerged as one of the hottest trends in the technology world. It has the potential to radically change the way we experience life. It will have a particularly huge impact on the industrial world, where we have to deal with massive machines, buildings, and open fields. Industrial technologies have a direct impact on some of the most pressing problems facing humanity, like water shortage, energy consumption, infrastructure management, and so on. When we apply IoT methodologies to the industrial world, it is called Industrial IoT. There has been a lot of discussion as to what exactly it is. Is it a technology? Is it a collection of things? More importantly, there has been a lot of misinformation around it. Let’s go ahead and dissect it, shall we?   Continue reading

Deep Learning For Sequential Data – Part V: Handling Long Term Temporal Dependencies

In the previous blog post, we learnt why we cannot use regular backpropagation to train a Recurrent Neural Network (RNN). We discussed how we can use backpropagation through time to train an RNN. The next step is to understand what happens when we actually train the RNN this way. Does the unrolling strategy work in practice? If we can just unroll an RNN and turn it into a feedforward neural network, then what’s so special about the RNN in the first place? Let’s see how we tackle these issues.   Continue reading
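
To get an intuition for why long-range dependencies are hard, note that a gradient flowing back through the unrolled network gets multiplied by the recurrent weight matrix once per time step. The sketch below is my own illustration (a simplified linear view of the recurrence, ignoring the activation derivative), not code from the post: depending on the scale of the recurrent weights, the repeated product either collapses toward zero or blows up.

```python
import numpy as np

np.random.seed(0)
hidden_size = 16

# Gradients flowing back through time are repeatedly multiplied by W_h
# (times the activation derivative, ignored here for simplicity).
for scale in (0.5, 1.5):
    W_h = scale * np.random.randn(hidden_size, hidden_size) / np.sqrt(hidden_size)
    grad = np.ones(hidden_size)
    for step in range(50):
        grad = W_h.T @ grad
    print(f"scale={scale}: gradient norm after 50 steps = {np.linalg.norm(grad):.2e}")
# Small recurrent weights -> the norm collapses toward zero (vanishing gradients);
# large ones -> it blows up (exploding gradients).
```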

Deep Learning For Sequential Data – Part IV: Training Recurrent Neural Networks

In the previous blog post, we learnt how Recurrent Neural Networks (RNNs) can be used to build deep learning models for sequential data. Building a deep learning model involves many steps, and training is an important one. We should be able to train a model in a robust way in order to use it for inference. The training process needs to be tractable, and it should converge in a reasonable amount of time. So how do we train RNNs? Can we just use the regular techniques that are used for feedforward neural networks?   Continue reading
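
As a rough picture of what the training procedure has to deal with (a sketch of my own, not code from the post): the same weight matrices appear at every unrolled time step, so the loss is a sum of per-step terms and the gradients for those shared weights have to be accumulated across all of the steps, which is what backpropagation through time does.

```python
import numpy as np

np.random.seed(1)
input_size, hidden_size, steps = 3, 8, 5

# The same parameters appear at every unrolled step, so their gradients are
# the sum of the contributions from each step (backpropagation through time).
W_x = np.random.randn(hidden_size, input_size) * 0.1
W_h = np.random.randn(hidden_size, hidden_size) * 0.1
W_y = np.random.randn(1, hidden_size) * 0.1

inputs = np.random.randn(steps, input_size)
targets = np.random.randn(steps)

h = np.zeros(hidden_size)
loss = 0.0
for x_t, y_t in zip(inputs, targets):    # unroll the network over time
    h = np.tanh(W_x @ x_t + W_h @ h)     # shared weights reused at every step
    y_hat = W_y @ h
    loss += 0.5 * (y_hat[0] - y_t) ** 2  # per-step loss terms accumulate
print(f"total loss over the unrolled sequence: {loss:.3f}")
```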

Deep Learning For Sequential Data – Part III: What Are Recurrent Neural Networks

In the previous two blog posts, we discussed why Hidden Markov Models and Feedforward Neural Networks are restrictive. If we want to build a good sequential data model, we should give our learning model more freedom to understand the underlying patterns. This is where Recurrent Neural Networks (RNNs) come into the picture. One of the biggest restrictions of feedforward neural networks is that they force us to operate on fixed-size input data. RNNs know how to operate on sequences of data, which is exciting because it opens up a lot of possibilities! What are RNNs? How do we construct them?   Continue reading
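
For a concrete picture, here is a minimal RNN cell sketched in plain numpy (an illustration of my own, not code from the post): one fixed set of parameters is applied step by step, so the same network happily consumes sequences of any length.

```python
import numpy as np

np.random.seed(0)
input_size, hidden_size = 4, 6

# One fixed set of parameters, reused at every time step.
W_x = np.random.randn(hidden_size, input_size) * 0.1
W_h = np.random.randn(hidden_size, hidden_size) * 0.1
b = np.zeros(hidden_size)

def rnn_forward(sequence):
    """Run the recurrence h_t = tanh(W_x x_t + W_h h_{t-1} + b) over a sequence."""
    h = np.zeros(hidden_size)
    for x_t in sequence:
        h = np.tanh(W_x @ x_t + W_h @ h + b)
    return h  # final hidden state summarizes the whole sequence

# The same network handles sequences of different lengths.
short_seq = np.random.randn(3, input_size)
long_seq = np.random.randn(12, input_size)
print(rnn_forward(short_seq).shape, rnn_forward(long_seq).shape)
```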

Deep Learning For Sequential Data – Part II: Constraints Of Traditional Approaches

In the previous blog post, we discussed the nature of sequential data and why we need a separate, robust modeling technique to analyze it. Traditionally, people have used Hidden Markov Models (HMMs) to analyze sequential data, so we will center the discussion around HMMs in this blog post. HMMs have been applied to many tasks such as speech recognition, gesture recognition, part-of-speech tagging, and so on. But HMMs place a lot of restrictions on how we can model our data. HMMs are definitely better than classical machine learning techniques for this purpose, but they don’t fully cover the needs of modern data analysis. This is because of the constraints that are built into HMMs. What are those constraints?   Continue reading
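
To make those constraints concrete, here is a toy HMM sketched in numpy (an illustration of my own, not code from the post). Everything the model knows lives in a transition table and an emission table, and that is exactly where the restrictions come from: the next state depends only on the current state, and each observation depends only on the state that emitted it.

```python
import numpy as np

np.random.seed(0)

# A toy 2-state HMM: the constraints are baked into these two tables.
# Transition: the next state depends ONLY on the current state (Markov assumption).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Emission: each observation depends ONLY on the current hidden state.
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])   # initial state distribution

def sample(length):
    state = np.random.choice(2, p=pi)
    observations = []
    for _ in range(length):
        observations.append(np.random.choice(3, p=B[state]))
        state = np.random.choice(2, p=A[state])
    return observations

print(sample(10))
# Anything further back than one step, or any richer feature of the history,
# has no way to influence the model.
```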

Deep Learning For Sequential Data – Part I: Why Do We Need It

Most of the current research on deep learning is focused on images. Deep learning is being actively applied to many areas, but image recognition is definitely generating a lot of buzz. Deep neural networks are being used for image classification tasks, and they are able to outperform all the other approaches by a big margin. The networks used here are traditional feedforward neural networks that learn how to classify data by generating the optimal feature representation. These neural networks are severely limited when it comes to sequential data. Time series data is perhaps the most popular form of sequential data. Why can’t we use feedforward neural networks to analyze sequential data?   Continue reading
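
As a small illustration of that limitation (a sketch of my own, not code from the post): the weight shapes of a feedforward network hard-code the input dimension, so a sequence that is even one step longer than expected simply doesn’t fit.

```python
import numpy as np

np.random.seed(0)

# A one-hidden-layer feedforward network: the weight shapes hard-code the
# input dimension, so every example must be exactly `input_size` long.
input_size, hidden_size, output_size = 10, 16, 2
W1 = np.random.randn(hidden_size, input_size) * 0.1
W2 = np.random.randn(output_size, hidden_size) * 0.1

def feedforward(x):
    return W2 @ np.tanh(W1 @ x)

print(feedforward(np.random.randn(10)))     # works: matches input_size
try:
    feedforward(np.random.randn(13))        # a longer sequence breaks the shapes
except ValueError as err:
    print("shape mismatch:", err)
```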