Before we start, I want to clarify that this post is not about treasure hunting! As you read along, the title will start making sense. In one of my previous blog posts, I discussed speech recognition and a few ways to model the problem. I have also talked about how we can use machine learning to solve various real-life problems. A lot of the time, we need to model temporal events, i.e. things that unfold over a period of time. When we know everything about a system, we can simply predict what's going to happen next. But what if we don't know everything about it? What if we can only see the effects of that system? Can we learn about a system even though we cannot directly observe what's happening inside? Continue reading “Uncovering The Hidden Treasure”
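To give a small taste of what "seeing only the effects" means in practice, here is a minimal sketch of a hidden Markov model, the classic tool for this setup. The scenario and all the probabilities below are made up purely for illustration (they are not from the post): we never observe the weather directly, only whether a friend carries an umbrella, and the forward algorithm tells us how likely a sequence of such observations is under the model.

```python
# A tiny hidden Markov model: hidden weather states, observable umbrella
# behavior. All probabilities are hypothetical, chosen for illustration.

states = ["rainy", "sunny"]

start_p = {"rainy": 0.5, "sunny": 0.5}          # initial state distribution
trans_p = {                                      # hidden state transitions
    "rainy": {"rainy": 0.7, "sunny": 0.3},
    "sunny": {"rainy": 0.4, "sunny": 0.6},
}
emit_p = {                                       # observation probabilities
    "rainy": {"umbrella": 0.9, "no umbrella": 0.1},
    "sunny": {"umbrella": 0.2, "no umbrella": 0.8},
}

def forward(observations):
    """Probability of the observation sequence under the model,
    computed with the forward algorithm (summing over all possible
    hidden state sequences without enumerating them)."""
    # Initialize with the first observation.
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    # Recursively fold in each subsequent observation.
    for obs in observations[1:]:
        alpha = {
            s: sum(alpha[prev] * trans_p[prev][s] for prev in states)
               * emit_p[s][obs]
            for s in states
        }
    return sum(alpha.values())

print(forward(["umbrella", "umbrella", "no umbrella"]))
```

The point is that even though the weather (the hidden state) is never observed, the model lets us reason about it from the umbrella sightings alone, which is exactly the question the paragraph above asks.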
I Came. I Heard. I Understood.
I'm pretty sure all of us have encountered a speech recognition system at some point in our lives. Speech recognition is used in smartphones, automated customer service, and many other high-end gadgets, and it's being increasingly integrated into devices to provide a better hands-free user experience. Apple came up with Siri for the iPhone, and most Android phones have speech recognition enabled in some form or another. But how does it actually work? Many people are frustrated by the quality of speech recognition systems, and I don't blame them; part of that frustration comes from knowing surprisingly little about how machines actually understand our words. I have worked on speech recognition in the past, so I wanted to take a stab at explaining what happens under the hood. Continue reading “I Came. I Heard. I Understood.”