# What Is Relative Entropy?

In this blog post, we will be using a bit of background from my previous blog post. If you are familiar with the basics of entropy coding, you should be fine. If not, you may want to quickly read through my previous blog post. Coming to the topic at hand, let’s continue our discussion on entropy coding. Let’s say we have a stream of English letters coming in, and you want to store them in the best possible way by consuming the least amount of space. So you go ahead and build your nifty entropy coder to take care of all this. But what if you don’t have access to all the data? How do you know which letter appears most frequently if you can’t see the full stream? The problem now is that you cannot know for sure whether you have chosen the best possible representation. Since you cannot wait forever, you just wait for the first ‘n’ letters and build your entropy coder, hoping that the rest of the data will adhere to this distribution. Do we end up suffering in terms of compression by doing this? How do we measure the loss in quality? Continue reading “What Is Relative Entropy?”
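The penalty for coding with an estimated distribution instead of the true one is exactly the relative entropy, also called the KL divergence. Here is a minimal sketch (the distributions are made up for illustration) of how many extra bits per symbol the mismatch costs:

```python
import math

def relative_entropy(p, q):
    """KL divergence D(p||q) in bits: the extra bits per symbol paid
    for coding data distributed as p with a code built for q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# True distribution of four symbols vs. the one estimated from the
# first 'n' samples (here the estimate came out uniform).
p_true = [0.5, 0.25, 0.125, 0.125]
q_est = [0.25, 0.25, 0.25, 0.25]

extra_bits = relative_entropy(p_true, q_est)  # penalty per symbol
```

With these numbers the mismatch costs 0.25 extra bits per symbol on top of the entropy of `p_true`, and the penalty drops to zero when the estimate matches the true distribution exactly.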

# Tag: compression

# What Is Entropy Coding?

Entropy coding appears everywhere in modern digital systems. It is a fundamental building block of data compression, and data compression is needed pretty much everywhere, especially for the internet, video, audio, communication, etc. Let’s consider the following scenario. You have a stream of English letters coming in and you want to store them in the best possible way by consuming the least amount of space. For the sake of discussion, let’s assume that they are all uppercase. Bear in mind that you have an empty machine which doesn’t know anything, and it understands only binary symbols i.e. 0 and 1. It will do exactly what you tell it to do, and it will need data in binary format. So what do we do here? One way would be to use numbers to represent these letters, right? Since there are 26 letters in the English alphabet, we can map them to numbers ranging from 0 to 25, and then convert those numbers into binary form. The biggest number, 25, needs 5 bits to be represented in binary. So considering the worst case, we can say that we need 5 bits to represent every letter. If we have to store 100 letters, we will need 500 bits. But is that the best we can do? Are we perhaps not exploiting our data to the fullest possible extent? Continue reading “What Is Entropy Coding?”
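Shannon entropy tells us how far below 5 bits per letter we could go. A small sketch (the letter frequencies are invented) comparing the entropy of a skewed stream against the fixed-length code:

```python
import math
from collections import Counter

def entropy_bits(text):
    """Shannon entropy, in bits per symbol, of the empirical distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A skewed stream: 'E' dominates, so far fewer than 5 bits/letter suffice.
stream = "E" * 70 + "T" * 20 + "Z" * 10
h = entropy_bits(stream)  # lower bound on average bits per letter
fixed = 5                 # the naive fixed-length code from above
```

For this stream the entropy works out to about 1.16 bits per letter, so a good entropy coder could beat the fixed 5-bit code by more than a factor of four.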

# Reading JPEG Into A Byte Array

Let’s say you are working with images for your project. A lot of times, when you have to work across multiple platforms, the encoding doesn’t remain the same. In these scenarios, you cannot process images directly by treating them like 2D matrices. One of the most common image formats you will come across is JPEG. If you are working on a single platform using a library like OpenCV, you can directly read JPEG files into 2D data structures. If not, you will have to read the file into a byte array, process it, and then encode it back. How do we do that? Continue reading “Reading JPEG Into A Byte Array”
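As a minimal sketch of the read-process-write cycle (the helper names are my own), the file can be loaded into a byte array and sanity-checked against the JPEG start-of-image and end-of-image markers:

```python
def load_jpeg_bytes(path):
    """Read a JPEG file into a mutable byte array, checking the
    start-of-image (FF D8) and end-of-image (FF D9) markers."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8" or data[-2:] != b"\xff\xd9":
        raise ValueError("not a JPEG stream")
    return bytearray(data)

def save_jpeg_bytes(path, data):
    """Write the (possibly modified) bytes back out as a file."""
    with open(path, "wb") as f:
        f.write(bytes(data))
```

The `bytearray` is mutable, so you can inspect or tweak individual bytes before writing the stream back out unchanged in format.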

# Digital Watermarking

Let’s say you want to verify the authenticity of a signal. The signal can take any form: an image, audio, video, or any other kind of bit stream. By now, everybody has heard the term “watermark” being used in the general sense. The most common example would be currency notes, where watermarks are embedded to verify the authenticity of the notes. But how do we achieve that with more complicated signals? As things move into the virtual world, where the threats are elevated to a much higher and more abstract level, we need a way to verify the authenticity of different forms of digital signals. How do we do it? Continue reading “Digital Watermarking”
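One simple scheme, sketched here as a toy fragile watermark rather than any standard algorithm, is to hash the image content and hide the hash in the least significant bits; any later tampering then breaks verification:

```python
import hashlib

def embed_fragile_watermark(pixels):
    """Fragile-watermark sketch on a flat list of 8-bit pixel values.
    The LSB plane is cleared, the remaining content is hashed, and the
    hash bits are written into the LSBs."""
    base = [p & 0xFE for p in pixels]            # clear the LSB plane
    digest = hashlib.sha256(bytes(base)).digest()
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    return [p | bits[i % len(bits)] for i, p in enumerate(base)]

def verify_fragile_watermark(pixels):
    """Recompute the hash of the non-LSB content and compare it
    against the bits stored in the LSBs."""
    base = [p & 0xFE for p in pixels]
    digest = hashlib.sha256(bytes(base)).digest()
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    return all((p & 1) == bits[i % len(bits)] for i, p in enumerate(pixels))
```

Because the watermark depends on the signal itself, it travels with the data; real systems use far more robust embedding, but the verify-by-recomputing idea is the same.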

# Image Steganography

As discussed in my previous post, steganography is the art of hiding the fact that communication is taking place. We achieve this by hiding the original information inside other information, known as carrier files. Many different carrier file formats can be used, but digital images are the most popular because of how frequently they occur on the internet. For hiding secret information in images, there exists a large variety of steganographic techniques, some more complex than others, and all of them with their respective strong and weak points. Different applications have different requirements for the steganography technique used. For example, some applications may require absolute invisibility of the secret information, while others require a larger secret message to be hidden. How do we achieve this? How robust is it? Continue reading “Image Steganography”
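The classic entry point is LSB steganography: each bit of the message replaces the least significant bit of a carrier pixel, so no pixel value changes by more than 1. A sketch on a flat list of 8-bit values:

```python
def hide_message(pixels, message):
    """LSB steganography sketch: store each bit of the message in the
    least significant bit of successive 8-bit pixel values."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for this message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def reveal_message(pixels, length):
    """Recover 'length' bytes hidden by hide_message."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode()
```

This illustrates the invisibility/capacity trade-off mentioned above: one hidden bit per pixel is visually undetectable but limits the message size, and it is fragile — re-encoding the carrier destroys the hidden data.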

# The Ramifications Of H.265

The International Telecommunication Union (ITU) recently approved the next generation video format known as H.265, the successor to the current H.264 standard. H.265, also known as High Efficiency Video Coding (HEVC), is supposed to be twice as efficient as H.264 in terms of compression. H.265 relies on the fact that the processing power of our devices keeps increasing, and it spends more of that power to achieve better compression. Now how will this affect our lives? Is this just an algorithmic improvement or will it have a tangible impact? Continue reading “The Ramifications Of H.265”

# Principal Component Analysis

Principal Component Analysis (PCA) is one of the most useful tools in the field of pattern recognition. Let’s say you are making a list of people and collecting information about their physical attributes. Some of the more common attributes include height, weight, chest, waist and biceps. If you store 5 attributes per person, it is equivalent to storing a 5-dimensional feature vector. If you generalize it to ‘n’ different attributes, you are constructing an n-dimensional feature vector. Now you may want to analyze this data and cluster people into different categories based on these attributes. PCA comes into the picture when we have a set of data points which are multidimensional feature vectors and the dimensionality is high. If you want to analyze the patterns in our earlier example, it’s quite simple because it’s just a 5-dimensional feature vector. In real-life systems, the dimensionality is really high (often in the hundreds or thousands) and it becomes very complex and time-consuming to analyze such data. What should we do now? Continue reading “Principal Component Analysis”
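At its core, PCA is an eigendecomposition of the covariance matrix: the eigenvectors with the largest eigenvalues are the directions of highest variance, and projecting onto a few of them compresses the feature vectors. A NumPy sketch on made-up attribute data:

```python
import numpy as np

def pca_reduce(X, k):
    """PCA sketch: project n-dimensional feature vectors (rows of X)
    onto the k directions of largest variance."""
    Xc = X - X.mean(axis=0)                 # center each attribute
    cov = np.cov(Xc, rowvar=False)          # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :k]           # the k largest-variance axes
    return Xc @ top

# 100 people, 5 made-up attributes each (height, weight, chest, waist,
# biceps); two attributes are strongly correlated, so PCA can shrink them.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 1] = 2 * X[:, 0] + 0.01 * rng.normal(size=100)

reduced = pca_reduce(X, 2)  # 5-D vectors squeezed down to 2-D
```

The first output column carries the most variance, the second the next most, so clustering on the reduced data keeps most of the structure at a fraction of the dimensionality.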

# Wavelet Analysis

Wavelets are actually a topic of pure mathematics. But over the last couple of decades, they have shown great promise and are now being adapted for a vast number of applications. They are used in image compression, molecular dynamics, seismology, physics, DNA analysis, etc. One of the main advantages of wavelet analysis is that, unlike classical Fourier analysis, it tells us both what frequencies a signal contains and where in time they occur. Wavelet transforms are extensively used to analyze many different kinds of signals. So what exactly are these wavelets? Why is this method of analysis so powerful? Continue reading “Wavelet Analysis”
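The simplest example is the Haar wavelet: one level of the transform splits a signal into pairwise averages (a coarse, half-length view) and pairwise differences (the detail needed to restore it). A sketch:

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet transform: scaled pairwise sums
    (coarse approximation) and pairwise differences (detail)."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    """Exactly undo haar_step."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x
```

Applying `haar_step` repeatedly to the approximation gives the full multi-level decomposition; because the transform is orthonormal, no energy (information) is lost at any level.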

# Fourier Analysis

Most people in the tech field have heard about Fourier analysis by now. Some of them love it, some of them hate it, and the remaining few are just not sure what it is! Fourier analysis is one of the most fundamental tools in the field of engineering and technology. In fact, it is as fundamental as addition or multiplication. It is heavily used in signal processing, physics, speech analysis, image processing, cryptography and many more fields. Whenever people try to read up on it, they are hindered by all the mathematical equations, and the actual explanation gets drowned out somewhere along the way. Let’s see if we can fix that here. Continue reading “Fourier Analysis”
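Behind the equations, the idea is simple: the Fourier transform rewrites a signal as a sum of sinusoids, so the frequencies hiding inside it show up as peaks in the spectrum. A sketch with two made-up tones:

```python
import numpy as np

# A signal made of two sine waves, at 5 Hz and 12 Hz, sampled at
# 100 Hz for one second (all numbers chosen for illustration).
fs = 100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

spectrum = np.abs(np.fft.rfft(signal))          # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)  # frequency of each bin
peak = freqs[np.argmax(spectrum)]               # strongest frequency
```

In the time domain the two tones are tangled together; in the frequency domain they separate into two clean spikes, with the 5 Hz one tallest because its amplitude is larger.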

# Good Things Come In Small Packages

We encounter digital images every day. We see a lot of JPEG files on our computers, cameras, phones and tablets. The underlying images are huge, and it should take a lot of space to store all that data. But somehow our machines are able to compress all those images and store everything compactly. Ever wondered how it’s possible to fit so many images in such a small space? How can the JPEG algorithm achieve so much reduction in size without visibly losing image quality? Continue reading “Good Things Come In Small Packages”
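The heart of JPEG is the 2-D Discrete Cosine Transform followed by quantization: on smooth 8x8 pixel blocks, most DCT coefficients round to zero, and that is where the size reduction comes from. A sketch (the uniform quantizer step of 16 is a stand-in for JPEG's per-coefficient quantization tables):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, the transform JPEG applies
    to each n x n block."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

# A smooth 8x8 block of pixel values (a gentle gradient, made up).
block = np.add.outer(np.arange(8), np.arange(8)) * 2.0 + 100.0

C = dct_matrix()
coeffs = C @ (block - 128) @ C.T    # level-shift, then 2-D DCT
quantized = np.round(coeffs / 16)   # crude uniform quantizer
zeros = int((quantized == 0).sum()) # most coefficients vanish
```

On this gradient block, almost all of the 64 quantized coefficients are zero; the long runs of zeros are then squeezed down by run-length and entropy coding, which is why smooth image regions compress so dramatically.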