The Concept Of Homogeneous Coordinates

People in computer vision and graphics deal with homogeneous coordinates on a very regular basis. They are a natural extension of standard three dimensional vectors and allow us to simplify various transformations and their computations. When I say “transformations”, I am talking about all those special effects on the screen, and the corresponding movements and scaling of various objects. But why do we need homogeneous coordinates to do all that? Why can’t we just move the objects around? Well, we can’t do that directly, not easily anyway! This will become clear soon. The concept of homogeneous coordinates is fundamental when we talk about cameras. In order to design our algorithms, we need to understand how cameras look at the real world. This is in fact utilized heavily by game programmers as well. So what is it all about? Why is it so important? Continue reading
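
As a quick illustration of why this helps (a minimal sketch, assuming NumPy), translation is not a linear operation on plain 3D vectors, but once we append a fourth coordinate w = 1, rotation and translation collapse into a single 4x4 matrix multiply:

    import numpy as np

    point = np.array([2.0, 3.0, 5.0, 1.0])   # (x, y, z, w = 1)

    # One 4x4 transform: rotate 90 degrees about z, then translate by (tx, ty, tz)
    tx, ty, tz = 10.0, 0.0, -1.0
    transform = np.array([
        [0.0, -1.0, 0.0, tx],
        [1.0,  0.0, 0.0, ty],
        [0.0,  0.0, 1.0, tz],
        [0.0,  0.0, 0.0, 1.0],
    ])

    moved = transform @ point
    x, y, z = moved[:3] / moved[3]            # divide by w to get back to 3D
    print(x, y, z)                            # -> 7.0 2.0 4.0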

Understanding Camera Calibration

Cameras have been around for a long time now. When cameras were first introduced, they were expensive and you needed a good amount of money to own one. However, people then came up with inexpensive pinhole cameras in the late 20th century, and they became a common occurrence in our everyday life. Unfortunately, as is the case with any trade-off, this convenience comes at a price: these pinhole cameras have significant distortion! The good thing is that the distortion is constant, so it can be corrected. This is where camera calibration comes into the picture. So what is this all about? How can we deal with this distortion? Continue reading
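
To make the correction step concrete, here is a rough sketch using OpenCV; the checkerboard file names, the board size, and the output image name below are just placeholders:

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                                   # inner corners per row and column
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for name in glob.glob('checkerboard_*.jpg'):       # placeholder file names
        gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Estimate the camera matrix and distortion coefficients once...
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # ...then reuse them to undistort any image taken with the same camera.
    undistorted = cv2.undistort(cv2.imread('photo.jpg'), K, dist)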

What Is Gamma Correction?

Gamma correction is an integral part of all digital imaging systems, but a lot of people don’t know about it! It is an essential part of imaging devices like cameras, camcorders, monitors, video players, and so on. It basically defines the relationship between a pixel’s numerical value and its actual luminance. Now wait a minute, why would they be different? Isn’t a pixel’s numerical value supposed to be exactly the same as its luminance? Well, not really! Without gamma, the shades captured by digital cameras wouldn’t appear as they did to our eyes. If we fully understand how gamma works, we can improve our exposure technique, along with making the most of image editing. So what is it all about? Why do we need gamma correction at all? Continue reading
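
Here is a tiny sketch of that relationship (assuming NumPy, 8-bit pixel values, and a typical display gamma of 2.2): the stored pixel value is roughly the luminance raised to 1/gamma, so equal steps in pixel value are not equal steps in physical brightness.

    import numpy as np

    gamma = 2.2                                # typical display gamma

    def encode(luminance):
        # Linear light in [0, 1] -> gamma-encoded 8-bit pixel value
        return np.round(255.0 * luminance ** (1.0 / gamma)).astype(np.uint8)

    def decode(pixel):
        # Gamma-encoded 8-bit pixel value -> linear light in [0, 1]
        return (pixel / 255.0) ** gamma

    print(decode(np.uint8(128)))               # ~0.22 of full brightness, not 0.5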

Recognizing Shapes Using Point Distribution Models

In the field of computer vision, we often come across situations where we need to recognize the shapes of different objects. Not only that, we also need our machines to understand these shapes so that we can identify them even when we encounter them in different forms. Humans are really good at these things. We somehow make a mental note of these shapes and create a mapping in our brain. But if somebody asks us to write down a formula or a function to identify a shape, we cannot come up with a precise set of rules. In fact, the whole field of computer vision is based on chasing this holy grail. In this blog post, we will discuss a particular model which is used to identify different shapes. Continue reading

Understanding Gabor Filters

In the field of image processing, filters play an extremely important role. If you don’t know what a filter is, you may want to quickly read the wiki article and come back. Otherwise, this post will not make much sense to you. All image processing operations can be viewed as applying a series of filters to an image and transforming it in some way. We do this for a variety of reasons, like understanding the content of images, transforming the images into another domain, detecting the presence of something in the images, and so on. A Gabor filter is a particular type of filter, and it happens to be an important one. If you google “Gabor filter”, you will get a lot of articles. So in this post, rather than looking at the mathematics behind it, we will try to understand the underlying concept. Let’s go ahead, shall we? Continue reading
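
If you just want to see one in action, here is a short sketch using OpenCV’s built-in Gabor kernel; the parameter values and the input file name are only illustrative:

    import cv2
    import numpy as np

    kernel = cv2.getGaborKernel(
        ksize=(31, 31),      # size of the kernel
        sigma=4.0,           # width of the Gaussian envelope
        theta=np.pi / 4,     # orientation of the sinusoid (45 degrees)
        lambd=10.0,          # wavelength of the sinusoid
        gamma=0.5,           # spatial aspect ratio
        psi=0)               # phase offset

    image = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder file name
    response = cv2.filter2D(image, cv2.CV_32F, kernel)      # strongest along theta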

Histogram Equalization Of RGB Images

When you capture an image using your phone in the evening without flash, do you see how the image is a bit on the darker side? When you take an image with too many lights around you, the image becomes a bit too bright. Neither of the two situations gives us a good quality picture. The human eye likes contrast in images. What this means is that the bright regions should be really bright and the dark regions should be really dark. Almost all the apps and services you use today include this functionality, at least the good ones do. So how do we take one of these dark (or bright) images and improve its quality? Continue reading
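
One common recipe, sketched here with OpenCV (the file names are placeholders): equalizing the R, G and B channels separately tends to shift the colors, so we convert to YCrCb, equalize only the luminance channel, and convert back.

    import cv2

    bgr = cv2.imread('dark_photo.jpg')                  # OpenCV loads images as BGR
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])   # stretch the luma histogram
    result = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    cv2.imwrite('equalized.jpg', result)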

Purkinje Effect

Ever wondered why colors seem to change at night? For example, if you look at a plein air painting, you can see how the colors of objects look radically different in very low light just before dawn or dusk. Consider a red rose, for instance. We know that the flower’s petals are bright red against the green of the leaves in daylight. But take a look at it at dusk and you will see that the contrast is suddenly reversed, with the red petals now appearing dark red or dark warm gray, and the leaves appearing relatively bright. Bright red doesn’t remain bright red anymore, and green doesn’t remain green either. They all become a bit monochromatic and it becomes difficult to separate them. Why does this happen? Continue reading