Dynamic Programming

Most programmers have come across this concept at one time or another. People know that it’s powerful and useful, but few know exactly how it works or why it works in the first place! Let’s say you are presented with a big box of precious stones of different sizes and weights. You have a bag with you that can only hold a limited weight, so you can’t take everything. Let’s say it can hold at most W ounces. You also know the market value of each stone. Given that you can only carry W ounces, which stones should you pick in order to maximize your profit?

Why do we need dynamic programming?

For example, consider a box with 4 stones worth $1400, $3000, $4200 and $7100. They weigh 2, 5, 6 and 10 ounces respectively. Your bag can only hold 11 ounces. Your first instinct might be to take the highest-valued stone. It weighs 10 ounces, which means it fits into your bag, and you get $7100 of profit. But if you look carefully, the best choice is to pick the two stones worth $3000 and $4200. Their combined weight is 11 ounces, which means they both fit into your bag, and you get a profit of $7200. This is a very simple example. In real life, the number of possible options grows exponentially with the number of stones. You cannot sit and check every single option. So how do we do it efficiently?

Dynamic programming helps us solve the problem we faced above. Since we had only 4 stones, we just inspected all the options and picked the one that maximized our profit. Now let’s say you had 100 stones. There are 2^100 different combinations of stones you could pick, a mind-boggling number (roughly 10^30), so checking every single option is out of the question. Dynamic programming is heavily used in computer networks, routing, graph problems, computer vision, artificial intelligence, machine learning, and so on.
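
To see how dynamic programming tames this blowup, here is a minimal sketch of the classic 0/1 knapsack recurrence applied to the four stones above. The function and its structure are my own illustration, not from any particular library; it fills a table best[w] holding the highest value achievable within each weight limit w.

```python
def knapsack(values, weights, capacity):
    # best[w] = maximum value achievable with total weight at most w
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Sweep weights downward so each stone is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# The four stones from the example, with an 11-ounce bag:
print(knapsack([1400, 3000, 4200, 7100], [2, 5, 6, 10], 11))  # prints 7200
```

Instead of trying all 2^n subsets of stones, this fills in only n × W table entries, one per (stone, weight limit) pair.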

Where is it used in real life?

In order to introduce the dynamic-programming approach to solving real-life problems, let’s consider a traffic-based problem. You can take many different routes from your apartment to your office, and you encounter many intersections on your way. At each intersection, you can turn left, go straight, or turn right. Each intersection has a different wait time depending on the traffic in that area. Now you have to choose your route so that the overall delay is minimized. The most naive approach would be to enumerate all possible paths and select the one that gives the smallest delay.

Dynamic programming reduces the number of computations by moving systematically from one end to the other, building the best solution as it goes. It solves this problem by moving backwards from the office to your apartment. At the last intersection, with no further intersections between us and the office, we have no decision to make and simply incur that intersection’s delay. At the second-to-last intersection, we only have to decide the best possible route to the next intersection, because we have already found the best way to get from the last intersection to the office. This way, we never recompute anything. We continue the same approach until we reach the apartment, at which point we have built the best possible path from the apartment to the office.
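
As a hedged sketch of this backward pass, suppose the intersections form stages between the apartment and the office, and from each intersection you can move to the left, straight-ahead, or right intersection of the next stage. The grid layout and delay numbers below are made up purely for illustration:

```python
# Hypothetical wait times: delays[stage][n] is the delay at intersection n
# of that stage. Stage 0 is nearest the apartment, the last stage is
# nearest the office.
delays = [
    [3, 1, 4],
    [2, 5, 1],
    [6, 2, 3],
]

def min_delay(delays):
    # cost[n] = least total delay from intersection n of the current
    # stage all the way to the office
    cost = list(delays[-1])  # last stage: no decision left, just the delay
    for stage in reversed(delays[:-1]):
        new_cost = []
        for n, d in enumerate(stage):
            # From intersection n you can reach n-1, n, or n+1 in the
            # next stage (left, straight, right), where those exist.
            reachable = [cost[m] for m in (n - 1, n, n + 1) if 0 <= m < len(cost)]
            new_cost.append(d + min(reachable))
        cost = new_cost
    return min(cost)  # best intersection to enter from the apartment

print(min_delay(delays))  # prints 4 (delays 1 + 1 + 2)
```

Each intersection’s best-route-to-the-office cost is computed exactly once and reused by every intersection one stage earlier, which is precisely the “never recompute anything” property described above.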

What exactly is dynamic programming?

Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure. As seen from the above example, this method takes far less time than naive methods. Dynamic programming is actually both a mathematical optimization method and a computer programming method. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler subproblems in a recursive manner. If subproblems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the subproblems.

The key idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem, called subproblems, then combine the solutions of the subproblems to reach an overall solution. Often, many of these subproblems are really the same. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations. Once the solution to a given subproblem has been computed, it is stored. The next time the same solution is needed, it is simply looked up. In the above example, we didn’t have to recompute anything. Once we find an optimal path from an intersection to the office, we store that and use it the next time we need to compute the delay from a previous intersection. This approach is especially useful when the number of repeating subproblems grows exponentially.
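
The textbook illustration of this store-and-look-up idea (a stock example, not taken from the route problem above) is the Fibonacci sequence, where naive recursion solves the same subproblems an exponential number of times. Here is a minimal sketch using Python’s built-in functools.lru_cache as the store:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # the cache plays the role of the stored solutions
def fib(n):
    # Without the cache, fib(n - 1) and fib(n - 2) would both re-derive
    # the same smaller values, giving exponentially many calls.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, computed with only 51 distinct subproblems
```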

How does it solve problems?

To put it crisply, dynamic programming follows the approach given below:

  1. Define a small problem and find an optimum solution to this problem. In our example, this is the same as finding the best path from the last intersection to the office.
  2. Enlarge this small problem slightly and find the optimum solution to the new problem using the previously found optimum solution. This step is the same as moving from the last intersection to the second-to-last one.
  3. Continue with step 2, enlarging the problem until it encompasses the original problem; at that point the stopping condition has been met. This is equivalent to working all the way back from the last intersection to the first.
  4. Trace the solution to the whole problem back through the optimum solutions to the small problems solved along the way, as in the sketch below.
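
To make these four steps concrete, here is a hedged sketch that redoes the stone example with a full two-dimensional table, so that step 4’s trace-back can recover which stones were actually taken. The function and its names are illustrative only:

```python
def knapsack_items(values, weights, capacity):
    # best[i][w] = max value using only the first i stones within weight w
    n = len(values)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):  # steps 1-3: enlarge the problem stone by stone
        for w in range(capacity + 1):
            best[i][w] = best[i - 1][w]          # option 1: skip stone i
            if weights[i - 1] <= w:              # option 2: take stone i
                best[i][w] = max(best[i][w],
                                 best[i - 1][w - weights[i - 1]] + values[i - 1])
    # Step 4: trace back through the table to list the stones taken.
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if best[i][w] != best[i - 1][w]:  # the value changed, so stone i was taken
            chosen.append(i - 1)
            w -= weights[i - 1]
    return best[n][capacity], chosen[::-1]

print(knapsack_items([1400, 3000, 4200, 7100], [2, 5, 6, 10], 11))
# (7200, [1, 2]): the optimum of $7200 comes from the $3000 and $4200 stones
```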

Is it the same as divide and conquer?

There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping subproblems. When the subproblems do not overlap, the strategy is called “divide and conquer” rather than “dynamic programming”. This is why mergesort, quicksort, and finding all matches of a regular expression are not classified as dynamic programming problems.

Optimal substructure means that the solution to a given optimization problem can be obtained by combining optimal solutions to its subproblems. Consequently, the first step towards devising a dynamic programming solution is to check whether the problem exhibits such optimal substructure. Overlapping subproblems means that the space of subproblems must be small: any recursive algorithm solving the problem should solve the same subproblems over and over, rather than generating new ones. If every recursive call generated brand-new subproblems, storing their solutions would buy us nothing, and we would be back to checking all possible options.
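
You can watch this overlap directly by counting how often a naive recursion visits each subproblem; the counting harness below is just an illustration:

```python
from collections import Counter

calls = Counter()

def fib_naive(n):
    calls[n] += 1  # record every time this subproblem gets solved
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(20)
print(calls[5])  # 987: one small subproblem solved 987 times over
```

A dynamic programming version, like the cached one earlier, solves each of those subproblems exactly once.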

The origin of the term “dynamic programming” actually has very little to do with writing code. It was coined by Richard Bellman in the 1950s, a time when computer programming was practiced by very few people. Back then, “programming” meant planning, and “dynamic programming” was conceived as a way to optimally plan multistage processes.
