Dynamic programming in solving the Minimum Cost problem

Although there are many sources about dynamic programming available on the Internet, most of them are difficult to follow. So Algo.Monster tracked the output for every change of input and fiddled with the code to put together a better explanation. This article is meant to help people understand dynamic programming more easily.

This is a basic introduction to dynamic programming. Newbie programmers who are already familiar with the brute force method will find it easier to pick up the general idea.

How do you define dynamic programming?

Dynamic programming, sometimes called dynamic optimization, is an effective way to solve complex problems. How does it do that? First, it breaks the problem into smaller subproblems. Then you solve each subproblem and store its result. Instead of having to recalculate a solution each time it is needed, you can simply look up the one computed earlier. This saves computation time at the expense of a (usually modest) amount of storage space. According to Wikipedia, each of the subproblem solutions is indexed in some way, typically based on the values of its input parameters, to facilitate its lookup.

In other words, it is a technique that saves the intermediate results of a computation, in some convenient format, so that they can be reused in later computations instead of being recalculated every time a new input comes in. Sounds easy, right? In fact, it is not that hard once we walk through an example.
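Before we get to the main problem, here is a minimal sketch in C of that caching idea, using the classic Fibonacci numbers purely as an illustration (the fib function, the memo array, and the cap of 90 terms are assumptions made for this sketch, not part of the problem below):

#include <stdio.h>

/* A tiny cache of already-computed Fibonacci numbers. memo[] is zero-initialized,
   so 0 means "not computed yet"; fib(0) and fib(1) are handled as base cases. */
static long long memo[91];

long long fib(int n)
{
  if (n <= 1)
    return n;                        /* base cases: fib(0) = 0, fib(1) = 1 */
  if (memo[n] != 0)
    return memo[n];                  /* reuse the stored intermediate result */
  memo[n] = fib(n - 1) + fib(n - 2); /* compute once, then remember it */
  return memo[n];
}

int main(void)
{
  printf("%lld\n", fib(50));         /* fast, because each subproblem is solved only once */
  return 0;
}

Without the memo array, the same recursive calls would be repeated over and over; storing each result once is the whole trick.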

Analyzing the minimum cost problem with dynamic programming

Given a cost matrix cost[][] and a position (m, n) in cost[][], write a function that returns the cost of the minimum cost path to reach (m, n) starting from (0, 0). Each cell of the matrix holds the cost of traversing that cell, and the total cost of a path to reach (m, n) is the sum of all the costs along that path, including both the source and the destination. From a given cell you may only move right, down, or diagonally down-right; that is, from (i, j) you can move to (i, j+1), (i+1, j), or (i+1, j+1). Assume all costs are positive integers.

1 2 3
4 8 2
1 5 3

For example, with this matrix and destination (2, 2), the cheapest path is 1 → 2 → 2 → 3 (right, then diagonally down-right, then down), for a total cost of 8.

The solution that uses DP

Me: How would you solve that problem?

Friend: Ah, I remember reading that saving the results of sub-problems can make computations more efficient.

Me: Which sub-problems are there?

Friend: It’s too hard.

Me: That won’t get us anywhere. Let’s work through it together.

What would you do if you were solving it in your usual, brute-force way?

You would probably explore the cells one by one, adding up the cost as you travel toward the destination. Since you don’t know whether the path you picked is the cheapest, you also try going in other directions. After considering all possible routes, you finally pick the smallest total cost.
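To make that concrete, here is a rough brute-force sketch in C (the minCostBrute function, the fixed 3 x 3 size, and the main function are illustrative assumptions added here, not code from the article). It tries every allowed move from the current cell and keeps the cheapest total:

#include <stdio.h>
#include <limits.h>

#define ROWS 3
#define COLS 3

/* Brute force: try every allowed move (right, down, diagonally down-right)
   from (i, j) and return the cheapest total cost of reaching (m, n).
   Overlapping subproblems are recomputed, so this is exponential. */
int minCostBrute(int cost[ROWS][COLS], int i, int j, int m, int n)
{
  if (i > m || j > n)
    return INT_MAX / 2;              /* stepped past the destination: dead end */
  if (i == m && j == n)
    return cost[i][j];               /* reached the destination */

  int right = minCostBrute(cost, i, j + 1, m, n);
  int down  = minCostBrute(cost, i + 1, j, m, n);
  int diag  = minCostBrute(cost, i + 1, j + 1, m, n);

  int best = right < down ? right : down;
  if (diag < best)
    best = diag;
  return cost[i][j] + best;
}

int main(void)
{
  int cost[ROWS][COLS] = { {1, 2, 3}, {4, 8, 2}, {1, 5, 3} };
  printf("%d\n", minCostBrute(cost, 0, 0, 2, 2));   /* prints 8 for the sample matrix */
  return 0;
}

Because many different paths pass through the same cells, this keeps recomputing the same sub-results; that wasted work is exactly what dynamic programming avoids.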

Friend: However, the above solution only works for one destination. What if we need to calculate the total cost of reaching another destination?

Me: Excellent!

Me: We store the minimum cost of reaching each cell in an array, starting from the first cell. Once all the cells are filled in, we’ll have the solution for every possible destination.

Friend: I didn’t quite follow you there.

Me: Let me finish.

What do we know so far?

We have a cost matrix in which we can only move right, down, or diagonally down-right.

We need to return the minimum cost of reaching the destination.

We start at the first cell, (0, 0).

Let’s break the problem down now.

Me: How much does it cost to move from [0][0] to [0][1]?

Friend: It’s 2 plus 1, and you get 3.

Me: And what is the minimum cost of reaching [0][1]?

Friend: Still 3. A cell can only be entered from its left, top, or top-left neighbor, and [0][1] has no cells above it, so the only way to reach it is from [0][0].

Me: Excellent! And what is the minimum cost of reaching [0][2]?

Friend: Ah, the minimum cost of reaching [0][1] is 3, and [0][2] can only be reached from [0][1], so the cost is 3 + a[0][2] = 3 + 3 = 6.

Me: Can we now store the minimum cost of reaching each cell in a separate array, so that whenever a cell is the destination we can simply return the stored value?

Friend: Good point.

Look at the min-cost table so far, with the first row filled in:

1 3 6

Let’s code it up here now. (Here n is the number of columns of the matrix.)

minarr[0][0] = a[0][0];
for (i = 1; i < n; i++)
  minarr[0][i] = minarr[0][i-1] + a[0][i];

We copy the first element, as it is, into the min-cost table, and then calculate the cumulative cost for the other elements in the first row.

Friend: Why not do the same for the first column? The first column’s cells can only be reached by moving down from the cell above.

Me: Yes, that’s our next move.

Similar to the above, we add each cell’s cost to the cumulative cost of the cell above it to calculate the minimum cost of reaching every cell in the first column.

Let’s code this up as well. (Here m is the number of rows of the matrix.)

for (i = 1; i < m; i++)
  minarr[i][0] = minarr[i-1][0] + a[i][0];

Friend: How do we determine the minimum cost of reaching cell [1][1]?

Me: We already know that we can only move right, down, or diagonally down-right, so cell [1][1] can only be reached from [0][0], [0][1], or [1][0]. Since we already have the minimum cost of reaching each of those three cells, our answer is simply the smallest of those three values plus the cost of cell [1][1] itself.

Then the cost of reaching cell [1][1] = a[1][1] + min(minarr[0][0], minarr[0][1], minarr[1][0])

minarr[1][1] = 8 + min(1, 3, 5)

So, minarr[1][1] = 9

We repeat this for every remaining cell, starting at [1][1]. Filling in the whole min-cost table for our example gives:

1 3 6
5 9 5
6 10 8

And we will code it up just like before:

for (i = 1; i < m; i++)
  for (j = 1; j < n; j++)
    minarr[i][j] = a[i][j] + min(minarr[i-1][j-1], minarr[i][j-1], minarr[i-1][j]);
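Putting the snippets above together, a complete sketch might look like this (the fixed 3 x 3 size, the min3 helper, and the main function are assumptions added here to make the example self-contained and runnable):

#include <stdio.h>

#define ROWS 3   /* number of rows of the cost matrix (fixed here for the example) */
#define COLS 3   /* number of columns */

/* Smallest of three values; a small helper assumed for this sketch. */
int min3(int x, int y, int z)
{
  int best = x < y ? x : y;
  return best < z ? best : z;
}

/* Tabulation: minarr[i][j] holds the minimum cost of reaching cell (i, j)
   from (0, 0); the answer is simply the value stored at the destination. */
int minCost(int a[ROWS][COLS], int destRow, int destCol)
{
  int minarr[ROWS][COLS];
  int i, j;

  minarr[0][0] = a[0][0];

  /* First row: each cell can only be entered from its left neighbor. */
  for (i = 1; i < COLS; i++)
    minarr[0][i] = minarr[0][i-1] + a[0][i];

  /* First column: each cell can only be entered from the cell above. */
  for (i = 1; i < ROWS; i++)
    minarr[i][0] = minarr[i-1][0] + a[i][0];

  /* Remaining cells: cheapest of the top-left, left, and top neighbors. */
  for (i = 1; i < ROWS; i++)
    for (j = 1; j < COLS; j++)
      minarr[i][j] = a[i][j] + min3(minarr[i-1][j-1], minarr[i][j-1], minarr[i-1][j]);

  return minarr[destRow][destCol];
}

int main(void)
{
  int a[ROWS][COLS] = { {1, 2, 3}, {4, 8, 2}, {1, 5, 3} };
  printf("%d\n", minCost(a, 2, 2));   /* prints 8 for the sample matrix */
  return 0;
}

For the sample matrix, it prints 8, which matches the value in the bottom-right cell of the filled table above.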

Conclusion

Once we have filled in the table, we simply return the value stored at the destination cell of the min-cost array. We have built our solution by computing the answer for each cell one by one. In dynamic programming, this is called the tabulation method, and it is a bottom-up approach. Memoization is another way to do it, and that will be our next topic.