### What is Dynamic Programming?

If you’ve ever heard the term dynamic programming thrown around and wondered what it means or how to use it, this article aims to answer those questions and build a basic understanding.

Imagine the following: you are given a handful of marbles and asked how many you have. You would have to count them one by one to find the total. Let’s say you counted ten. Now, if one more marble were handed to you, you wouldn’t count your marbles again from one to eleven; instead, you would add one to your previous total and see that you now have eleven marbles. This is dynamic programming in a nutshell: using information we have already calculated to solve the next problem without redoing our earlier work.

Dynamic programming refers to a technique for solving a specific class of problems: those that can be broken down into overlapping subproblems, whose solutions can then be reused to optimize the whole. Unlike divide and conquer (e.g. merge sort), where the problem is broken into smaller, unrelated subproblems that are solved and then recombined to answer the original problem, dynamic programming applies when the subproblems relate to or overlap with one another. In our marbles example, after we counted the initial batch, every subsequent total was related to the previous one: each time we received more marbles, we could add the new amount to the previous total without counting everything over again. I will go into more detail using the following popular coding interview question.
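To make "overlapping subproblems" concrete before we get to the interview question, consider the classic Fibonacci illustration (a side example, not part of the heist problem): the naive recursion solves the same subproblems over and over, while a memo eliminates the repeats, much like keeping a running marble total.

```javascript
// Naive Fibonacci: fib(5) recomputes fib(3) twice and fib(2) three times.
// Those repeated calls are the overlapping subproblems dynamic programming avoids.
const fibNaive = n => (n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2));

// Memoized version: each subproblem is solved once and then reused,
// just like adding one marble to a total we already know.
const fibMemo = (n, memo = []) => {
  if (n < 2) return n;
  if (memo[n] !== undefined) return memo[n];
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
};

console.log(fibNaive(10)); // 55
console.log(fibMemo(10));  // 55
```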

### A Popular Coding Interview Question

Imagine you are a professional thief who has broken into an art museum. The alarm will only go off if adjacent paintings are taken off a wall. Given an array of numbers representing the value of each painting, determine the maximum value of paintings you can steal. For the array of painting values [3, 5, 6, 2, 1, 8], the maximum heist takes the paintings worth 3, 6, and 8 for a total of 17. If you would like a moment to try this problem, stop reading here; otherwise, the solution is in the following sections.

**Setting up the Problem**

The first step to solving a problem with dynamic programming is to identify the overlapping subproblems we are trying to solve. In our case, the subproblems are which paintings to steal from prefixes of the array of increasing length, from a single painting up to the full array. If we only had one or two paintings, we would know which one to steal. But with three, four, five, or more paintings, the subproblem is to determine which combination lets us walk away with the most value. With these subproblems in hand, we can set up a brute-force method that tests all possibilities, and then continue to optimize our solution.

As we stand in front of each painting, we can either steal it or leave it. If we steal the current painting, we can’t steal the adjacent painting(s), which at first suggests we would end up with either the even- or the odd-indexed paintings. However, there is one more possibility to account for. Take the array [1, 0, 0, 4, 0]: if we took the first painting (value 1), skipped two, and then took the fourth (value 4), we would no longer be confined to just the odd or even positions. On the other hand, skipping more than two paintings in a row always leaves value on the table: if we took the first painting and then jumped ahead to the fifth, we would have skipped the third painting unnecessarily, since it could have been taken without triggering the alarm.
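The "never skip more than two" observation can be checked by brute force on a small array. The sketch below (with a hypothetical helper name, bestSubset, not from the solution that follows) enumerates every subset of non-adjacent paintings using a bitmask and keeps the best one; for [1, 0, 0, 4, 0] the winner takes the first and fourth paintings, a gap of exactly two skipped paintings.

```javascript
// Enumerate every legal (non-adjacent) subset of paintings and keep the best.
// Exponential in the array length, so only practical for tiny inputs --
// used here just to verify the observation about gaps between stolen paintings.
const bestSubset = nums => {
  let best = { value: -1, picks: [] };
  for (let mask = 0; mask < 1 << nums.length; mask++) {
    if (mask & (mask << 1)) continue; // two adjacent paintings picked: skip
    let value = 0;
    const picks = [];
    for (let i = 0; i < nums.length; i++) {
      if (mask & (1 << i)) {
        value += nums[i];
        picks.push(i);
      }
    }
    if (value > best.value) best = { value, picks };
  }
  return best;
};

console.log(bestSubset([1, 0, 0, 4, 0])); // { value: 5, picks: [ 0, 3 ] }
```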

The final step in setting up our dynamic programming solution is to relate all of the subproblem solutions to each other. As we stand in front of each painting, we must decide between two options: steal the current painting plus the best haul from the next non-adjacent painting onward (in [1, 2, 3], paintings one and three), or skip the current one and take the best haul starting from the adjacent painting. To emphasize the value of our next step, I’ve included a naive recursive solution in JavaScript below. Once the relation between subproblems is found, we can typically choose between the top-down approach with memoization and the bottom-up approach with tabulation, but for the sake of completeness both methods will be covered in this article.

```javascript
const stealRecursive = (nums, i = nums.length - 1) => {
  // Base case: no paintings left to consider
  if (i < 0) {
    return 0;
  }
  // Recursion to cover all possibilities:
  // either steal painting i (and skip its neighbor), or leave it
  return Math.max(
    nums[i] + stealRecursive(nums, i - 2),
    stealRecursive(nums, i - 1)
  );
};
// Time Complexity: O(2^n) -- each call branches into two more
// Space Complexity: O(n) for the recursion stack
// We can reduce the time complexity to O(n) by storing our solutions.
```
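To see why the naive recursion is so expensive, here is an instrumented copy (a sketch with a hypothetical call counter, not part of the original solution) that counts how many times the function runs for just six paintings:

```javascript
// Same naive recursion, with a counter to expose the repeated work.
let calls = 0;
const stealCounted = (nums, i = nums.length - 1) => {
  calls++;
  if (i < 0) return 0;
  return Math.max(
    nums[i] + stealCounted(nums, i - 2), // steal painting i, skip its neighbor
    stealCounted(nums, i - 1)            // leave painting i
  );
};

console.log(stealCounted([3, 5, 6, 2, 1, 8])); // 17
console.log(calls); // 41 invocations for only 6 paintings
```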

**Memoization (Top Down)**

Memoization, commonly confused with dynamic programming itself, simply refers to the technique we mentioned earlier: saving a result so that we can reuse it instead of recalculating it. A top-down approach is one that takes a larger problem and breaks it down into smaller subproblems. Putting it together, a top-down approach with memoization starts with the whole problem, solves each subproblem once, and saves the answer so that if the same subproblem comes up again it doesn’t need to be recalculated. While this may sound difficult, our previous, highly inefficient recursive solution is only one minor tweak away from being a decent answer to both our thieving algorithm and a top-down approach with memoization.

```javascript
// Added an array for memoization, passed through the recursion so that
// each top-level call gets a fresh cache (a module-level memo would go
// stale when the function is called with a different array)
const stealTopDown = (nums, i = nums.length - 1, memo = []) => {
  if (i < 0) {
    return 0;
  }
  // Check if the solution already exists before recalculating
  if (memo[i] !== undefined) {
    return memo[i];
  }
  const result = Math.max(
    nums[i] + stealTopDown(nums, i - 2, memo),
    stealTopDown(nums, i - 1, memo)
  );
  // Save the result for future use
  memo[i] = result;
  return result;
};
// Time Complexity: O(n)
// Space Complexity: O(n)
```
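Adding the same call counter to the memoized version (again a self-contained sketch, with hypothetical names) shows the payoff: the number of invocations grows linearly with the number of paintings instead of exponentially.

```javascript
let memoCalls = 0;
const stealMemoCounted = (nums, i = nums.length - 1, memo = []) => {
  memoCalls++;
  if (i < 0) return 0;
  // Reuse a saved answer instead of recursing again
  if (memo[i] !== undefined) return memo[i];
  memo[i] = Math.max(
    nums[i] + stealMemoCounted(nums, i - 2, memo),
    stealMemoCounted(nums, i - 1, memo)
  );
  return memo[i];
};

console.log(stealMemoCounted([3, 5, 6, 2, 1, 8])); // 17
console.log(memoCalls); // 13 invocations, down from 41 for the naive version
```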

**Tabulation (Bottom Up)**

Our previous two blocks of code are recursive solutions to our thieving problem, the latter far more efficient than the former. Another approach, which does away with recursion entirely, is to calculate iteratively (e.g. with a for loop) from the bottom up, solving the smallest subproblem first and then moving to larger ones. Like memoization, tabulation stores the solutions to previous subproblems, in this case in a table. The main differences lie in how the solutions are calculated (recursively vs. iteratively) and how they are used: the bottom-up approach with tabulation starts at the smallest subproblem and builds on the previous answers stored in the table until it reaches the answer we are looking for.

```javascript
const stealBottomUp = nums => {
  // tab[k] holds the best haul considering only the first k paintings
  let tab = [0, nums[0]];
  for (let i = 1; i < nums.length; i++) {
    tab[i + 1] = Math.max(nums[i] + tab[i - 1], tab[i]);
  }
  return tab[nums.length];
};
// Time Complexity: O(n)
// Space Complexity: O(n)
```

To set up our tabulation approach, we first need a table with previous answers to build on. We start with the zeroth case, where we have no paintings and our maximum profit is zero, followed by the first case, where we have only one painting and would automatically take it. If we shift all references to the tab array over by one, we notice that the code inside the for loop is the same code we have been using in our recursive solutions. The tabulation method is offset by one because we start with a zero in the tab array, so that when we check the first non-adjacent solution it is not undefined.
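To make the offset concrete, here is a variant (a sketch; the name stealWithTrace is not from the solution above) that returns the whole table instead of just the final entry, so you can watch each answer build on the entries before it:

```javascript
// tab[k] holds the best haul considering only the first k paintings,
// so tab[0] is the empty case and tab[nums.length] is the final answer.
const stealWithTrace = nums => {
  const tab = [0, nums[0]];
  for (let i = 1; i < nums.length; i++) {
    tab[i + 1] = Math.max(nums[i] + tab[i - 1], tab[i]);
  }
  return tab;
};

console.log(stealWithTrace([3, 5, 6, 2, 1, 8]));
// [ 0, 3, 5, 9, 9, 10, 17 ]
```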

**Final Optimized Solution**

After working through the problem with all of these methods, we can make one final optimization to the space complexity. We noted in our setup that we never look more than two positions back when checking for the maximum value, so instead of keeping an array of all our previous solutions, we can save just the previous two.

```javascript
const stealOptimized = nums => {
  let prevMax = 0;    // best haul up to the previous painting
  let twoAwayMax = 0; // best haul up to two paintings back
  nums.forEach(num => {
    // Destructuring assignment to update both values at once
    [prevMax, twoAwayMax] = [Math.max(num + twoAwayMax, prevMax), prevMax];
  });
  return prevMax;
};
// Time Complexity: O(n)
// Space Complexity: O(1)
```
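As a final sanity check, a self-contained copy of the optimized function can be run against the arrays used throughout the article (repeated here so the snippet runs on its own):

```javascript
const stealOptimizedCheck = nums => {
  let prevMax = 0;    // best haul up to the previous painting
  let twoAwayMax = 0; // best haul up to two paintings back
  nums.forEach(num => {
    [prevMax, twoAwayMax] = [Math.max(num + twoAwayMax, prevMax), prevMax];
  });
  return prevMax;
};

console.log(stealOptimizedCheck([3, 5, 6, 2, 1, 8])); // 17
console.log(stealOptimizedCheck([1, 0, 0, 4, 0]));    // 5
console.log(stealOptimizedCheck([]));                 // 0
```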

**Conclusion**

To summarize the steps to dynamic programming:

1. Set up the problem: the three things we need to accomplish are determining what the subproblems are, accounting for all of the possibilities, and relating the subproblem solutions to each other.

2. Memoization or tabulation: we can pick either the recursive top-down approach with memoization or the iterative bottom-up approach with tabulation. Both arrive at a similar solution; choose whichever is easier to conceptualize, or based on whether recursion is preferred.

3. Solving the problem: after applying memoization or tabulation we should have a working algorithm, but we can still make final optimizations or tweaks if specific outputs are needed.

Source: medium