In this article, we will look at the differences between the Greedy Method and Dynamic Programming. Knowing both approaches is important when deciding how to solve a given problem, since it lets us choose the algorithm that solves it in the least runtime. We will describe each technique along with its applications, and then compare the two.
Greedy Method
A Greedy Algorithm, or Greedy Method, is an algorithmic paradigm that builds up a solution piece by piece. At every step, it chooses the next piece that offers the most obvious and immediate benefit. It is therefore best suited to problems where choosing a locally optimal option at each step leads to a globally optimal solution. The algorithm aims for an optimal solution, although this is not always achievable. In simpler terms, the Greedy approach works for problems where local optimality leads to overall optimality for the entire problem.
Applications
The Greedy Method is easy to implement and takes little time to find an optimal solution for well-known problems such as Job Sequencing, the Activity Selection Problem, and the Fractional Knapsack Problem. It is also useful for approximating NP-Hard problems such as the Traveling Salesman Problem. Selection Sort is a typical example of a greedy algorithm, since it repeatedly picks the smallest remaining element. The Greedy approach is also applied in finding a Minimum Spanning Tree using Prim’s and Kruskal’s methods.
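As a concrete illustration of the greedy idea, here is a minimal sketch of the Fractional Knapsack Problem mentioned above. The greedy choice is to always take the item with the highest value-to-weight ratio first; for this particular problem, that local choice is provably globally optimal.

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; capacity: knapsack weight limit.
    Greedy choice: take items in decreasing order of value/weight ratio,
    splitting the last item into a fraction if it does not fully fit."""
    # Sort by value-to-weight ratio, best first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # whole item, or the remaining fraction
        total += value * (take / weight)
        capacity -= take
    return total
```

For example, with items `[(60, 10), (100, 20), (120, 30)]` and capacity 50, the greedy method takes the first two items whole and two-thirds of the third, for a total value of 240.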
Dynamic Programming
Dynamic Programming is one of the most popular programming techniques, used to optimize problems that exhibit two properties:
- Overlapping Subproblems and,
- Optimal Substructure.
A problem is said to have Overlapping Subproblems if it can be broken down into subproblems that are reused several times. A problem has Optimal Substructure if an optimal solution to the whole problem can be constructed from optimal solutions to its subproblems. Dynamic Programming is mainly an optimization over plain recursion: wherever a recursive solution makes repeated calls with the same inputs, we can optimize it using Dynamic Programming.
The idea is simply to store the results of subproblems so that we do not have to re-compute them when they are needed again (Memoization). This simple optimization often reduces time complexity from exponential to polynomial. Almost every problem that can be solved with recursion or backtracking can also be solved with Dynamic Programming, and it yields an optimal solution because it is an optimization over backtracking, which explores all possible choices.
Applications
Dynamic Programming is useful when a guaranteed optimal answer matters. It produces optimal solutions for problems such as 0/1 Knapsack, the Longest Common Subsequence (LCS), the All-Pairs Shortest Path problem (Floyd-Warshall), and the Single-Source Shortest Path problem via the Bellman-Ford algorithm. It is also used to solve the Multi-Stage Graph problem.
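To show the DP style on one of the problems listed above, here is a minimal bottom-up sketch of 0/1 Knapsack, where items cannot be split (unlike the fractional version, greedy does not work here).

```python
def knapsack_01(values, weights, capacity):
    """Bottom-up DP for 0/1 Knapsack.
    dp[w] holds the best value achievable with capacity w
    using the items processed so far."""
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]
```

For values `[60, 100, 120]`, weights `[10, 20, 30]`, and capacity 50, the optimal answer is 220 (take the second and third items); a greedy ratio-based choice would take the first item and do worse.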
Difference between Greedy and Dynamic Programming
Now, let us look at some key differences between the two based on our understanding and examples.
| Criteria | Greedy Method | Dynamic Programming |
| --- | --- | --- |
| Idea | At each step, the choice that seems best at the moment is made, in the hope of building an optimal solution. | At each step, the decision is made by considering the current subproblem together with the solutions of previously solved subproblems, building toward a globally optimal solution. |
| Decision-Making | A greedy algorithm generates a single decision sequence; each choice depends only on the choices made so far, never on future choices or on other subproblem solutions. | Dynamic Programming effectively considers many decision sequences, evaluating all subproblems first to reach the optimal result. |
| Optimality | There is no guarantee of an optimal solution unless local optimality leads to global optimality. | Dynamic Programming always produces an optimal solution for problems with optimal substructure. |
| Building Solutions | The solution is computed serially, in a forward manner, without revisiting or backtracking over previous choices. | The solution is built either bottom-up or top-down by combining optimal solutions of smaller subproblems. |
| Time Complexity | Greedy methods are generally faster and more efficient, but less reliable. | Dynamic Programming is usually slower than the greedy method because it evaluates all possibilities, but it is more reliable for obtaining an optimal solution. |
| Space Complexity or Memory | Memory-efficient: since previous choices are never revisited, no extra space is needed unless the problem explicitly demands it. | Requires a table for memoization or tabulation, which increases space usage to store the results of previous states. |
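The Optimality row can be made concrete with a small coin-change sketch (an illustrative example, not from a specific library): with denominations {1, 3, 4} and amount 6, the greedy largest-coin-first strategy gives 4 + 1 + 1 = 3 coins, while Dynamic Programming finds the optimal 3 + 3 = 2 coins.

```python
def greedy_coins(coins, amount):
    """Largest-coin-first greedy: fast, but not always optimal."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def dp_coins(coins, amount):
    """DP over amounts: dp[a] = fewest coins summing exactly to a."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        dp[a] = min((dp[a - c] + 1 for c in coins if c <= a), default=INF)
    return dp[amount]
```

Here `greedy_coins([1, 3, 4], 6)` returns 3 while `dp_coins([1, 3, 4], 6)` returns 2, showing exactly the trade-off in the table: greedy is simpler and faster, DP is guaranteed optimal.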