
Thursday, 28 December 2017

Recursion and Backtracking Tutorial

What is Recursion?
     Any function or method that calls itself is called recursive. A recursive method solves a problem by calling a copy of itself to work on a smaller problem. This is called the recursion step, and it can result in many more such recursive calls.
     Note that it is important to ensure that the recursion terminates at some point. This is easier to picture for those who have seen the movie Inception: Leonardo has a dream, in that dream he has another dream, in that dream he has yet another dream, and so on. It is as if there is a function called dream(), and we just keep calling it from inside itself.
public void dream() {
   System.out.println("Inside Dreaming");
   dream();   // calls itself again: there is no base case, so this never stops
}
     Recursion is useful in solving problems which can be broken down into smaller problems of the same kind. But when it comes to solving problems using recursion, there are several things to be taken care of. Let's take a simple example and try to understand them. Following is the pseudo code for finding the factorial of a given number n using recursion.
function factorial(n)
   if n is 0                     // base case
      return 1
   return n * factorial(n-1)     // break into smaller problem(s)
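     The same pseudocode can be written as a plain Java method, in the same style as the dream() method above (a minimal sketch, assuming int is wide enough for the values of n we care about):

public static int factorial(int n) {
   if (n == 0) {                     // base case: factorial(0) = 1
      return 1;
   }
   return n * factorial(n - 1);      // recursive step: work on the smaller problem n-1
}

     Calling factorial(5) with this method returns 120.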
     The following image shows how it works for factorial(5).

Base Case
     There must be at least one base case, i.e. a condition which, when met, stops the function from calling itself recursively. Every recursive method must have such a terminating condition: one for which the answer is already known and we just need to return it. For example, for the factorial problem, we know that factorial(0) = 1, so when n is 0 we simply return 1; otherwise we break the problem into a smaller one, i.e. find the factorial of n-1. If we don't include a base case, the function will keep calling itself and will ultimately result in a stack overflow. For example, the dream() function given above has no base case; if you write code for it in any language, it will give a runtime error.
     There is an upper limit to the number of recursive calls that can be made. To stay within it, make sure that your base case is reached before the stack size limit is exceeded.

Why Recursion?
     Recursion is a useful technique borrowed from mathematics. Recursive code is generally shorter and easier to write than iterative code, and any computation expressed with loops can also be expressed with recursion (in purely functional languages this is the normal way to write loops).
     Recursion is most useful for tasks that can be defined in terms of similar subtasks. For example, sort, search, and traversal problems often have simple recursive solutions.

Recursion and Memory (Visualization)
     Each recursive call makes a new copy of that method (actually only of its variables) in memory. Once a method ends (that is, returns some data), that copy is removed from memory. Recursive solutions look simple, but visualizing and tracing them takes time. For better understanding, let us consider our factorial function. The visualization of the factorial function with n = 4 will look like the following.
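     As a simple text sketch of that call chain (indentation shows how deep the call stack goes, and the partial results are multiplied together as the calls return):

factorial(4)
   factorial(3)
      factorial(2)
         factorial(1)
            factorial(0)         // base case, returns 1
         returns 1 * 1 = 1
      returns 2 * 1 = 2
   returns 3 * 2 = 6
returns 4 * 6 = 24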


Recursion versus Iteration
     While discussing recursion, the basic question that comes to mind is: which way is better, iteration or recursion? The answer depends on what we are trying to do. A recursive approach mirrors the problem that we are trying to solve, and can make it simpler to solve a problem that does not have an obvious answer. But recursion adds overhead for each recursive call (each call needs space for a stack frame).

Recursion
  • Terminates when a base case is reached.
  • Each recursive call requires extra space on the stack frame (memory).
  • If we get infinite recursion, the program may run out of memory and result in stack overflow.
  • Solutions to some problems are easier to formulate recursively.

Iteration
  • Terminates when a condition is proven to be false.
  • Each iteration does not require extra space.
  • An infinite loop may run forever without exhausting memory, since no extra memory is consumed per iteration.
  • Iterative solutions to a problem may not always be as obvious as a recursive solution.
Note
  • Recursive algorithms have 2 types of cases, base cases and recursive cases.
  • Every chain of recursive calls must eventually reach a base case.
  • Generally, iterative solutions are more efficient than recursive solutions [due to the overhead of function calls].
  • A recursive algorithm can be implemented without recursive function calls by using an explicit stack, though this is usually more trouble than it is worth; it does mean that any problem that can be solved recursively can also be solved iteratively (a small sketch follows this list).
  • For some problems, there are no obvious iterative algorithms.
  • Some problems are best suited for recursive solutions while others are not. 
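     To make the point about replacing recursion with an explicit stack concrete, here is a minimal Java sketch (the class and method names are only illustrative); it reuses the factorial example and shows the plain loop version alongside the stack-based one:

import java.util.ArrayDeque;
import java.util.Deque;

public class FactorialIterative {

   // Plain iterative version: a simple loop, no extra space per step.
   static int factorialLoop(int n) {
      int result = 1;
      for (int i = 2; i <= n; i++) {
         result *= i;
      }
      return result;
   }

   // Recursion simulated with an explicit stack: push the pending
   // multiplications, then pop them off once the "base case" is reached.
   static int factorialWithStack(int n) {
      Deque<Integer> stack = new ArrayDeque<>();
      while (n > 0) {                // mirrors the chain of recursive calls
         stack.push(n);
         n--;
      }
      int result = 1;                // base case: factorial(0) = 1
      while (!stack.isEmpty()) {
         result *= stack.pop();      // unwinding, like the returns
      }
      return result;
   }

   public static void main(String[] args) {
      System.out.println(factorialLoop(5));        // prints 120
      System.out.println(factorialWithStack(5));   // prints 120
   }
}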
Example Algorithms of Recursion
  • Fibonacci Series
  • Factorial Finding
  • Towers of Hanoi
  • Merge Sort, Quick Sort
  • Divide and Conquer Algorithms
  • Binary Search
  • Tree Traversals and many Tree Problems: InOrder, PreOrder and PostOrder techniques
  • Graph Traversals: DFS [Depth First Search] and BFS [Breadth First Search]
  • Dynamic Programming Examples
  • Backtracking Algorithms  
What is Backtracking?
     Recursion is the key to backtracking programming. As the name suggests, we backtrack to find the solution: we start with one possible option out of the many available and try to solve the problem with it. If the selected move solves the problem, we print the solution; otherwise we backtrack, select some other option, and try again. If none of the options works out, we conclude that there is no solution to the problem.
     Backtracking is a form of recursion. The usual scenario is that you are faced with a number of options, and you must choose one of these. After you make your choice you will get a new set of options; just what set of options you get depends on what choice you made. This procedure is repeated over and over until you reach a final state. If you made a good sequence of choices, your final state is a goal state; if you didn’t, it isn’t.
     Backtracking can be thought of as a selective tree/graph traversal method. The tree is a way of representing some initial starting position (the root node) and a final goal state (one of the leaves). Backtracking allows us to deal with situations in which a raw brute-force approach would explode into an impossible number of options to consider. Backtracking is a sort of refined brute force. At each node, we eliminate choices that are obviously not possible and proceed to recursively check only those that have potential.
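     To make the choose / recurse / backtrack pattern concrete, here is a minimal Java sketch of the first example listed below, the N Queens problem (the class and method names are only illustrative):

public class NQueens {

   // Can a queen go in column col of row 'row', given the queens already
   // placed in rows 0..row-1? queens[r] holds the column of the queen in row r.
   static boolean isSafe(int[] queens, int row, int col) {
      for (int r = 0; r < row; r++) {
         int c = queens[r];
         if (c == col || Math.abs(c - col) == row - r) {
            return false;            // same column or same diagonal
         }
      }
      return true;
   }

   // Try every column in the current row; recurse on success, backtrack on failure.
   static boolean placeQueens(int[] queens, int row) {
      if (row == queens.length) {
         return true;                // all queens placed: goal state reached
      }
      for (int col = 0; col < queens.length; col++) {
         if (isSafe(queens, row, col)) {
            queens[row] = col;       // choose one of the available options
            if (placeQueens(queens, row + 1)) {
               return true;          // this choice led to a solution
            }
            // otherwise: backtrack, i.e. drop this choice and try the next column
         }
      }
      return false;                  // no column works for this row
   }

   public static void main(String[] args) {
      int[] queens = new int[8];     // one queen per row on an 8x8 board
      if (placeQueens(queens, 0)) {
         for (int col : queens) {
            System.out.print(col + " ");   // column of the queen in each row
         }
         System.out.println();
      } else {
         System.out.println("No solution");
      }
   }
}

     Notice that nothing has to be explicitly undone here: overwriting queens[row] with the next candidate column is the backtracking step, because rows below the current one are never consulted.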

Example Algorithms of Backtracking
1. N Queens Problem
2. Sudoku
3. The Knapsack Problem
4. Graph Coloring Problem
5. Hamiltonian Cycles

Next Tutorial: Single Linked List Tutorial

Previous Tutorial: Introduction to Data Structures
