**What is an Algorithm?**

An algorithm is defined as a step-by-step procedure or method for solving a problem.

Let us consider the problem of preparing a bajji. To prepare a bajji, we follow the steps given below:

1) Get the frying pan.

2) Get the oil.

a. Do we have oil?

i. If yes, put it in the pan.

ii. If no, do we want to buy oil?

1. If yes, then go out and buy.

2. If no, we can terminate.

3) Turn on the stove, etc...

What we are doing is, for a given problem (preparing a bajji), we are providing a step-by-step procedure for solving it.

While defining an algorithm, the steps are written in human-understandable language, independent of any programming language; we can implement it in any programming language of our choice. From the data structure point of view, the following are some important categories of algorithms:

**1. Sort -** Algorithm to sort items in a certain order (either ascending or descending).

**2. Search -** Algorithm to search for an item in a data structure.

**3. Insert -** Algorithm to insert an item into a data structure.

**4. Update -** Algorithm to update an existing item in a data structure.

**5. Delete -** Algorithm to delete an existing item from a data structure.

**Characteristics of an Algorithm**

Not all procedures can be called algorithms. An algorithm should have the following characteristics:

**1. Unambiguous -** An algorithm should be clear and unambiguous. Each of its steps, and their inputs/outputs, should be clear and must lead to only one meaning.

**2. Effectiveness -** An algorithm is also generally expected to be effective: all of its operations must be sufficiently basic that they can, in principle, be done exactly and in a finite length of time.

**3. Input -** An algorithm should have zero or more well-defined inputs.

**4. Output -** An algorithm should have one or more well-defined outputs that match the desired output.

**5. Finiteness -** An algorithm must terminate after a finite number of steps.

**6. Feasibility -** An algorithm should be feasible with the available resources.

**7. Independent -** An algorithm should have step-by-step directions that are independent of any programming code.

**How to Write an Algorithm?**

There are no well-defined standards or rules for writing algorithms; rather, it is problem and resource dependent. Algorithms are never written to support a particular programming code. As we know, all programming languages share basic code constructs like loops (do, for, while), flow control (if-else), etc. These common constructs can be used to write an algorithm.

Let's try to learn algorithm-writing by using an example.

**Problem – Write an algorithm to multiply two numbers and display the result.**

Step 1 - START

Step 2 - Declare three integers a, b & c

Step 3 - Define values of a & b

Step 4 - Multiply values of a & b

Step 5 - Store output of step 4 in c

Step 6 - Print c

Step 7 - STOP
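The steps above can be sketched directly in C. A minimal sketch; the wrapper function name and the sample values 6 and 7 are illustrative, not part of the algorithm:

```c
#include <stdio.h>

/* Steps 2-6 of the algorithm, wrapped in a function so it can be reused.
   The values of a and b arrive as parameters (step 3). */
int multiply_and_display(int a, int b)
{
    int c;                 /* step 2 - declare c (a, b are parameters)   */
    c = a * b;             /* steps 4 & 5 - multiply and store output in c */
    printf("%d\n", c);     /* step 6 - print c                            */
    return c;
}
```

Calling `multiply_and_display(6, 7)` prints 42 and returns it.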

We design an algorithm to get a solution of a given problem. There can be more than one solution to solve the given problem.

**Analysis of Algorithm**

**Why Analysis of Algorithm?**

For example, to go from city “A” to city “B”, there can be many ways of accomplishing this: by flight, by bus, by train and also by bicycle. Depending on the availability and convenience, we choose the one that suits us. Similarly, in computer science, multiple algorithms are available for solving the same problem (for example, a sorting problem has many algorithms, like insertion sort, selection sort, quick sort and many more). Algorithm analysis helps us to determine which algorithm is most efficient in terms of time and space consumed.

**What is Analysis of Algorithm?**

* Analyzing an algorithm has come to mean predicting the resources that the algorithm requires.

* Generally, by analyzing several candidate algorithms for a problem, the most efficient one can be identified.

* Analysis of algorithms is required to decide which of several algorithms is preferable.

* Analysis of algorithms is required to measure the efficiency of an algorithm.

* Efficiency of an algorithm can be analyzed at two different stages: before implementation and after implementation.

**A Priori Analysis -** This is a theoretical analysis of an algorithm. Efficiency is measured by assuming that all other factors, for example processor speed, are constant and have no effect on the implementation.

**A Posteriori Analysis -** This is an empirical analysis of an algorithm. The selected algorithm is implemented in a programming language and executed on a target computer. In this analysis, actual statistics, like running time and space required, are collected.

* We will compare algorithms based on their execution time. Efficiency of an algorithm means how fast it runs.

* If we want to measure the amount of storage that an algorithm uses as a function of the size of its instances, there is a natural unit available: the bit.

* On the other hand, if we want to measure the efficiency of an algorithm in terms of the time it takes to arrive at a result, there is no obvious choice of unit.

* This problem is solved by the principle of invariance, which states that two different implementations of the same algorithm will not differ in efficiency by more than some multiplicative constant.

* Suppose that the time taken by an algorithm to solve an instance of size n is never more than cn seconds, where c is some suitable constant.

* Practically, the size of an instance means any integer that in some way measures the number of components in an instance.

* Sorting problem: size is the number of items to be sorted.

* Graph: size is the number of nodes or edges (or both) involved.

* We say that the algorithm takes time in the order of n, i.e., it is a linear-time algorithm.

* If an algorithm never takes more than cn² seconds to solve an instance of size n, we say it takes time in the order of n², i.e., it is a quadratic-time algorithm.

* Polynomial: n^k. Exponential: c^n or n!


**Goal of the Analysis of Algorithms**

The goal of the analysis of algorithms is to compare algorithms (or solutions) mainly in terms of running time, but also in terms of other factors (e.g., memory, developer effort, etc.).

**Algorithm Complexity**

Complexity of an algorithm is a measure of the amount of time and/or space required by an algorithm for an input of a given size (n). Suppose X is an algorithm and n is the size of input data, the time and space used by the algorithm X are the two main factors, which decide the efficiency of X.

**1. Time Complexity**

Every algorithm requires some amount of computer time to execute its instruction to perform the task. Time complexity of an algorithm represents the amount of time required by the algorithm to run to completion. Time requirements can be defined as a numerical function T(n), where T(n) can be measured as the number of steps, provided each step consumes constant time.

For example, addition of two n-bit integers takes n steps. Consequently, the total computational time is T(n) = c * n, where c is the time taken for the addition of two bits. Here, we observe that T(n) grows linearly as the input size increases.
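The n-bit addition example can be sketched as a loop over bit arrays: each iteration does a constant amount of work, so the total time is T(n) = c * n. (The bit-array representation, least significant bit first, and the name `add_bits` are illustrative assumptions, not from the text.)

```c
/* Add two n-bit integers given as bit arrays (least significant bit first).
   result must have room for n+1 bits. Each loop iteration performs a
   constant amount of work, so the total is T(n) = c * n. */
void add_bits(const int *a, const int *b, int *result, int n)
{
    int carry = 0;
    for (int i = 0; i < n; i++)        /* n steps */
    {
        int s = a[i] + b[i] + carry;   /* constant work per step */
        result[i] = s % 2;
        carry = s / 2;
    }
    result[n] = carry;                 /* final carry bit */
}
```

For example, adding 3 ({1, 1, 0}) and 1 ({1, 0, 0}) yields 4 ({0, 0, 1, 0}) after exactly n = 3 constant-time steps.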

Generally, the running time of an algorithm depends upon the following:

- Whether it is running on a single-processor or multi-processor machine
- Whether it is a 32-bit or 64-bit machine
- Read and write speed of the machine
- The time it takes to perform arithmetic operations, logical operations, return values, assignment operations, etc.
- Input data

**Note**

When we calculate the time complexity of an algorithm, we consider only the input data and ignore the rest, as it is machine dependent. We check only how our program behaves for different input values while performing all the operations (arithmetic, logical, return value, assignment, etc.).

Calculating the time complexity of an algorithm based on the system configuration is very difficult, because the configuration changes from one system to another. To solve this problem, we assume a model machine with a specific configuration, so that we can calculate a generalized time complexity according to that model machine.

To calculate the time complexity of an algorithm, we need to define a model machine. Let us assume a machine with the following configuration:

- Single-processor machine
- 32-bit operating system
- It performs sequential execution
- It requires 1 unit of time for arithmetic and logical operations
- It requires 1 unit of time for assignment and return statements
- It requires 1 unit of time for read and write operations

Now, we calculate the time complexity of the following example code using the above-defined model machine.


Consider the following piece of code.

```c
int multiply(int a, int b)
{
    return a * b;
}
```

In the above sample code, it requires 1 unit of time to calculate a*b and 1 unit of time to return the value. That means it takes a total of 2 units of time to complete its execution, and this does not change based on the input values of a and b. For all input values, it requires the same amount of time, i.e., 2 units.

**Note**

If any program requires a fixed amount of time for all input values, then its time complexity is said to be Constant Time Complexity.

Let us consider another example,

```c
int sum(int A[], int n)
{
    int sum = 0, i;
    for(i = 0; i < n; i++)
    {
        sum = sum + A[i];
    }
    return sum;
}
```

In the above calculation:

**Cost** is the amount of computer time required for a single operation in each line.

**Repetition** is the amount of computer time required by each operation for all its repetitions.

**Total** is the amount of computer time required by each operation to execute.

So the above code requires '4n+4' units of computer time to complete the task. Here the exact time is not fixed; it changes with the value of n. If we increase n, the time required also increases linearly. In total it takes '4n+4' units of time to complete its execution, so it has Linear Time Complexity.

**Note**

If the amount of time required by an algorithm increases as the input value increases, then that time complexity is said to be Linear Time Complexity.

**2. Space Complexity**

When we design an algorithm to solve a problem, it needs some computer memory to complete its execution. For any algorithm, memory is required for the following purposes:

- Memory required to store program instructions
- Memory required to store constant values
- Memory required to store variable values
- And for few other things

**Instruction Space:** the amount of memory used to store the compiled version of the instructions.

**Environmental Stack:** the amount of memory used to store information about partially executed functions at the time of a function call.

**Data Space:** the amount of memory used to store all the variables and constants.

**Note**

When we perform analysis of an algorithm based on its space complexity, we consider only Data Space and ignore Instruction Space as well as the Environmental Stack. That means we calculate only the memory required to store variables, constants, structures, etc.

To calculate the space complexity, we must know the memory required to store different datatype values (according to the compiler). For example, the C Programming Language compiler requires the following.

- 2 bytes to store Integer value,
- 4 bytes to store Floating Point value,
- 1 byte to store Character value,
- 8 bytes to store double value.

```c
int square(int a)
{
    return a * a;
}
```

In the above piece of code, it requires 2 bytes of memory to store the variable 'a' and another 2 bytes for the return value. That means it requires a total of 4 bytes of memory to complete its execution, and these 4 bytes are fixed for any input value of 'a'. This space complexity is said to be Constant Space Complexity.

**Note**

If any algorithm requires a fixed amount of space for all input values, then that space complexity is said to be Constant Space Complexity.

Let us consider the following piece of code.

```c
int sum(int A[], int n)
{
    int sum = 0, i;
    for(i = 0; i < n; i++)
    {
        sum = sum + A[i];
    }
    return sum;
}
```

The above piece of code requires

- 'n*2' bytes of memory to store the array variable 'A[]'
- 2 bytes of memory for the integer parameter 'n'
- 4 bytes of memory for the local integer variables 'sum' and 'i' (2 bytes each)
- 2 bytes of memory for the return value.

That is, a total of '2n+8' bytes, which increases linearly with 'n'.

**Note**

If the amount of space required by an algorithm increases as the input value increases, then that space complexity is said to be Linear Space Complexity.

**Types of Analysis**

To analyze a given algorithm, we need to know for which inputs the algorithm takes less time (performs well) and for which inputs it takes a long time. In general, the first case is called the best case and the second the worst case for the algorithm. To analyze an algorithm, we need some kind of syntax, and that forms the base for asymptotic analysis/notation. There are three types of analysis:

**1. Best Case**

* Defines the input for which the algorithm takes the least time (fastest time to complete).

* Best case of a given algorithm is considered when the resource usage is at a minimum. Usually the resource being considered is running time.

* The term best case performance is used to describe an algorithm's behavior under optimal conditions.

* The best-case complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size n.

* In the best-case analysis, we calculate lower bound on running time of an algorithm.

* For example, the best case for a simple linear search on a list occurs when the desired element is the first element of the list. In sorting problem, the best case occurs when the input elements are already sorted in required order.

**2. Worst Case**

* Defines the input for which the algorithm takes the longest time (slowest time to complete).

* Worst case of a given algorithm is considered when the resource usage is at a maximum. Usually the resource being considered is running time.

* The term worst-case performance is used to describe an algorithm's behavior under the least favorable conditions.

* The worst-case complexity of the algorithm is the function defined by the maximum number of steps taken on any instance of size n.

* In worst-case analysis, we calculate an upper bound on the running time of an algorithm.

* For example, the worst case for a simple linear search on a list occurs when the desired element is the last element of the list or is not present at all. In the sorting problem, the worst case occurs when the input elements are sorted in the reverse of the required order.

**3. Average Case**

* Average performance and worst-case performance are the most used in algorithm analysis.

* The average-case complexity of the algorithm is the function defined by the average number of steps taken on any instance of size n.

* In average case analysis, we take all possible inputs and calculate computing time for all of the inputs. Sum all the calculated values and divide the sum by total number of inputs.

* For example, the average case for a simple linear search on a list occurs when the desired element is any element of the list. In sorting problem the average case occurs when the input elements are randomly arranged.
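All three cases can be observed on the same linear search by instrumenting it with a comparison counter; the counter parameter is an illustrative addition, not part of the standard algorithm:

```c
/* Linear search that also reports how many comparisons it made. */
int linear_search(const int *A, int n, int key, int *comparisons)
{
    *comparisons = 0;
    for (int i = 0; i < n; i++)
    {
        (*comparisons)++;              /* one comparison per element visited */
        if (A[i] == key)
            return i;                  /* found at index i */
    }
    return -1;                         /* not found */
}
```

For A = {10, 20, 30, 40, 50}, searching for 10 is the best case (1 comparison), searching for 99 is the worst case (n = 5 comparisons), and the average over all positions falls in between.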

**Asymptotic Notation**

Whenever we want to analyze an algorithm, we need to calculate its complexity. But the calculated complexity does not give the exact amount of resources required. So, instead of taking the exact amount, we represent that complexity in a general form (notation) that captures the basic nature of the algorithm, and we use that general form for the analysis process. The asymptotic notation of an algorithm is a mathematical representation of its complexity.

**Note**

In asymptotic notation, when we want to represent the complexity of an algorithm, we use only the most significant terms in the complexity of that algorithm and ignore least significant terms in the complexity of that algorithm (Here complexity may be Space Complexity or Time Complexity). For example, consider the following time complexities of two algorithms.

- Algorithm 1: 6n² + 3n + 2
- Algorithm 2: 9n² + 8n + 3

Here, for larger values of 'n', the value of the most significant terms (6n² and 9n²) is much larger than the value of the least significant terms (3n + 2 and 8n + 3). So, for larger values of 'n', we ignore the least significant terms when representing the overall time required by an algorithm. In asymptotic notation, we use only the most significant terms to represent the time complexity of an algorithm. We mainly use THREE types of asymptotic notations, as follows:
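How quickly the least significant terms become negligible can be checked numerically: as n grows, 6n² + 3n + 2 (Algorithm 1 above) is dominated by its 6n² term. A small sketch:

```c
/* Fraction of 6n^2 + 3n + 2 contributed by its most significant term. */
double dominant_share(double n)
{
    double total = 6*n*n + 3*n + 2;    /* full cost function           */
    return (6*n*n) / total;            /* share of the 6n^2 term alone */
}
```

At n = 10 the 6n² term already accounts for about 95% of the total (600 / 632), and at n = 1000 for more than 99.9%, which is why the lower-order terms are dropped.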

**1. Big - Oh (O)**

**2. Big - Omega (Ω)**

**3. Big - Theta (Θ)**

**1. Big - Oh Notation (O)**

Big - Oh notation is used to define the upper bound of an algorithm in terms of Time Complexity. That means Big - Oh notation always indicates the maximum time required by an algorithm for all input values. That means Big - Oh notation describes the worst case of an algorithm time complexity.

Consider f(n) to be the time complexity of an algorithm and g(n) to be its most significant term. If f(n) <= C g(n) for all n >= n0, where C > 0 and n0 >= 1, then we can represent f(n) as O(g(n)). For example, if f(n) = n⁴ + 50n² + 6n + 5 is the time complexity of a given algorithm, then g(n) is n⁴. That means g(n) gives the maximum rate of growth for f(n) at larger values of n. Consider the following graph, drawn for the values of f(n) and C g(n), with input (n) on the X-axis and time required on the Y-axis.

In the above graph, after a particular input value n0, C g(n) is always greater than f(n), which indicates the algorithm's upper bound.

Consider the following f(n) and g(n).

f(n) = 3n + 2

g(n) = n

If we want to represent f(n) as O(g(n)), then it must satisfy f(n) <= C g(n) for some C > 0 and n0 >= 1:

f(n) <= C g(n) ⇒ 3n + 2 <= C n

The above condition is TRUE for C = 4 and all n >= 2. So, using Big - Oh notation, we can represent the time complexity as 3n + 2 = O(n).
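The claim that 3n + 2 <= 4n for every n >= 2 can also be checked mechanically over a finite range; a sketch, with the constants C = 4 and n0 = 2 taken from the discussion above:

```c
/* Check f(n) = 3n + 2 <= C*g(n) = 4n for every n in [n0, limit]. */
int holds_for_range(int n0, int limit)
{
    for (int n = n0; n <= limit; n++)
        if (3*n + 2 > 4*n)
            return 0;      /* inequality violated */
    return 1;              /* holds over the whole range */
}
```

The inequality holds for n0 = 2 over any range, but fails at n = 1 (3·1 + 2 = 5 > 4), which is why n0 = 2 is needed.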

**2. Big - Omega Notation (Ω)**

Big - Omega notation is used to define the lower bound of an algorithm in terms of Time Complexity. That means Big - Omega notation always indicates the minimum time required by an algorithm for all input values. That means Big - Omega notation describes the best case of an algorithm time complexity.

Consider f(n) to be the time complexity of an algorithm and g(n) to be its most significant term. If f(n) >= C g(n) for all n >= n0, where C > 0 and n0 >= 1, then we can represent f(n) as Ω(g(n)). For example, if f(n) = 100n² + 10n + 50, then g(n) is n² and f(n) = Ω(n²). Consider the following graph, drawn for the values of f(n) and C g(n), with input (n) on the X-axis and time required on the Y-axis.

Consider the following f(n) and g(n).

f(n) = 3n + 2

g(n) = n

If we want to represent f(n) as Ω(g(n)), then it must satisfy f(n) >= C g(n) for some C > 0 and n0 >= 1:

f(n) >= C g(n) ⇒ 3n + 2 >= C n

The above condition is TRUE for C = 1 and all n >= 1. So, using Big - Omega notation, we can represent the time complexity as 3n + 2 = Ω(n).

**3. Big - Theta Notation (Θ)**

Big - Theta notation is used to define the average bound of an algorithm in terms of Time Complexity. That means Big - Theta notation always indicates the average time required by an algorithm for all input values. That means Big - Theta notation describes the average case of an algorithm time complexity.

Big - Theta notation can be defined as follows: "Consider f(n) to be the time complexity of an algorithm and g(n) to be its most significant term. If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, where C1, C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n))."

Consider the following graph drawn for the values of f(n) and C g(n) for input (n) value on X-Axis and time required is on Y-Axis.

In above graph after a particular input value n0, always C1 g(n) is less than f(n) and C2 g(n) is greater than f(n) which indicates the algorithm's average bound.


Example

Consider the following f(n) and g(n).

f(n) = 3n + 2

g(n) = n

If we want to represent f(n) as Θ(g(n)), then it must satisfy C1 g(n) <= f(n) <= C2 g(n) for some C1, C2 > 0 and n0 >= 1:

C1 g(n) <= f(n) <= C2 g(n)

⇒ C1 n <= 3n + 2 <= C2 n

The above condition is TRUE for C1 = 1, C2 = 4 and all n >= 2. So, using Big - Theta notation, we can represent the time complexity as 3n + 2 = Θ(n).


**Guidelines for Asymptotic Analysis**

There are some general rules to help us determine the running time of an algorithm.

**1. Loops:** The running time of a loop is, at most, the running time of the statements inside the loop (including tests) multiplied by the number of iterations.

```c
// executes n times
for (int i = 1; i <= n; i++)
    a = a + 4;  // constant time
```

Total time = a constant c * n = c n = O(n).

**2. Nested loops:** Analyze from the inside out. Total running time is the product of the sizes of all the loops.

```c
// outer loop executed n times
for (int i = 1; i <= n; i++)
    // inner loop executed n times
    for (int j = 1; j <= n; j++)
        a = a + 4;  // constant time
```

Total time = c * n * n = c n² = O(n²).

**3. Consecutive statements:** Add the time complexities of each statement.

```c
a = a + 2;  // constant time

// executes n times
for (int i = 1; i <= n; i++)
    m = m * 2;  // constant time

// outer loop executed n times
for (int i = 1; i <= n; i++)
{
    // inner loop executed n times
    for (int j = 1; j <= n; j++)
        a = a + 4;  // constant time
}
```

Total time = c0 + c1 n + c2 n² = c n² = O(n²).

**4. If-then-else statements:** Worst-case running time: the test, plus either the then part or the else part (whichever is larger).

```java
// test: constant
if (length() == 0)
{
    return false;  // then part: constant
}
else
{
    // else part: (constant + constant) * n
    for (int i = 0; i < length(); i++)
    {
        // another if: constant + constant (no else part)
        if (!list[i].equals(otherList.list[i]))  // constant
            return false;
    }
}
```

**5. Logarithmic complexity:** An algorithm is O(log n) if it takes a constant time to cut the problem size by a fraction (usually by 1/2). As an example, let us consider the following program.

```c
for (int i = 1; i <= n;)
    i = i * 2;
```

If we observe carefully, the value of i doubles every time. Initially i = 1, in the next step i = 2, and in subsequent steps i = 4, 8 and so on. Let us assume that the loop executes some k times. At the kth step 2^k = n, and at the (k + 1)th step we come out of the loop. Taking the logarithm on both sides gives

log(2^k) = log n ⇒ k log 2 = log n ⇒ k = log n (assuming base 2). Total time = O(log n).

Similarly, for the case below, the worst-case rate of growth is O(log n). The same discussion holds for the decreasing sequence as well.

```c
for (int i = n; i >= 1;)
    i = i / 2;
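Counting the iterations of the doubling loop confirms the O(log n) behavior: the count ends up near log₂(n). A sketch with an added counter, starting from i = 1 as the discussion assumes:

```c
/* Count how many times the doubling-loop body runs for a given n. */
int doubling_steps(int n)
{
    int count = 0;
    for (int i = 1; i <= n; i = i * 2)
        count++;                   /* one step each time i doubles */
    return count;
}
```

For n = 16 the loop body runs 5 times (i = 1, 2, 4, 8, 16) and for n = 1000 it runs 10 times (i = 1 up to 512), i.e., about log₂(n) steps.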

**Recursive Algorithm**

In computer science, algorithms are implemented with programming language functions. We can view a function as something that is invoked (called) by another function; it executes its code and then returns control to the calling function. A function can call itself, or it may call another function which in turn calls the original function.

A function that calls itself is called a Direct Recursive function (or simply a recursive function).

A function that calls another function which in turn calls the original function is called an Indirect Recursive function.
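Both kinds can be sketched in C: `factorial` calls itself directly, while `is_even` and `is_odd` call each other. The function names are illustrative examples, not from the text:

```c
/* Direct recursion: the function calls itself. */
long factorial(int n)
{
    if (n <= 1)
        return 1;                  /* base case stops the recursion */
    return n * factorial(n - 1);   /* the function calls itself     */
}

/* Indirect recursion: is_even calls is_odd, which calls is_even again. */
int is_odd(int n);                 /* forward declaration */

int is_even(int n)
{
    if (n == 0) return 1;
    return is_odd(n - 1);          /* calls the other function */
}

int is_odd(int n)
{
    if (n == 0) return 0;
    return is_even(n - 1);         /* calls back into is_even  */
}
```

Every recursive function needs a base case (here n <= 1 and n == 0) so that the chain of calls terminates.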

**Running Time Analysis**

Running Time Analysis is the process of determining how processing time increases as the size of the problem (input size) increases. Input size is the number of elements in the input, and depending on the problem type, the input may be of different types. The following are the common types of inputs.

- Size of an array
- Polynomial degree
- Number of elements in a matrix
- Number of bits in the binary representation of the input
- Vertices and edges in a graph.

For example, suppose we buy a car and a bicycle. If we want to find the total cost, the cost of the car dominates:

Total cost = cost of the car + cost of the bicycle

Total cost ≈ cost of the car (approximation)

For the above example, we can represent the cost of the car and the cost of the bicycle as functions and, for a given function, ignore the low-order terms that are relatively insignificant (for large values of the input size, n). As an example, in the case below, n⁴, 2n², 100n and 500 are the individual costs of some function, and we approximate it to n⁴, since n⁴ has the highest rate of growth.

`n⁴ + 2n² + 100n + 500 ≈ n⁴`

