**Asymptotic Notations Definition**

Asymptotic notations are a formal language that allows us to analyze an algorithm’s running time by describing its behavior as the input size increases. This behavior is also known as the algorithm’s growth rate. Asymptotic notations give us a method for classifying functions according to their rate of growth.

**Standard Notations**

| Notation | Meaning |
| --- | --- |
| T(n) = O(f(n)) | Asymptotically, T does not grow faster than f. |
| T(n) = Ω(f(n)) | Asymptotically, T does not grow slower than f. |
| T(n) = Θ(f(n)), i.e. T(n) = O(f(n)) and T(n) = Ω(f(n)) | Asymptotically, T grows as fast as f. |
| T(n) = o(f(n)), i.e. lim_{n → ∞} T(n)/f(n) = 0 | Asymptotically, T grows slower than f. |
| T(n) = ω(f(n)), i.e. lim_{n → ∞} T(n)/f(n) = ∞ | Asymptotically, T grows faster than f. |
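As an illustrative sketch (the functions below are assumed examples, not from the table), the limit definitions of o and ω can be checked numerically: for T(n) = n and f(n) = n², the ratio T(n)/f(n) shrinks toward 0, consistent with T(n) = o(f(n)).

```python
def ratio(T, f, n):
    """Return T(n)/f(n) for a given input size n."""
    return T(n) / f(n)

T = lambda n: n          # linear growth
f = lambda n: n ** 2     # quadratic growth

# As n grows, the ratio approaches 0, matching T(n) = o(f(n)).
for n in (10, 1_000, 100_000):
    print(n, ratio(T, f, n))
```

Swapping T and f in the same check makes the ratio grow without bound, which is the ω case.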

## Common Terminologies Used

| Terminology | Meaning |
| --- | --- |
| f(n) = O(1) | f is constant. |
| f(n) = O(log n) | f is logarithmic. |
| f(n) = O(n) | f is linear. |
| f(n) = O(n^2) | f is quadratic. |
| f(n) = O(n^3) | f is cubic. |
| f(n) = O(a^{g(n)}) for some a > 0 and g(n) = ω(log n); often it is understood that g(n) = Ω(n) | f is exponential. |
| f(n) = O(n^a) for some constant a < ∞ | f is polynomial. |
| f(n) = o(n) | f is sublinear. |
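Purely as an illustration (the representative functions are assumptions, not part of the table above), the growth classes can be compared by evaluating one member of each at the same input size and observing how sharply they separate:

```python
import math

# One representative function per growth class listed above.
growth_classes = {
    "constant O(1)":        lambda n: 1,
    "logarithmic O(log n)": lambda n: math.log2(n),
    "linear O(n)":          lambda n: n,
    "quadratic O(n^2)":     lambda n: n ** 2,
    "cubic O(n^3)":         lambda n: n ** 3,
    "exponential O(2^n)":   lambda n: 2 ** n,
}

# Evaluate each at n = 20 to show how quickly the classes diverge.
for name, f in growth_classes.items():
    print(f"{name:22} f(20) = {f(20)}")
```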

**Best, Worst and Average-case Complexity**

**Worst Case**

The function defined by the maximum number of steps taken on any instance of size n. This represents a curve passing through the highest point in each column.

**Best case**

The function defined by the minimum number of steps taken on any instance of size n. This represents a curve passing through the lowest point in each column.

**Average case**

The function defined by the average number of steps over all instances of size n.
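A hedged sketch of the three cases: linear search is an assumed example (it is not named in the text), but it shows how the best, worst, and average step counts are defined for one input size n.

```python
def linear_search_steps(arr, target):
    """Return the number of comparisons linear search performs."""
    steps = 0
    for value in arr:
        steps += 1
        if value == target:
            break
    return steps

data = [7, 2, 9, 4, 6]          # one instance family of size n = 5

best = linear_search_steps(data, 7)    # target at the front: 1 step
worst = linear_search_steps(data, 0)   # target absent: all n steps
# Average over all successful searches of this size-n input:
avg = sum(linear_search_steps(data, x) for x in data) / len(data)
print(best, worst, avg)
```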

**Types of Notations for Complexity**

- Big ‘O’ notation.
- Omega ‘Ω’ notation.
- Theta ‘Θ’ notation.
- Small ‘o’ notation.

### Big O Notation

- This expresses an upper bound on the complexity of an algorithm.
- An algorithm whose complexity does not change with the input size is O(1).
- An algorithm with O(1) complexity is said to have constant time complexity: it takes the same amount of time even if the input size is doubled, tripled, or increased to any level.
- If an algorithm’s time requirement grows in direct proportion to the input size ‘N’, the complexity of the algorithm is O(N).
- Such a linear-time method is O(N).

**Example**

If it takes 100 seconds to process N = 100 elements and 200 seconds to process N = 200 elements, the running time grows in proportion to N, so the algorithm is said to be O(N).
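As a sketch (the summing function is an assumed example), a single pass over N elements is the classic linear-time shape:

```python
def total(values):
    """O(N): one pass over the input; work grows linearly with its length."""
    s = 0
    for v in values:   # the loop body executes exactly N times for N elements
        s += v
    return s
```

Doubling the length of `values` doubles the number of loop iterations, which is exactly the O(N) behavior described above.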

- The quadratic-time method is O(N^2).
- If an algorithm’s time requirement increases quadratically as N increases, the complexity of the algorithm is O(N^2).
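A minimal sketch of the quadratic case (the pair-counting function is an assumed example): two nested loops over the same N elements perform N × N iterations.

```python
def count_pairs(values):
    """O(N^2): the nested loops visit every ordered pair, N * N iterations."""
    count = 0
    for a in values:
        for b in values:   # inner loop runs N times for each of the N outer passes
            count += 1
    return count
```

Doubling N quadruples the iteration count, which is the quadratic growth described above.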

**Note**

- Big-O expressions do not include constants or low-order terms, because when N gets large enough, constants and low-order terms don’t matter.

**Example:**

- O(N^2 + 100) is equivalent to O(N^2) (the constant 100 is ignored).
- O(N^2 + N) is equivalent to O(N^2) (for very large N, the low-order term N is negligible).
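This simplification can be illustrated numerically (a sketch, not from the original text): the ratio of N² + N to the simplified bound N² approaches 1 as N grows, so the low-order term contributes nothing asymptotically.

```python
def ratio(n):
    """Compare N^2 + N against the simplified bound N^2."""
    return (n ** 2 + n) / n ** 2

# The ratio tends to 1 as n grows, justifying dropping the low-order term.
for n in (10, 1_000, 1_000_000):
    print(n, ratio(n))
```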