Time complexity in DSA is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
Because an algorithm’s running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity: the maximum amount of time required for inputs of a given size.
The efficiency of an algorithm depends on two parameters:
- Time Complexity
- Space Complexity
Time Complexity: It is defined as the number of times a particular instruction set is executed rather than the total time taken. This is because the total time also depends on external factors such as the compiler used, the processor’s speed, etc.
Space Complexity: It is the total memory space required by the program for its execution.
| Name | Running time (T(n)) | Example algorithms |
|------|---------------------|--------------------|
| constant time | O(1) | Finding the median value in a sorted array of numbers; calculating (−1)^n |
| inverse Ackermann time | O(α(n)) | Amortized time per operation using a disjoint set |
| iterated logarithmic time | O(log* n) | Distributed coloring of cycles |
| log-logarithmic | O(log log n) | Amortized time per operation using a bounded priority queue |
| logarithmic time | O(log n) | Binary search |
| polylogarithmic time | poly(log n) | |
| fractional power | O(n^c) where 0 < c < 1 | Searching in a kd-tree |
| linear time | O(n) | Finding the smallest or largest item in an unsorted array; Kadane’s algorithm; linear search |
| “n log-star n” time | O(n log* n) | Seidel’s polygon triangulation algorithm |
| linearithmic time | O(n log n) | Fastest possible comparison sort; fast Fourier transform |
| quasilinear time | n poly(log n) | |
| quadratic time | O(n^2) | Bubble sort; insertion sort; direct convolution |
| cubic time | O(n^3) | Naive multiplication of two n×n matrices; calculating partial correlation |
| polynomial time | 2^O(log n) = poly(n) | Karmarkar’s algorithm for linear programming; AKS primality test |
| quasi-polynomial time | 2^poly(log n) | Best-known O(log^2 n)-approximation algorithm for the directed Steiner tree problem |
| sub-exponential time | O(2^(n^ε)) for all ε > 0 | Contains BPP unless EXPTIME equals MA |
| sub-exponential time | 2^o(n) | Best-known algorithm for integer factorization; formerly the best algorithm for graph isomorphism |
| exponential time | 2^O(n) | Solving the traveling salesman problem using dynamic programming |
| exponential time | 2^poly(n) | Solving matrix chain multiplication via brute-force search |
| factorial time | O(n!) | Solving the traveling salesman problem via brute-force search |
| double exponential time | 2^(2^poly(n)) | Deciding the truth of a given statement in Presburger arithmetic |
What are the different types of time complexity notation used?
- Constant time – O(1)
- Linear time – O(n)
- Logarithmic time – O(log n)
- Quadratic time – O(n^2)
- Cubic time – O(n^3)
Constant time – O(1)
An algorithm is said to have constant time with order O(1) when it does not depend on the input size n: the runtime stays the same no matter how large the input is.
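For instance, accessing a list element by index costs the same regardless of the list’s length. A minimal Python sketch (the function name `get_first` is just for illustration):

```python
def get_first(items):
    # O(1): a single index access, independent of len(items)
    return items[0]

print(get_first([7, 3, 9]))  # 7, same cost for a list of 3 or 3 million items
```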
Linear time – O(n)
An algorithm is said to have linear time complexity when the running time increases linearly with the length of the input. A function that checks every value in the input data has time complexity of order O(n).
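As a sketch, finding the largest item in an unsorted list must inspect every element exactly once, so the work grows in direct proportion to n (the helper name `find_max` is illustrative):

```python
def find_max(items):
    # O(n): one comparison per element, so the work scales with len(items)
    largest = items[0]
    for value in items[1:]:
        if value > largest:
            largest = value
    return largest

print(find_max([4, 1, 8, 3]))  # 8
```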
Logarithmic time – O(log n)
An algorithm is said to have logarithmic time complexity when it reduces the size of the input data in each step. As a result, the number of operations grows far more slowly than the input size. Algorithms with logarithmic time complexity are typically found in binary search trees and binary search.
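One way to see this is a loop that halves its input on every pass: doubling n adds only one more iteration. A minimal sketch (the helper name `halving_steps` is made up for illustration):

```python
def halving_steps(n):
    # The loop body runs O(log n) times because the problem
    # size is cut in half on every iteration.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(1024))  # 10, since 2**10 == 1024
```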
Quadratic time – O(n^2)
An algorithm is said to have quadratic time complexity when the running time increases non-linearly, as n^2, with the length of the input. Nested loops generally fall under this order: a single loop takes O(n), and a loop within a loop takes O(n) * O(n) = O(n^2).
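A hypothetical sketch of the nested-loop pattern: counting all ordered pairs of positions touches n elements for each of n outer iterations, giving O(n^2) work in total:

```python
def count_pairs(items):
    # Outer loop: O(n). Inner loop: O(n). Total: O(n^2).
    n = len(items)
    pairs = 0
    for i in range(n):
        for j in range(n):
            if i != j:
                pairs += 1
    return pairs

print(count_pairs([5, 2, 9]))  # 6 ordered pairs from 3 elements
```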
Time Complexity of Insertion Sort
The time complexity of Insertion Sort in the best case is O(n). In the worst case, the time complexity is O(n^2).
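A textbook in-place insertion sort makes both cases visible: on already-sorted input the inner while loop never runs (O(n)), while reverse-sorted input shifts each element past all the earlier ones (O(n^2)). A minimal sketch:

```python
def insertion_sort(items):
    # Sorts the list in place.
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift larger elements one slot to the right.
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key

data = [5, 2, 4, 1]
insertion_sort(data)
print(data)  # [1, 2, 4, 5]
```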
Time Complexity of Merge Sort
Merge Sort has the same time complexity in all cases: O(n log n) in the best, average, and worst case. This is because Merge Sort performs the same sequence of splitting and merging steps regardless of the input order.
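A standard top-down merge sort shows why: the list is halved about log n times, and each level does O(n) merge work no matter how the input is arranged. A minimal sketch:

```python
def merge_sort(items):
    # Returns a sorted copy; O(n log n) in every case.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One half is exhausted; append the remainder of the other.
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 1]))  # [1, 2, 4, 5]
```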
Time Complexity of Bubble Sort
The time complexity of Bubble Sort in the best case is O(n), assuming the common early-exit check for an already-sorted list. In the worst case, the time complexity is O(n^2).
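A sketch with that early-exit flag: once a full pass makes no swaps, the list is already sorted and the loop stops, which is what gives the O(n) best case:

```python
def bubble_sort(items):
    # Sorts the list in place.
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            # No swaps in a full pass: already sorted, stop early.
            break

data = [5, 2, 4, 1]
bubble_sort(data)
print(data)  # [1, 2, 4, 5]
```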
Time Complexity of Quick Sort
The time complexity of Quick Sort in the best case is O(n log n). In the worst case, the time complexity is O(n^2). Quicksort is often the fastest comparison sort in practice thanks to its O(n log n) performance in the best and average cases.
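A simple (not in-place) quicksort sketch; note that a consistently bad pivot, such as always picking the smallest element, degrades it to the O(n^2) worst case:

```python
def quick_sort(items):
    # Returns a sorted copy; O(n log n) when the pivot splits evenly.
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quick_sort(smaller) + equal + quick_sort(larger)

print(quick_sort([5, 2, 4, 1]))  # [1, 2, 4, 5]
```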
Time Complexity of Searching algorithms
Let us now dive into the time complexities of some searching algorithms and see which of them is faster.
Time Complexity of Linear Search
Linear Search accesses elements sequentially. The time complexity of Linear Search in the best case is O(1). In the worst case, the time complexity is O(n).
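A minimal sketch: the loop returns immediately if the target sits at index 0 (best case, O(1)) and scans the whole list when the target is last or absent (worst case, O(n)):

```python
def linear_search(items, target):
    # Returns the index of target, or -1 if absent.
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(linear_search([4, 1, 8, 3], 8))  # 2
```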
Time Complexity of Binary Search
Binary Search is the faster of the two searching algorithms. However, for smaller arrays, linear search does a better job. The time complexity of Binary Search in the best case is O(1). In the worst case, the time complexity is O(log n).
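A minimal iterative sketch, assuming the input list is already sorted; each comparison halves the remaining search space, which is where the O(log n) worst case comes from:

```python
def binary_search(sorted_items, target):
    # Returns the index of target in a sorted list, or -1 if absent.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1   # discard the lower half
        else:
            high = mid - 1  # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```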
Happy Time Complexity in DSA 🙂