
O(1) vs O(n) vs O(log n) vs O(n^2): Key Differences and Usage

The O(1) time complexity means an operation takes constant time regardless of input size. O(n) means the time grows linearly with input size, O(log n) grows logarithmically (much slower), and O(n^2) grows quadratically, becoming very slow for large inputs.

Quick Comparison

Here is a quick overview of how these time complexities compare in terms of speed, scalability, and example operations.

| Time Complexity | Growth Rate | Speed | Example Operation | Scalability |
|---|---|---|---|---|
| O(1) | Constant | Fastest | Accessing an array element by index | Excellent |
| O(log n) | Logarithmic | Very fast | Binary search in a sorted array | Very good |
| O(n) | Linear | Moderate | Finding an item in an unsorted list | Fair |
| O(n^2) | Quadratic | Slow | Nested loops over the same data | Poor for large n |

Key Differences

O(1) means the operation takes the same amount of time no matter how big the input is. For example, getting the first item in a list is always quick.
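A minimal sketch of constant-time access (the list contents here are just illustrative):

```python
def get_first(items):
    # O(1): indexing a Python list takes the same time
    # whether it holds 5 elements or 5 million.
    return items[0]

print(get_first([5, 3, 8, 4, 2]))  # 5
```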

O(n) means the time grows directly with the input size. If you have twice as many items, it takes roughly twice as long. Searching an unsorted list is a classic example, because in the worst case you check every item.

O(log n) grows much slower than O(n). It means each step cuts the problem size roughly in half, like in binary search. This makes it very efficient for large data.

O(n^2) means the time grows with the square of the input size. This happens with nested loops, where for each item you do a full pass over all items again. It becomes very slow as data grows.
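A small sketch of the nested-loop pattern, using a naive duplicate check as the illustrative task:

```python
def has_duplicate(arr):
    # O(n^2): for each element, scan the whole list again.
    # Doubling the input roughly quadruples the comparisons.
    for i in range(len(arr)):
        for j in range(len(arr)):
            if i != j and arr[i] == arr[j]:
                return True
    return False

print(has_duplicate([5, 3, 8, 3, 2]))  # True
print(has_duplicate([5, 3, 8, 4, 2]))  # False
```

With 10 items this does up to 100 comparisons; with 10,000 items, up to 100 million, which is why quadratic algorithms stop being practical as data grows.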


Code Comparison

Here is an example of O(n) time complexity: searching for a number in an unsorted list by checking each element.

```python
def linear_search(arr, target):
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

numbers = [5, 3, 8, 4, 2]
index = linear_search(numbers, 4)
print(index)
```

Output:

```
3
```

O(log n) Equivalent

Here is an example of O(log n) time complexity: binary search on a sorted list, which repeatedly halves the search space.

```python
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

sorted_numbers = [2, 3, 4, 5, 8]
index = binary_search(sorted_numbers, 4)
print(index)
```

Output:

```
2
```

When to Use Which

Choose O(1) operations whenever possible for the fastest performance, such as direct access in arrays or hash tables.
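A quick sketch of constant-time lookup in a hash table (the dictionary contents are just illustrative):

```python
# O(1) on average: a dict hashes the key straight to its
# storage slot instead of scanning every entry.
ages = {"alice": 30, "bob": 25, "carol": 41}

print(ages["bob"])     # 25
print("dave" in ages)  # False
```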

Use O(log n) algorithms like binary search when working with sorted data to get efficient lookups.

O(n) is acceptable for simple scans or when data is unsorted and small.

Avoid O(n^2) algorithms for large data sets because they become very slow; use them only for small inputs or when no better option exists.

Key Takeaways

- O(1) is the fastest: constant time regardless of input size.
- O(log n) grows slowly and is efficient for large sorted data.
- O(n) grows linearly and is simple but slower for big data.
- O(n^2) grows very fast and should be avoided for large inputs.
- Choose algorithms based on input size and data structure for best performance.