Count Inversions in an Array (DSA in Python) - Time & Space Complexity
We want to understand how long it takes to count inversions in an array as the array grows.
Specifically, how does the work increase when the array size gets bigger?
Analyze the time complexity of the following code snippet.
```python
def merge_sort_count(arr):
    if len(arr) <= 1:
        return arr, 0
    mid = len(arr) // 2
    left, left_inv = merge_sort_count(arr[:mid])
    right, right_inv = merge_sort_count(arr[mid:])
    # merge_count merges two sorted arrays and counts split inversions
    merged, split_inv = merge_count(left, right)
    return merged, left_inv + right_inv + split_inv
```
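The snippet above relies on a `merge_count` helper that is not shown. A minimal sketch of how such a helper could work (the body below is an illustration, not the original author's code): whenever an element from the right half is placed before remaining elements of the left half, it is smaller than all of them, so every remaining left element forms one inversion with it.

```python
def merge_count(left, right):
    """Merge two sorted lists and count split inversions."""
    merged = []
    inversions = 0
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # right[j] jumps ahead of every remaining left element,
            # so each of those pairs is one inversion.
            merged.append(right[j])
            inversions += len(left) - i
            j += 1
    # One of the two lists is now empty; append the leftovers.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inversions
```

For example, merging `[1, 3, 5]` with `[2, 4]` counts three split inversions: 2 passes 3 and 5, and 4 passes 5.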
This code counts how many pairs of elements are out of order by using a merge sort approach.
Identify the loops, recursion, and array traversals that repeat.
- Primary operation: recursive splitting and merging of the array.
- How many times: the array is halved until subarrays of size 1 remain, giving about log₂ n levels of recursion; the merges at each level together touch all n elements.
Each merge compares elements from the two sorted halves and counts split inversions as it goes.
| Input Size (n) | Approx. Operations (≈ n · log₂ n) |
|---|---|
| 10 | About 33 comparisons and merges |
| 100 | About 664 comparisons and merges |
| 1000 | About 9,966 comparisons and merges |
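These figures are rough (the exact count depends on what you treat as one operation), but the n · log₂ n trend behind them can be checked directly:

```python
import math

# n * log2(n) for the table's input sizes
for n in (10, 100, 1000):
    print(n, round(n * math.log2(n)))
```

Doubling n a little more than doubles the work, which is the signature of n log n growth.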
Pattern observation: the work grows roughly as n times log n.
Time Complexity: O(n log n)
Space Complexity: O(n), for the temporary sublists created by slicing and merging (plus O(log n) recursion stack).
This means the time to count inversions grows only slightly faster than linearly, because each of the log n levels of splitting does linear merge work.
[X] Wrong: "Counting inversions must be O(n²) because we compare every pair."
[OK] Correct: The merge sort method cleverly counts inversions during merging, avoiding checking all pairs directly.
Understanding this method shows you can improve naive solutions by using smart divide-and-conquer techniques.
"What if we used a simple nested loop to count inversions? How would the time complexity change?"