Sort a Linked List Using Merge Sort in DSA Python - Time & Space Complexity
We want to understand how the time needed to sort a linked list grows as the list gets bigger: how does the sorting time change as the list size increases?
Analyze the time complexity of the following code snippet.
```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_sort(head):
    if not head or not head.next:
        return head
    # Find the middle with slow/fast pointers
    slow, fast = head, head.next
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    mid = slow.next
    slow.next = None  # split the list into two halves
    left = merge_sort(head)
    right = merge_sort(mid)
    return merge(left, right)

# merge(left, right) merges two sorted lists (defined separately)
```
This code splits the linked list into halves recursively and merges sorted halves back.
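The `merge` function itself is only referenced, not shown. A minimal iterative sketch (assuming the same `ListNode` class; the dummy-node style is one common choice, not the only one) could look like this:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge(left, right):
    """Merge two sorted linked lists into one sorted list."""
    dummy = ListNode()          # placeholder head to simplify appending
    tail = dummy
    while left and right:
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right   # attach whichever half still has nodes
    return dummy.next

# Example: merge 1 -> 3 with 2 -> 4
a = ListNode(1, ListNode(3))
b = ListNode(2, ListNode(4))
node = merge(a, b)
vals = []
while node:
    vals.append(node.val)
    node = node.next
print(vals)  # [1, 2, 3, 4]
```

Because `merge` only rewires `next` pointers, it uses O(1) extra space, which is one reason merge sort is a natural fit for linked lists.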
- Primary operation: Recursively splitting the list and merging sorted halves.
- How many times: The list is halved about log n times, giving roughly log n levels of recursion; at each level, merging touches all n nodes.
Each split divides the list in half, and the merges at one level together take time proportional to the total list size, so the work is about n operations per level across log n levels.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 40 operations |
| 100 | About 700 operations |
| 1000 | About 10,000 operations |
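The table's growth pattern can be checked empirically by counting comparisons inside `merge` (a rough sketch; the global counter and helper names here are illustrative additions, not part of the original code):

```python
import random

class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

comparisons = 0  # counts element comparisons made during merging

def merge(left, right):
    global comparisons
    dummy = ListNode()
    tail = dummy
    while left and right:
        comparisons += 1
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next

def merge_sort(head):
    if not head or not head.next:
        return head
    slow, fast = head, head.next
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    mid = slow.next
    slow.next = None
    return merge(merge_sort(head), merge_sort(mid))

def from_list(values):
    """Build a linked list from a Python list."""
    head = None
    for v in reversed(values):
        head = ListNode(v, head)
    return head

for n in (10, 100, 1000):
    comparisons = 0
    merge_sort(from_list([random.random() for _ in range(n)]))
    print(n, comparisons)  # grows roughly like n * log2(n)
```

For n = 1000 the count comes out near 9,000, in line with the "about 10,000 operations" row above; the counts grow far slower than n squared would.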
Pattern observation: Operations grow a bit faster than n but much slower than n squared, roughly n times log n.
Time Complexity: O(n log n)
This means the time to sort grows a little more than linearly as the list gets bigger, which is efficient for sorting.
[X] Wrong: "Since we split the list, the time should be just O(log n)."
[OK] Correct: The recursion has about log n levels of splitting, but merging at each level takes time proportional to n, so the total time is n times log n.
Understanding merge sort on linked lists shows you can handle recursion and linked data well, a useful skill in many coding challenges.
"What if we used quicksort instead of merge sort on the linked list? How would the time complexity change?"