Sort a Linked List Using Merge Sort in C (DSA) - Time & Space Complexity
We want to understand how the time needed to sort a linked list grows as the list gets bigger.
Specifically, we ask: How does merge sort handle linked lists in terms of speed?
Analyze the time complexity of the following code snippet.
```c
// Merge sort for linked list
Node* mergeSort(Node* head) {
    if (!head || !head->next) return head;  // 0 or 1 node: already sorted
    Node* mid = getMiddle(head);
    Node* left = head;
    Node* right = mid->next;
    mid->next = NULL;                       // cut the list into two halves
    left = mergeSort(left);
    right = mergeSort(right);
    return merge(left, right);
}
```
This code splits the list into halves, sorts each half recursively, and merges them back.
Identify the operations that repeat: here it is the recursive calls and the list traversals (there are no array accesses, since a linked list is walked through pointers).
- Primary operation: Recursively splitting the list and merging sorted halves.
- How many times: The list is halved into about log2(n) levels of recursion, and merging touches all n nodes at each level.
Each split divides the list in half, and merging processes all nodes once per level.
| Input Size (n) | Approx. Operations (n × ⌈log₂ n⌉) |
|---|---|
| 10 | About 10 * 4 = 40 |
| 100 | About 100 * 7 = 700 |
| 1000 | About 1000 * 10 = 10,000 |
Pattern observation: Operations grow roughly as n times log n, meaning doubling the input size slightly more than doubles the work.
Time Complexity: O(n log n)
Space Complexity: O(log n) for the recursion stack; merging relinks existing nodes, so no auxiliary arrays are needed.
This means sorting takes longer as the list grows, but the increase is controlled and efficient for large lists.
[X] Wrong: "Merge sort on linked list is as slow as bubble sort, so it's O(n²)."
[OK] Correct: Merge sort splits the list and merges efficiently, avoiding repeated comparisons like bubble sort, so it runs much faster.
Understanding merge sort on linked lists shows you can handle recursion and pointers well, skills that are useful in many coding challenges.
"What if we used quicksort instead of merge sort on the linked list? How would the time complexity change?"
