Shell Sort Algorithm in DSA C++ - Time & Space Complexity
We want to understand how the time taken by Shell Sort changes as the list size grows.
How does the number of steps increase when sorting bigger lists?
Analyze the time complexity of the following Shell Sort code.
```cpp
void shellSort(int arr[], int n) {
    // Start with a large gap, then halve it on each pass.
    for (int gap = n / 2; gap > 0; gap /= 2) {
        // Do a gapped insertion sort for this gap size.
        for (int i = gap; i < n; i++) {
            int temp = arr[i];
            int j = i;
            // Shift earlier gap-sorted elements up until the
            // correct position for arr[i] is found.
            while (j >= gap && arr[j - gap] > temp) {
                arr[j] = arr[j - gap];
                j -= gap;
            }
            arr[j] = temp;
        }
    }
}
```
This code sorts an array by comparing elements far apart and gradually reducing the gap.
Look at the loops and repeated comparisons.
- Primary operation: The inner while loop that shifts elements to insert the current element.
- How many times: For each gap, the inner loop performs a gapped insertion sort over roughly n - gap elements, and since the gap halves each pass there are about log2(n) passes in total.
As the list gets bigger, the number of comparisons and shifts grows, but not as fast as simple sorting methods.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Roughly 30 operations (about 10^1.5) |
| 100 | Roughly 1,000 operations |
| 1000 | Roughly 30,000 operations, far fewer than n² = 1,000,000 |
Pattern observation: The operations grow faster than linear but slower than quadratic, improving efficiency over simple sorts.
Time Complexity: roughly O(n^{1.5}) in practice for this halving gap sequence. (Strictly, halving gaps can degrade to O(n²) in the worst case, but that is rarely reached; better gap sequences guarantee tighter bounds.)
This means the time grows faster than a straight line but much slower than the square of the input size.
Space Complexity: O(1), since the sort works in place using only a few index and temporary variables.
[X] Wrong: "Shell Sort always runs in quadratic time like Bubble Sort."
[OK] Correct: Shell Sort uses gaps to reduce the number of comparisons, so it usually runs faster than simple quadratic sorts.
Understanding Shell Sort's time complexity helps you explain how improving sorting steps can speed up algorithms, a useful skill in interviews.
"What if we changed the gap sequence to use a different pattern, like 3x+1 increments? How would the time complexity change?"