Job Scheduling with Deadlines (DSA in C) - Time & Space Complexity
We want to understand how the time needed to schedule jobs changes as the number of jobs grows.
How does the scheduling process scale when we add more jobs with deadlines?
Analyze the time complexity of the following code snippet.
```c
#include <stdbool.h>
#include <stdlib.h>

// Job Scheduling with Deadlines
// jobs[] contains pairs of (deadline, profit); n is the number of jobs
typedef struct { int deadline; int profit; } Job;

// qsort comparator: highest profit first
int compareProfit(const void *a, const void *b) {
    const Job *x = a, *y = b;
    return (y->profit > x->profit) - (y->profit < x->profit);
}

void jobScheduling(Job jobs[], int n) {
    // Sort jobs by profit descending
    qsort(jobs, n, sizeof(Job), compareProfit);
    int result[n];   // result[j] = index of the job placed in slot j
    bool slot[n];    // slot[j] = true if time slot j is taken
    for (int i = 0; i < n; i++) slot[i] = false;

    for (int i = 0; i < n; i++) {
        // Try the latest free slot before this job's deadline
        // (clamped to n - 1 so we never index past the slot array)
        int start = jobs[i].deadline < n ? jobs[i].deadline - 1 : n - 1;
        for (int j = start; j >= 0; j--) {
            if (!slot[j]) {
                slot[j] = true;
                result[j] = i;
                break;
            }
        }
    }
}
```
This code schedules jobs to maximize profit by placing each job in the latest free slot before its deadline.
Identify the repeated work: loops, recursion, and array traversals.
- Primary operation: Nested loops where for each job we try to find a free slot before its deadline.
- How many times: The outer loop runs n times (once per job); the inner loop scans backwards from the job's deadline and can run up to n times in the worst case.
As the number of jobs increases, the number of checks to find free slots grows roughly with the square of the number of jobs.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 100 checks |
| 100 | About 10,000 checks |
| 1000 | About 1,000,000 checks |
Pattern observation: The operations grow roughly by the square of the input size.
Time Complexity: O(n²) for the slot-filling loops; the initial sort adds O(n log n), which is dominated.
Space Complexity: O(n) for the result[] and slot[] arrays.
This means if you double the number of jobs, the time to schedule them roughly quadruples.
[X] Wrong: "The scheduling always takes linear time because we just loop through jobs once."
[OK] Correct: Each job may require checking multiple slots before finding a free one, causing nested loops and more work as jobs increase.
Understanding how nested loops affect time helps you explain and improve scheduling algorithms clearly in real situations.
"What if we used a data structure to find free slots faster? How would the time complexity change?"