# Priority Scheduling in Operating Systems: Time & Space Complexity
When using priority scheduling in an operating system, it's important to understand how the time spent scheduling grows as the number of processes increases.
Analyze the time complexity of the following priority scheduling code snippet.
```
while process_list is not empty:
    find process with highest priority
    run that process
    remove it from process_list
```
This code selects and runs the highest priority process repeatedly until all are done.
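The pseudocode above can be sketched in Python. The tuple layout `(priority, name)` and the convention "larger number = higher priority" are assumptions for illustration; a real scheduler tracks much more per-process state.

```python
def schedule(process_list):
    """Run processes in priority order using a linear search each round."""
    order = []
    while process_list:                        # one round per process: n rounds
        best = max(process_list,               # linear scan of the remaining list
                   key=lambda p: p[0])
        order.append(best[1])                  # "run" the selected process
        process_list.remove(best)              # remove it from process_list
    return order

print(schedule([(2, "editor"), (5, "kernel"), (1, "logger")]))
# → ['kernel', 'editor', 'logger']
```

The `max` call is the "find process with highest priority" step; its cost is proportional to how many processes remain, which is what drives the analysis below.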
Look at what repeats in this scheduling approach.
- Primary operation: finding the highest-priority process in the list.
- How many times: once per process, so n times for n processes.
As the number of processes increases, each search gets more expensive, because it must check every remaining process: the first search makes n checks, the next n − 1, and so on down to 1.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 55 checks |
| 100 | About 5,050 checks |
| 1000 | About 500,500 checks |
Pattern observation: the total number of operations is n(n + 1)/2, which grows roughly like the square of the input size rather than in proportion to it.
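The table's figures come from summing the shrinking scans. A quick counting sketch (the helper name is ours, purely for illustration) reproduces them:

```python
def total_checks(n):
    """Comparisons made by the linear-search scheduler on n processes:
    the first scan checks n processes, the next n - 1, ... down to 1."""
    return sum(range(n, 0, -1))   # n + (n-1) + ... + 1 = n*(n+1)/2

for n in (10, 100, 1000):
    print(n, total_checks(n))
# → 10 55
#   100 5050
#   1000 500500
```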
Time Complexity: O(n²)
This means that if you double the number of processes, the scheduling time roughly quadruples.
[X] Wrong: "Finding the highest priority process each time only takes the same time regardless of how many processes are left."
[OK] Correct: Actually, each search scans every remaining process, so it takes time proportional to the list's current length; those costs add up to n(n + 1)/2 checks overall.
Understanding how scheduling time grows helps you explain how operating systems handle many tasks efficiently, a useful skill in system design discussions.
What if we used a priority queue data structure instead of searching the list each time? How would the time complexity change?
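One possible answer, sketched with Python's `heapq` module (a binary min-heap). Building the heap is O(n), and each pop is O(log n), so scheduling n processes drops from O(n²) to O(n log n). Since `heapq` always pops the *smallest* value, we negate priorities to get "highest priority first"; the process tuples here are the same illustrative assumption as before.

```python
import heapq

def schedule_with_heap(processes):
    """Run processes highest-priority-first using a binary heap."""
    heap = [(-priority, name) for priority, name in processes]
    heapq.heapify(heap)                 # O(n) to build the heap
    order = []
    while heap:
        _, name = heapq.heappop(heap)   # O(log n) per pop
        order.append(name)
    return order

print(schedule_with_heap([(2, "editor"), (5, "kernel"), (1, "logger")]))
# → ['kernel', 'editor', 'logger']
```

The trade-off: a heap finds the maximum in O(log n) instead of O(n) per round, at the cost of maintaining the heap invariant on every insertion and removal.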