top and htop (live monitoring) in Linux CLI - Time Complexity
When using live monitoring tools like top and htop, it's important to understand how their performance changes as the system load grows.
We want to know how the cost of updating the display scales with the number of running processes.
Analyze the time complexity of the following simplified shell loop, which mimics what top and htop do on every refresh:
```shell
while true; do
  ps -eo pid,comm,pcpu,pmem --sort=-pcpu | head -n 20
  sleep 1
done
```
This loop fetches and displays the top 20 CPU-consuming processes every second.
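To see what n is on your own machine, you can count the processes the pipeline has to examine before `head` trims the output. This is a quick sketch, and the count will differ from system to system:

```shell
# Count the processes ps must list and sort -- the "n" in the
# analysis below. "tail -n +2" drops the PID header line.
ps -eo pid | tail -n +2 | wc -l
```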
Identify the work that repeats: here it is not recursion or array traversal in your own code, but the process listing and sort that run on every iteration of the loop.
- Primary operation: Listing all processes and sorting them by CPU usage.
- How many times: This happens once every second, repeatedly.
As the number of running processes increases, the system must sort more items before showing the top 20.
| Input Size (n) | Approx. Sorting Operations (n log₂ n) |
|---|---|
| 10 | ~33 comparisons |
| 100 | ~664 comparisons |
| 1000 | ~9,966 comparisons |
Pattern observation: The sorting work grows as the number of processes grows, making updates slower with more processes.
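The pipeline's behavior can be simulated without `ps`. In the sketch below, `seq` and `shuf` stand in for real process data: 100 distinct "CPU" values in random order are all sorted, and only the 20 largest are kept, exactly as the loop above does:

```shell
# Simulate n=100 processes with distinct "CPU" values in random order,
# then sort all of them and keep the top 20. Even though only 20 lines
# are shown, every one of the 100 values must be sorted first.
seq 1 100 | shuf | sort -rn | head -n 20
```

The output is always 100 down to 81: `head` only trims the display, it does not reduce the sorting work.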
Time Complexity: O(n log n)
This means the update time grows slightly faster than linearly in the number of processes: listing n processes is O(n), but the comparison-based sort adds an O(n log n) term that dominates as the list grows.
[X] Wrong: "The update time stays the same no matter how many processes run."
[OK] Correct: Sorting all processes takes more time as the list grows, so updates slow down with more processes.
Understanding how live monitoring tools scale helps you think about performance in real systems, a useful skill for many tech roles.
What if the tool only scanned the top 20 processes without sorting all? How would the time complexity change?
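One hedged answer: if the tool instead kept a running list of only the 20 largest values during a single pass, each update would cost O(n log k) with a heap, or O(n·k) with simple insertion, and for a fixed k = 20 that is effectively O(n). A rough awk sketch of the insertion idea, again using synthetic numbers in place of real process data:

```shell
# One-pass top-20 without sorting everything: maintain a small
# descending array "top" of at most 20 entries while scanning.
# For fixed k=20 this is O(n) per update instead of O(n log n).
seq 1 100 | shuf | awk '
  {
    # shift smaller entries down, then insert the new value
    for (i = n; i >= 1 && top[i] < $1; i--) top[i+1] = top[i]
    top[i+1] = $1
    if (n < 20) n++   # entries pushed past slot 20 are discarded
  }
  END { for (i = 1; i <= n; i++) print top[i] }
'
```

The output matches the sort-based pipeline (100 down to 81), but the full list is never sorted.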