Why Threads Enable Concurrent Execution in Operating Systems: A Performance Analysis
We want to understand how using threads affects the time it takes to run tasks.
Specifically, how does splitting work into threads change the total execution time as the number of tasks grows?
Consider the time complexity of this simple threaded task-execution pattern:
```
for each task in tasks:
    create a thread to run task
wait for all threads to finish
```
This code runs multiple tasks by creating a thread for each, then waits for all to complete.
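A minimal runnable sketch of this pattern using Python's standard `threading` module (the task names and the `run_task` helper are illustrative, not from the original pseudocode):

```python
import threading

def run_task(name, results):
    # Stand-in for real work; an actual task would do I/O or computation.
    results.append(f"{name} done")

def run_all(tasks):
    results = []
    threads = []
    # Create and start one thread per task.
    for task in tasks:
        t = threading.Thread(target=run_task, args=(task, results))
        t.start()
        threads.append(t)
    # Wait for all threads to finish.
    for t in threads:
        t.join()
    return results
```

Note that the completion order of the threads is not guaranteed, so `results` may come back in any order.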
Look at what repeats and takes time:
- Primary operation: Creating and running a thread for each task
- How many times: Once per task, so n times for n tasks
As the number of tasks grows, the threads still run largely in parallel, so the tasks overlap in time rather than queue up one after another.
| Number of Tasks (n) | What Happens |
|---|---|
| 10 | About 10 threads running mostly together |
| 100 | About 100 threads running mostly together |
| 1000 | About 1000 threads running mostly together |
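The table above can be checked empirically with a timing sketch: each thread just sleeps (a stand-in for I/O-bound work), and the measured wall time stays near the single-task duration as n grows, apart from per-thread creation overhead. The helper name and the 0.05 s duration are illustrative choices:

```python
import threading
import time

def wall_time_for(n, task_duration=0.05):
    """Run n threads that each sleep task_duration seconds; return elapsed wall time."""
    threads = [threading.Thread(target=time.sleep, args=(task_duration,))
               for _ in range(n)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# Wall time for 10 threads and for 100 threads both land near 0.05 s,
# nowhere near 10x apart, because the sleeps overlap.
```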
Pattern observation: The total time depends on the longest task, not the number of tasks, because threads run tasks concurrently.
Time Complexity: O(t), where t is the duration of the longest single task.
This means the total time grows with the longest task's running time, not with the number of tasks. (This idealization ignores the small per-thread creation overhead, which is O(n), and assumes the tasks can actually overlap, e.g. they are I/O-bound or enough CPU cores are available.)
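The contrast can be stated as simple arithmetic. With hypothetical per-task durations (the numbers below are illustrative only), sequential execution costs the sum, while ideal threaded execution costs only the maximum:

```python
# Hypothetical per-task durations in seconds (illustrative values).
durations = [0.2, 0.5, 0.3, 0.4]

sequential_time = sum(durations)  # tasks run one after another: ~1.4 s
concurrent_time = max(durations)  # threads overlap, longest task dominates: 0.5 s
```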
[X] Wrong: "More tasks always mean more total time because each task adds time."
[OK] Correct: Threads let tasks run at the same time, so total time depends on the longest task, not the count.
Understanding how threads let tasks run together helps you explain real-world programs that do many things at once efficiently.
"What if tasks have to share a resource and wait for each other? How would that affect the time complexity?"
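One way to explore that closing question is a hedged sketch: if every task must hold the same lock for its entire duration, the threads queue up instead of overlapping, and total time degrades from O(t) back toward the sequential O(n * t). The lock, the helper names, and the durations here are all illustrative assumptions:

```python
import threading
import time

lock = threading.Lock()

def exclusive_task(duration):
    # Holding the lock for the whole task forces one-at-a-time execution.
    with lock:
        time.sleep(duration)

def timed_with_lock(n, duration=0.02):
    """Run n fully serialized tasks; return elapsed wall time (~ n * duration)."""
    threads = [threading.Thread(target=exclusive_task, args=(duration,))
               for _ in range(n)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start
```

In practice, contention usually falls between the two extremes: the shorter the critical section relative to the task, the closer total time stays to the concurrent O(t) bound.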