Why interrupts are needed in Embedded C - Performance Analysis
We want to understand how the use of interrupts affects the time a program spends waiting for or checking on events.
How does an interrupt-driven design change the way the program spends its time compared to checking repeatedly (polling)?
Analyze the time complexity of the following code snippet.
```c
// Polling example without interrupts
while (1) {
    if (button_pressed()) {
        handle_button();
    }
}
```
This code keeps checking if a button is pressed in a continuous loop, handling it when detected.
Identify the repeated operations: loops, recursion, or array traversals.
- Primary operation: The infinite loop that checks the button state repeatedly.
- How many times: This check happens continuously, as fast as the processor can run.
Explain the growth pattern intuitively.
| Events (n) | Approx. Operations |
|---|---|
| 10 | Many checks, mostly waiting, until 10 events happen |
| 100 | Many more checks, still mostly waiting |
| 1000 | Even more checks, time spent mostly on waiting |
Pattern observation: The program spends most of its time checking even when nothing happens, so the work grows with elapsed time, not with the number of events.
Time Complexity: O(n), where n is the number of checks performed.
This means the cost grows linearly with how long the loop runs, not with how many events occur: even if events are rare, every loop iteration still pays the cost of a check.
[X] Wrong: "Checking repeatedly doesn't cost much time because the processor is fast."
[OK] Correct: Even if fast, the processor wastes time checking instead of doing other useful work or sleeping.
Understanding how interrupts reduce wasted time helps you explain efficient program design, a key skill in embedded systems.
"What if we replaced the polling loop with an interrupt-driven approach? How would the time complexity change?"