Generating precise delays with timers in Embedded C - Time & Space Complexity
When using software loops or timers to create delays in embedded C, it's important to understand how the program's running time grows as the requested delay length changes.
Analyze the time complexity of the following code snippet.
void delay_ms(int ms) {
    for (int i = 0; i < ms; i++) {
        for (int j = 0; j < 1000; j++) {
            // empty loop to waste time
        }
    }
}
This code creates a delay by running two nested loops: the outer loop runs once per millisecond, and the inner loop runs 1000 times to waste time.
Identify the repeated work: loops, recursion, or array traversals.
- Primary operation: The inner empty loop that runs 1000 times.
- How many times: The inner loop runs 1000 times for each iteration of the outer loop, which runs ms times.
As the delay time ms increases, the total number of loop operations grows proportionally.
| Input Size (ms) | Approx. Operations |
|---|---|
| 10 | 10 x 1000 = 10,000 |
| 100 | 100 x 1000 = 100,000 |
| 1000 | 1000 x 1000 = 1,000,000 |
Pattern observation: The total operations increase directly with the delay time requested.
Time Complexity: O(n), where n is the requested delay in milliseconds (ms).
This means the program's running time grows linearly with the delay length you ask for.
[X] Wrong: "The delay time does not affect how long the program runs because the loops are fixed."
[OK] Correct: The outer loop runs once per millisecond, so increasing the delay increases the total loops and time proportionally.
Understanding how delay loops scale helps you reason about timing and efficiency in embedded systems, a key skill for writing reliable code.
"What if we replaced the inner loop with a hardware timer interrupt instead? How would the time complexity change?"