How embedded C differs from desktop C - Performance & Efficiency
We want to understand how the running time of embedded C programs scales compared to desktop C programs.
What makes embedded C behave differently in terms of speed and the operations it performs?
Analyze the time complexity of the following embedded C code snippet.
```c
void delay(int count) {
    volatile int i;                  /* volatile stops the compiler from optimizing the loop away */
    for (i = 0; i < count; i++) {
        /* do nothing, just wait */
    }
}

int main(void) {
    delay(1000);
    return 0;
}
```
This code creates a simple delay loop common in embedded C to wait for some time.
Look at what repeats in this code.
- Primary operation: The for-loop that counts from 0 to count.
- How many times: Exactly 'count' times, which is 1000 in this example.
The time the program waits grows in direct proportion to the value passed to delay().
| Input Size (count) | Approx. Operations |
|---|---|
| 10 | 10 loop steps |
| 100 | 100 loop steps |
| 1000 | 1000 loop steps |
Pattern observation: If you double the count, the loop runs twice as long.
Time Complexity: O(n)
This means the time grows in a straight line with the input size.
[X] Wrong: "Embedded C runs faster than desktop C always, so time complexity doesn't matter."
[OK] Correct: Even if embedded C runs on simpler hardware, the number of repeated steps still grows with input size, so time complexity still matters.
Understanding how embedded C loops and delays scale helps you explain how your code will behave on small devices, a useful skill in many programming roles.
"What if we replaced the delay loop with a hardware timer interrupt? How would the time complexity change?"