Floating-Point Cost on Embedded Systems in Embedded C: Time & Space Complexity
We want to understand how floating point math affects the running time of embedded programs.
How does the cost of floating point operations grow as we perform more calculations?
Analyze the time complexity of the following code snippet.
```c
float sum = 0.0f;
for (int i = 0; i < n; i++) {
    float val = (float)i * 0.5f;  /* one multiplication per iteration */
    sum += val;                   /* one addition per iteration */
}
return sum;
```
This code adds up floating point values in a loop from 0 to n-1.
Identify the repeated work: loops, recursion, or array traversals.
- Primary operation: Floating point multiplication and addition inside the loop.
- How many times: Exactly n times, once per loop iteration.
As n grows, the number of floating point operations grows in direct proportion: one multiplication and one addition per iteration.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 multiplications and 10 additions |
| 100 | About 100 multiplications and 100 additions |
| 1000 | About 1000 multiplications and 1000 additions |
Pattern observation: The work grows directly with n, so doubling n doubles the work.
Time Complexity: O(n)
This means the time to run grows in a straight line with the number of loop steps.
[X] Wrong: "Floating point operations take the same time as integer operations, so complexity is unaffected."
[OK] Correct: Floating point math often takes more time on embedded systems without hardware support, so each operation can be slower even if the count is the same.
Knowing how floating point math affects time helps you write efficient embedded code and explain your choices clearly in interviews.
"What if we replaced floating point math with fixed-point integer math? How would the time complexity and cost change?"