Why dynamic memory is risky in Embedded C - Performance Analysis
We want to understand how using dynamic memory affects program speed in embedded systems.
Specifically, how does allocating and freeing memory during runtime impact execution time?
Analyze the time complexity of this dynamic memory usage snippet.
```c
#include <stdlib.h>

void process_data(int n) {
    int *data = (int *)malloc(n * sizeof(int));
    if (data == NULL)            /* allocation can fail at runtime */
        return;
    for (int i = 0; i < n; i++) {
        data[i] = i * 2;         /* one constant-time step per element */
    }
    free(data);                  /* must be released, or memory leaks */
}
```
This code allocates memory for n integers, fills them, then frees the memory.
Look at what repeats and costs time here.
- Primary operation: Loop filling the array with n steps.
- How many times: Exactly n times, once per element.
- Other operations: malloc and free happen once each but can be costly internally.
As n grows, the loop runs more times, and memory allocation work can also increase.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 loop steps + 1 malloc + 1 free |
| 100 | About 100 loop steps + 1 malloc + 1 free |
| 1000 | About 1000 loop steps + 1 malloc + 1 free |
Pattern observation: The loop grows linearly with n, while malloc and free each run only once, but their internal cost can vary and is sometimes unpredictable.
Time Complexity: O(n)
This means the time to run grows roughly in direct proportion to the input size n.
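The linear pattern in the table can be checked directly with an instrumented variant of the function that counts loop iterations. This is only a sketch for measurement; `fill_and_count` and `loop_steps` are illustrative names, not part of the original code.

```c
#include <stdlib.h>

/* Instrumented variant of process_data: returns the number of loop
   iterations, making the linear growth from the table measurable. */
long fill_and_count(int n) {
    int *data = (int *)malloc(n * sizeof(int));
    if (data == NULL)
        return -1;               /* the allocation itself can fail */
    long loop_steps = 0;
    for (int i = 0; i < n; i++) {
        data[i] = i * 2;
        loop_steps++;            /* exactly one step per element */
    }
    free(data);
    return loop_steps;           /* fill_and_count(1000) returns 1000 */
}
```

Calling it with n = 10, 100, and 1000 returns 10, 100, and 1000 steps, matching the table row for row.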
[X] Wrong: "Dynamic memory allocation always takes constant time and is safe in embedded systems."
[OK] Correct: Allocation and freeing can take variable time and cause fragmentation, leading to unpredictable delays and failures in embedded devices.
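One reason embedded projects can avoid malloc is that deterministic alternatives exist. Below is a minimal sketch of a fixed-block memory pool, a common embedded pattern where both allocation and freeing take constant time. All names here (`pool_init`, `pool_alloc`, `pool_free`, `BLOCK_SIZE`, `BLOCK_COUNT`) are illustrative, not a standard API.

```c
#include <stddef.h>

#define BLOCK_SIZE  32   /* payload bytes per block (illustrative) */
#define BLOCK_COUNT 8    /* total blocks reserved at build time     */

/* Each free block reuses its own storage to hold the free-list link. */
typedef union block {
    union block *next;                   /* valid while the block is free */
    unsigned char payload[BLOCK_SIZE];   /* valid while the block is in use */
} block_t;

static block_t pool[BLOCK_COUNT];
static block_t *free_list = NULL;

void pool_init(void) {
    free_list = NULL;
    for (int i = 0; i < BLOCK_COUNT; i++) {
        pool[i].next = free_list;        /* link every block into the list */
        free_list = &pool[i];
    }
}

void *pool_alloc(void) {                 /* O(1): pop the list head */
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                            /* NULL when the pool is exhausted */
}

void pool_free(void *p) {                /* O(1): push back onto the list */
    block_t *b = (block_t *)p;
    b->next = free_list;
    free_list = b;
}
```

Because every operation is a single list push or pop, timing is identical on every call and fragmentation cannot occur; the trade-off is fixed block sizes and a hard upper limit on the number of blocks.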
Understanding how dynamic memory affects time helps you write reliable embedded code and explain your design choices clearly.
What if we replaced dynamic memory allocation with static arrays? How would the time complexity and risks change?
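One way the question above could be answered, sketched under the assumption that a compile-time upper bound is acceptable. `MAX_ELEMENTS` and `process_data_static` are illustrative names chosen for this example:

```c
#include <stddef.h>

#define MAX_ELEMENTS 1024        /* assumed worst-case size, fixed at build time */

static int data[MAX_ELEMENTS];   /* reserved permanently; never freed */

/* Static-array variant of process_data: same O(n) loop, but no malloc,
   no free, no fragmentation, and no NULL failure path at runtime. */
int process_data_static(int n) {
    if (n < 0 || n > MAX_ELEMENTS)
        return -1;               /* oversize requests rejected up front */
    for (int i = 0; i < n; i++)
        data[i] = i * 2;
    return 0;
}
```

The time complexity stays O(n) because the fill loop is unchanged, but the variable-time allocation work disappears entirely. The cost is flexibility: RAM for `MAX_ELEMENTS` integers is committed even when n is small, and any n beyond the bound must be rejected rather than served.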