Why DMA is needed in Embedded C - Performance Analysis
This section examines how DMA (Direct Memory Access) affects the time it takes to move data in embedded systems. Two questions guide the analysis: how does DMA change the work the CPU does during a transfer, and how does the time complexity of a CPU-driven copy compare with a DMA-driven one?
```c
// CPU-driven data copy: the CPU executes every element move itself
for (int i = 0; i < n; i++) {
    dest[i] = src[i];
}
```

```c
// DMA-driven data copy:
// the CPU programs the DMA controller and continues other work;
// the DMA controller copies the data independently
```
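The two approaches above can be contrasted in a compilable sketch. Real DMA programming is register-level and vendor-specific, so here the DMA engine is stood in for by `memcpy` plus a completion flag; `dma_start` and `dma_done` are illustrative names, not a real HAL API.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* CPU-driven copy: the CPU executes one store per element, O(n) CPU work. */
void cpu_copy(unsigned char *dest, const unsigned char *src, size_t n) {
    for (size_t i = 0; i < n; i++) {
        dest[i] = src[i];
    }
}

/* Illustrative stand-in for a DMA transfer. On real hardware this function
 * would program controller registers (source, destination, length) and
 * return immediately; the copy itself would be done by the DMA engine,
 * not the CPU. dma_start/dma_done are hypothetical names. */
static int dma_busy = 0;

void dma_start(unsigned char *dest, const unsigned char *src, size_t n) {
    dma_busy = 1;
    memcpy(dest, src, n);  /* real hardware would do this in the background */
    dma_busy = 0;
}

int dma_done(void) {
    return !dma_busy;
}
```

On a real MCU, `dma_start` would return before the data had moved, and the CPU would poll `dma_done` (or take a completion interrupt) only when it actually needed the result.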
The first snippet shows the CPU copying the data element by element. With DMA, the CPU only sets up the transfer; the DMA controller then handles the copying without further CPU involvement.
To analyze the complexity, look at what repeats during the transfer.
- Primary operation: Copying each data element from source to destination.
- How many times: Exactly n times, where n is the data size.
As the data size grows, the CPU copy loop runs more times, taking more CPU time.
| Input Size (n) | Approx. CPU Copy Operations |
|---|---|
| 10 | 10 |
| 100 | 100 |
| 1000 | 1000 |
Pattern observation: CPU work grows directly with data size, so bigger data means more CPU time spent copying.
Time Complexity: O(n)
This means the CPU time to copy data grows linearly with the amount of data.
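The pattern in the table can be checked by instrumenting the copy loop with a counter: the number of element copies equals n exactly for every input size, which is what O(n) means here.

```c
#include <stddef.h>

/* Copy n bytes and return how many element copies the CPU performed.
 * The count equals n exactly, matching the table: O(n) operations. */
size_t counted_copy(unsigned char *dest, const unsigned char *src, size_t n) {
    size_t ops = 0;
    for (size_t i = 0; i < n; i++) {
        dest[i] = src[i];
        ops++;
    }
    return ops;
}
```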
[X] Wrong: "Using DMA makes data transfer instant and cost-free for the CPU."
[OK] Correct: DMA still takes time to move data, but it frees the CPU to do other tasks during that time. Starting a transfer also has fixed setup overhead, so for very small transfers a plain CPU copy can actually be faster.
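To make the correction concrete, here is a host-side sketch that uses a POSIX thread as a stand-in for the DMA engine: the transfer still takes real time, but the main thread (playing the CPU) does other work while it runs. This is a simulation of the overlap, not embedded DMA code.

```c
#include <pthread.h>
#include <stddef.h>
#include <string.h>

/* The worker thread stands in for the DMA engine: the copy still takes
 * time, it just does not consume main-thread ("CPU") cycles. */
typedef struct {
    unsigned char *dest;
    const unsigned char *src;
    size_t n;
} transfer_t;

static void *dma_thread(void *arg) {
    transfer_t *t = (transfer_t *)arg;
    memcpy(t->dest, t->src, t->n);
    return NULL;
}

/* Kick off the "DMA", do unrelated work on the CPU, then wait for
 * completion. Returns the CPU-side work result to show the overlap. */
long overlap_transfer(unsigned char *dest, const unsigned char *src, size_t n) {
    transfer_t t = { dest, src, n };
    pthread_t tid;
    pthread_create(&tid, NULL, dma_thread, &t);

    long acc = 0;                 /* the CPU is free for other tasks... */
    for (long i = 0; i < 100000; i++) {
        acc += i;
    }

    pthread_join(tid, NULL);      /* ...until the transfer completes */
    return acc;
}
```

Both the copy and the accumulation finish; neither had to wait for the other, which is exactly the benefit DMA provides on real hardware.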
Understanding how DMA changes CPU workload helps you explain efficient embedded system design clearly and confidently.
What if the CPU had to copy data in smaller chunks repeatedly instead of one big loop? How would that affect time complexity?
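One way to explore that closing question: splitting the work into chunks of size k gives roughly n/k chunks of k element copies each, so the total is still exactly n operations and the complexity stays O(n); chunking only adds a small per-chunk loop overhead. A counting sketch makes this visible:

```c
#include <stddef.h>

/* Copy n bytes in chunks of at most `chunk` bytes, counting element
 * copies. The total is exactly n regardless of chunk size: the
 * complexity stays O(n); only per-chunk overhead changes. */
size_t chunked_copy(unsigned char *dest, const unsigned char *src,
                    size_t n, size_t chunk) {
    size_t ops = 0;
    for (size_t start = 0; start < n; start += chunk) {
        size_t end = (start + chunk < n) ? start + chunk : n;
        for (size_t i = start; i < end; i++) {
            dest[i] = src[i];
            ops++;
        }
    }
    return ops;
}
```

Whether the chunk size is 7 or 100, the operation count for 100 bytes is 100; what chunking does change in practice is how long the CPU is tied up at a stretch, which matters for interrupt latency rather than for asymptotic complexity.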