DMA with ADC for continuous sampling in Embedded C - Time & Space Complexity
We want to understand how the time cost changes when using DMA with ADC for continuous sampling.
Specifically, how often the CPU is involved as the number of samples grows.
Analyze the time complexity of the following code snippet.
// Setup: ADC samples continuously; DMA copies each conversion into the buffer
#include <stdint.h>

static uint16_t *adc_buffer;   // shared with the interrupt handler
static uint32_t adc_length;

void setup_adc_dma(uint16_t *buffer, uint32_t length) {
    adc_buffer = buffer;       // keep references for the IRQ handler
    adc_length = length;
    ADC_Init();
    DMA_Init(buffer, length);
    ADC_Start();
}

// DMA transfer-complete interrupt: fires once per full buffer
void DMA_IRQHandler(void) {
    process_data(adc_buffer, adc_length);  // the CPU touches the data only here
    DMA_Restart();                         // re-arm DMA for the next buffer
}
This code configures the ADC to sample continuously, with DMA transferring each result to memory so the CPU is not involved while samples accumulate.
Identify the repeated work: loops, recursion, or array traversals.
- Primary operation: DMA transfers samples in the background without CPU intervention.
- How many times: DMA moves each sample exactly once; the CPU processes data only after the buffer fills.
As the number of samples grows, DMA handles every transfer automatically; the CPU is involved only once per buffer fill.
Assuming a fixed buffer size of b = 100 samples:

| Input Size n (samples) | CPU Processing Calls (n / b) |
|---|---|
| 100 | 1 |
| 1000 | 10 |
| 10000 | 100 |

Pattern observation: CPU processing happens once per buffer fill, not once per sample, so the number of handler invocations grows as n / b rather than n.
Time Complexity: O(n/b)
The number of interrupt-handler invocations grows as n / b, where n is the total sample count and b is the buffer size. Note that process_data still touches every sample, so total processing work remains O(n); what DMA removes is the per-sample interrupt overhead, cutting CPU interruptions by a factor of b.
[X] Wrong: "CPU processes every sample individually as it arrives."
[OK] Correct: DMA transfers samples in the background, so CPU only processes data after a full buffer, reducing CPU load.
Understanding how DMA offloads CPU work is a useful skill for embedded programming and shows you can think about efficient data handling.
"What if the buffer size is doubled? How would the CPU processing frequency and time complexity change?"