ADC interrupt-driven reading in Embedded C - Time & Space Complexity
We want to understand how the time cost changes when reading analog data using interrupts.
How does the program's work grow as more ADC readings happen?
Analyze the time complexity of the following code snippet.
```c
volatile int adc_value = 0;

void ADC_IRQHandler(void) {
    adc_value = ADC_ReadData();    // store the latest conversion result
    ADC_ClearInterruptFlag();      // acknowledge the interrupt
}

int main(void) {
    ADC_EnableInterrupt();
    while (1) {
        // main loop does other work
    }
}
```
This code reads ADC values using an interrupt handler that runs when a conversion finishes.
Identify what repeats: loops, recursion, array traversals, or, as in this case, an interrupt handler.
- Primary operation: The ADC interrupt handler runs each time a conversion completes.
- How many times: Once per ADC conversion; the total count depends on how often conversions are triggered.
Each ADC conversion triggers the interrupt once, so the work grows directly with the number of conversions.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 interrupt calls |
| 100 | 100 interrupt calls |
| 1000 | 1000 interrupt calls |
Pattern observation: The total work grows linearly with the number of ADC readings.
Time Complexity: O(n)
This means the time spent grows directly with the number of ADC readings taken.
[X] Wrong: "The interrupt handler runs only once, so time cost is constant."
[OK] Correct: The interrupt runs every time the ADC finishes a conversion, so the total time grows with the number of conversions.
Understanding how interrupt-driven code scales helps you write efficient embedded programs that handle real-time data smoothly.
"What if we changed the ADC to trigger interrupts only every 10th conversion? How would the time complexity change?"