ADC conversion process (sample and hold) in Embedded C - Time & Space Complexity
We want to understand how the time taken by an ADC conversion grows as we process more samples.
How does the number of samples affect the total conversion time?
Analyze the time complexity of the following code snippet.
```c
// Declared first so adc_conversion can call it without an
// implicit-declaration error.
int convert_adc(int sample) {
    // Simulate ADC conversion delay
    // ... conversion steps ...
    return sample; // simplified: a real ADC would quantize here
}

void adc_conversion(int *input_samples, int *output_values, int n) {
    for (int i = 0; i < n; i++) {
        // Sample and hold
        int sample = input_samples[i];
        // Convert the held sample to a digital value
        output_values[i] = convert_adc(sample);
    }
}
```
This code takes an array of analog samples, converts each one to a digital value using ADC, and stores the results.
Identify the repeating constructs: loops, recursion, or array traversals.
- Primary operation: The for-loop that processes each sample one by one.
- How many times: Exactly n times, where n is the number of input samples.
Each sample takes a fixed (constant) amount of time to convert, so the total time grows in direct proportion to the number of samples.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 conversions |
| 100 | 100 conversions |
| 1000 | 1000 conversions |
Pattern observation: Doubling the number of samples doubles the total conversion time.
Time Complexity: O(n)
This means the total time grows linearly with the number of samples processed.
Space Complexity: O(1) auxiliary. The loop uses only a fixed number of local variables; the output array of size n is supplied by the caller, not allocated inside the function.
[X] Wrong: "The ADC conversion time stays the same no matter how many samples we have."
[OK] Correct: Each sample requires its own conversion time, so more samples mean more total time.
Understanding how processing time grows with input size helps you design efficient embedded systems and explain your reasoning clearly in interviews.
"What if the ADC conversion function was optimized to handle multiple samples at once? How would the time complexity change?"