Cross-compilation mental model in Embedded C - Time & Space Complexity
When we cross-compile code, we build it on one machine (the host) into an executable for a different machine (the target) before it ever runs there.
We want to understand how the time to compile grows as the code size grows.
Analyze the time complexity of the following cross-compilation process code snippet.
```c
// Simplified cross-compiler loop over source files
for (int i = 0; i < num_files; i++) {
    parse_source(files[i]);
    optimize_code(files[i]);
    generate_target_code(files[i]);
}
```
This code loops over each source file to parse, optimize, and generate code for the target machine.
Identify the repeated work: loops, recursion, or array traversals.
- Primary operation: Looping over each source file to process it.
- How many times: Exactly once per source file, so `num_files` times.
As the number of source files increases, the total work grows in a straight line.
| Input Size (num_files) | Approx. Operations |
|---|---|
| 10 | About 10 times the work of one file |
| 100 | About 100 times the work of one file |
| 1000 | About 1000 times the work of one file |
Pattern observation: The work grows in direct proportion to the number of files, with no surprises.
Time Complexity: O(n)
This means if you double the number of source files, the compile time roughly doubles too.
[X] Wrong: "Cross-compilation time grows exponentially with the number of files because of complex target code generation."
[OK] Correct: Each file is processed independently in a simple loop, so time grows linearly, not exponentially.
Understanding how compilation time scales helps you explain performance in embedded projects clearly and confidently.
"What if the compiler had to compare every file with every other file during optimization? How would the time complexity change?"