Endianness (big-endian vs little-endian) in Embedded C - Performance Comparison
When working with endianness, we want to know how the time to convert or check byte order changes as the data size grows. The guiding question: how does the work scale when handling wider types or more bytes?
Analyze the time complexity of the following code snippet.
```c
unsigned int swap_endian(unsigned int val) {
    /* Move each byte of the 4-byte value to its mirrored position. */
    return ((val >> 24) & 0x000000FF) |  /* byte 3 -> byte 0 */
           ((val >> 8)  & 0x0000FF00) |  /* byte 2 -> byte 1 */
           ((val << 8)  & 0x00FF0000) |  /* byte 1 -> byte 2 */
           ((val << 24) & 0xFF000000);   /* byte 0 -> byte 3 */
}
```
This code swaps the byte order of a 4-byte integer to convert between big-endian and little-endian.
Identify anything that repeats: loops, recursion, or array traversals.
- Primary operation: Bit shifting and masking on each byte of the 4-byte integer.
- How many times: Fixed 4 times, once per byte.
Each byte requires a fixed set of operations, so as the number of bytes grows, the work grows proportionally.
| Input Size (bytes) | Approx. Operations |
|---|---|
| 4 | 4 sets of shifts and masks |
| 8 | 8 sets of shifts and masks |
| 16 | 16 sets of shifts and masks |
Pattern observation: The work grows linearly with the number of bytes processed.
Time Complexity: O(n), where n is the number of bytes.
For a single 4-byte integer the byte count is fixed, so the snippet above runs in constant time; the linear bound describes what happens as the data widens. The time to swap grows in direct proportion to the number of bytes in the data.
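The linear growth is easiest to see in a generalized version that reverses an arbitrary buffer in place. A sketch, using a hypothetical helper name `swap_bytes_n` and assuming plain byte reversal is the conversion we want:

```c
#include <stddef.h>

/* Reverse the byte order of an arbitrary n-byte buffer in place.
   The loop body runs n/2 times, so the work is O(n) in the byte count. */
void swap_bytes_n(unsigned char *buf, size_t n) {
    for (size_t i = 0; i < n / 2; i++) {
        unsigned char tmp = buf[i];
        buf[i] = buf[n - 1 - i];
        buf[n - 1 - i] = tmp;
    }
}
```

Under this model, an 8-byte `double` costs roughly twice the work of a 4-byte `int`, matching the table above.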
[X] Wrong: "Swapping endianness is always a constant time operation regardless of data size."
[OK] Correct: For larger data types or arrays, each byte must be processed, so time grows with data size.
Understanding how byte order conversion scales helps you write efficient embedded code and demonstrates that you can reason about performance in low-level tasks.
"What if we needed to swap endianness for an array of integers instead of a single integer? How would the time complexity change?"
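One way to reason about the follow-up question: swapping each of m integers costs one constant 4-byte conversion, so the total is O(m) in the element count, or equivalently O(n) in total bytes. A sketch with a hypothetical `swap_endian_array` (the 4-byte swap is reproduced so the snippet compiles on its own):

```c
#include <stddef.h>

unsigned int swap_endian(unsigned int val) {
    return ((val >> 24) & 0x000000FF) |
           ((val >> 8)  & 0x0000FF00) |
           ((val << 8)  & 0x00FF0000) |
           ((val << 24) & 0xFF000000);
}

/* Swap every element: m constant-cost conversions -> O(m) total. */
void swap_endian_array(unsigned int *arr, size_t m) {
    for (size_t i = 0; i < m; i++) {
        arr[i] = swap_endian(arr[i]);
    }
}
```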