Hierarchical state machine concept in Embedded C - Time & Space Complexity
We want to understand how the time to run a hierarchical state machine changes as the number of states grows.
How does adding more states affect the work the machine does each time it processes an event?
Analyze the time complexity of the following code snippet.
// Simple hierarchical state machine event handler
void handle_event(State *current, Event e) {
    while (current != NULL) {
        if (current->handler(e)) {
            break; // event handled
        }
        current = current->parent; // move up hierarchy
    }
}
This code tries to handle an event starting from the current state, moving up through its parent states until one of them handles it (or the root is reached).
Identify the repeated operations: loops, recursion, or array traversals.
- Primary operation: The while loop that moves up the state hierarchy calling handlers.
- How many times: At most once per level in the state hierarchy, until the event is handled or root is reached.
As the number of nested states increases, the number of handler calls can increase linearly.
| Input Size (n = hierarchy depth) | Approx. Operations (handler calls) |
|---|---|
| 3 | Up to 3 |
| 10 | Up to 10 |
| 100 | Up to 100 |
Pattern observation: The work grows directly with the depth of the state hierarchy.
Time Complexity: O(n)
This means the time to handle an event grows linearly with the number of nested states.
[X] Wrong: "The event handling time stays the same no matter how many states there are."
[OK] Correct: The code may need to check every parent state before the event is handled, so more nested states can mean more handler calls.
Understanding how event handling scales in hierarchical state machines helps you design efficient embedded systems and shows you can reason about real-world code performance.
"What if the event is always handled at the current state without moving up? How would the time complexity change?"