Event sourcing pattern in Kafka - Time Complexity
When using event sourcing with Kafka, current state is rebuilt by replaying events from a topic, so we want to understand how processing time grows as the event log grows.
The question: how does the number of events affect the work Kafka consumers do to rebuild state?
Analyze the time complexity of the following Kafka event sourcing snippet.
```javascript
// Consume events from a Kafka topic (simplified poll loop)
consumer.subscribe(['user-events'])
while (true) {
  const records = consumer.poll(1000) // wait up to 1000 ms for new records
  for (const record of records) {
    // Apply each event to rebuild user state
    userState.apply(record.value)
  }
}
```
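The snippet assumes a `userState` object that knows how to apply events. A minimal in-memory sketch of such an object (the event types and fields here are illustrative assumptions, not part of any Kafka API) might look like:

```javascript
// Minimal event-applying state object (event shapes are assumed for illustration)
const userState = {
  state: { name: null, email: null },
  apply(event) {
    // Each event type updates one slice of the state
    switch (event.type) {
      case 'UserRenamed':
        this.state.name = event.name
        break
      case 'EmailChanged':
        this.state.email = event.email
        break
      // Unknown event types are ignored, which keeps older consumers forward-compatible
    }
  }
}

// Replaying the full event log rebuilds the current state
const events = [
  { type: 'UserRenamed', name: 'Ada' },
  { type: 'EmailChanged', email: 'ada@example.com' },
  { type: 'UserRenamed', name: 'Ada Lovelace' }
]
for (const event of events) userState.apply(event)
console.log(userState.state) // { name: 'Ada Lovelace', email: 'ada@example.com' }
```

Note that the final state depends only on the full sequence of events, which is exactly why every event must be replayed.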
This code reads events from Kafka and applies each event to update the user state.
Look for repeated work in the code.
- Primary operation: Loop over all events received from Kafka.
- How many times: Once per event in the topic, continuously as new events arrive.
As the number of events grows, the time to process all events grows too.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 event applications |
| 100 | 100 event applications |
| 1000 | 1000 event applications |
Pattern observation: The work grows directly with the number of events.
Time Complexity: O(n)
This means the time to rebuild or update state grows linearly with the number of events.
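This linear growth can be checked directly: a replay performs exactly one `apply` call per event, matching the table above. A small sketch that counts the operations (no Kafka needed; the counter is purely illustrative):

```javascript
// Count how many apply operations a full replay performs for n events
function replayCost(n) {
  let operations = 0
  const userState = {
    apply(_event) { operations++ } // one unit of work per event
  }
  for (let i = 0; i < n; i++) {
    userState.apply({ seq: i })
  }
  return operations
}

console.log(replayCost(10))   // 10
console.log(replayCost(100))  // 100
console.log(replayCost(1000)) // 1000
```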
[X] Wrong: "Processing events is always constant time no matter how many events there are."
[OK] Correct: Each event must be applied, so more events mean more work, not the same amount.
Understanding how event processing time grows helps you explain system behavior clearly and shows you can reason about real-world data flow.
"What if we stored snapshots periodically to avoid replaying all events? How would that change the time complexity?"