# Event streaming concept in Kafka - Time & Space Complexity
When working with event streaming in Kafka, it's important to understand how processing time changes as more events flow through the system. The question to answer is: how does the time to handle events grow as the number of events increases? Consider the time complexity of the following Kafka consumer loop.
```java
consumer.subscribe(Collections.singletonList("topic"));
while (true) {
    // Fetch the next batch of records, waiting up to 100 ms if none are available
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    // Process each record in the batch exactly once
    for (ConsumerRecord<String, String> record : records) {
        process(record.value());
    }
}
```
This code subscribes to a Kafka topic, polls for events in batches, and processes each event in turn. To analyze it, look at what repeats as events come in.
- Primary operation: Loop over each event in the batch to process it.
- How many times: Once for every event received in each poll cycle.
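The poll-and-loop structure can be simulated without a broker to confirm this count. The sketch below is illustrative only: the batch contents are made up, and `process` is a hypothetical stub standing in for the real per-event work.

```java
import java.util.List;

public class BatchLoopCount {
    static int processCalls = 0;

    // Hypothetical stand-in for process(record.value()) in the consumer loop
    static void process(String value) {
        processCalls++;
    }

    public static void main(String[] args) {
        // Simulated poll cycles: two batches totalling 5 events
        List<List<String>> polls = List.of(
            List.of("e1", "e2", "e3"),
            List.of("e4", "e5")
        );
        for (List<String> records : polls) {
            for (String record : records) {
                process(record);
            }
        }
        // One process call per event received, regardless of how they were batched
        System.out.println(processCalls); // 5
    }
}
```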
As the number of events increases, the processing time grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 process calls |
| 100 | 100 process calls |
| 1000 | 1000 process calls |
Pattern observation: Doubling the number of events roughly doubles the work done.
Time Complexity: O(n)
This means the time to process events grows linearly with the number of events.
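The table above can be reproduced with a small counting sketch (plain Java, no Kafka required; `processCallsFor` is a hypothetical stand-in for the consumer loop's per-event work):

```java
public class LinearGrowth {
    // Each event costs exactly one process call, so count them directly.
    static long processCallsFor(int events) {
        long calls = 0;
        for (int i = 0; i < events; i++) {
            calls++; // one process call per event
        }
        return calls;
    }

    public static void main(String[] args) {
        // Mirrors the table: operations grow in direct proportion to input size
        for (int n : new int[] {10, 100, 1000}) {
            System.out.println(n + " events -> " + processCallsFor(n) + " process calls");
        }
    }
}
```

Doubling `n` doubles the count, which is exactly the linear pattern observed above.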
[X] Wrong: "Each event takes constant time to process, so the total time doesn't depend on how many events arrive."
[OK] Correct: Each individual event does take roughly constant time, but the total processing time scales with the number of events, so more events mean proportionally more total work.
Understanding how event processing time grows helps you design systems that handle data smoothly as load changes. This skill shows you can think about real-world data flow and performance.
"What if we batch process events in groups of 100 instead of one by one? How would the time complexity change?"
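One way to reason about that question: batching changes how often you pay the per-poll overhead, but every event is still processed once. The sketch below (plain Java; the batch size and counters are illustrative assumptions, not Kafka API behavior) makes that concrete.

```java
public class GroupBatching {
    public static void main(String[] args) {
        int n = 1000;        // total events to handle
        int batchSize = 100; // hypothetical batch size from the question

        int processCalls = 0;
        int batches = 0;
        for (int start = 0; start < n; start += batchSize) {
            batches++; // one poll-style cycle per batch of up to 100 events
            int end = Math.min(start + batchSize, n);
            for (int i = start; i < end; i++) {
                processCalls++; // each event is still processed exactly once
            }
        }
        System.out.println(processCalls); // 1000 - still n process calls
        System.out.println(batches);      // 10   - per-cycle overhead shrinks to n/100
    }
}
```

So the time complexity stays O(n): batching divides the constant per-cycle overhead across many events, improving throughput, but the dominant per-event work is unchanged.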