Why Event-Driven Applications Scale in Kafka: A Performance Analysis
We want to understand how event-driven systems handle increasing load as they grow. Specifically: how does the number of incoming events affect the total work done?
Analyze the time complexity of the following Kafka event processing snippet.
```java
consumer.subscribe(List.of("orders"));
while (true) {
    // Fetch the next batch of events, waiting up to 100 ms if none are available
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    // Process each event in the batch, one at a time
    for (ConsumerRecord<String, String> record : records) {
        processOrder(record.value());
    }
}
```
This code listens for new order events and processes each one as it arrives.
Look at what repeats as more events come in.
- Primary operation: Loop over all received events to process them.
- How many times: Once for each event batch, and inside that, once per event.
More events mean more processing steps.
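We can make this counting argument concrete with a small standalone sketch. It strips away the Kafka API and simply tallies how many times a hypothetical `processOrder` runs for a given number of events; the class and counter names here are illustrative, not part of Kafka.

```java
import java.util.ArrayList;
import java.util.List;

public class EventCountDemo {
    static int processCalls = 0;

    // Stand-in for the real handler: each call is one unit of work
    static void processOrder(String value) {
        processCalls++;
    }

    public static void main(String[] args) {
        // Simulate receiving 1000 order events
        List<String> events = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            events.add("order-" + i);
        }
        // The loop body repeats once per event: this is the O(n) pattern
        for (String event : events) {
            processOrder(event);
        }
        System.out.println(processCalls); // prints 1000: one call per event
    }
}
```

Running it with 10, 100, or 1000 simulated events reproduces the table above: the call count always matches the input size.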
| Input Size (events) | Approx. Operations |
|---|---|
| 10 | About 10 process calls |
| 100 | About 100 process calls |
| 1000 | About 1000 process calls |
Pattern observation: The work grows directly with the number of events.
Time Complexity: O(n)
This means the time to handle events grows in a straight line with how many events come in.
[X] Wrong: "Event-driven systems process all events instantly, so time doesn't grow with more events."
[OK] Correct: Each event still needs processing time, so more events mean more total work.
Understanding how event-driven systems scale helps you explain real-world software that handles many users or messages smoothly.
"What if we batch process events instead of one by one? How would the time complexity change?"