In event-driven systems like Kafka, what is the main reason they scale better than traditional request-response systems?
Think about how events flow and how consumers handle them in parallel.
Event-driven systems decouple producers from consumers: producers publish events to the broker and move on without waiting for a response, and consumers process those events independently and in parallel. This asynchrony, plus the ability to add consumers horizontally, is what lets them outscale synchronous request-response systems.
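The decoupling can be sketched in-process with a queue standing in for a Kafka topic (the names and counts here are illustrative, not Kafka's API):

```python
import queue
import threading

events = queue.Queue()   # stand-in for a topic
processed = []
lock = threading.Lock()

def producer():
    # The producer only appends events; it never waits on a consumer.
    for i in range(6):
        events.put(f"event-{i}")
    events.put(None)  # one shutdown sentinel per consumer
    events.put(None)

def consumer():
    # Each consumer pulls events independently of the others.
    while True:
        event = events.get()
        if event is None:
            break
        with lock:
            processed.append(event)

consumers = [threading.Thread(target=consumer) for _ in range(2)]
for t in consumers:
    t.start()
producer()
for t in consumers:
    t.join()

print(len(processed))  # all 6 events handled, split across 2 consumers
```

The producer finishes as soon as its events are enqueued; the two consumers drain the queue in parallel, which is the same property that makes adding Kafka consumers a scaling lever.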
What will be the output of this simplified Kafka consumer code snippet that processes events asynchronously?
import asyncio

async def process_event(event):
    await asyncio.sleep(0.1)
    print(f"Processed {event}")

async def main():
    events = ['e1', 'e2', 'e3']
    tasks = [process_event(e) for e in events]
    await asyncio.gather(*tasks)

asyncio.run(main())
Consider how async tasks run concurrently and the order of print statements.
The three tasks run concurrently: all start at once, each awaits its 0.1 s sleep, and the whole run takes roughly 0.1 s rather than 0.3 s. Because the delays are identical, the prints typically appear in submission order (e1, e2, e3), though asyncio does not strictly guarantee that ordering. The code runs without errors.
What is the main scalability problem in this Kafka consumer code snippet?
import time
from kafka import KafkaConsumer

consumer = KafkaConsumer('topic')
for message in consumer:
    print(f"Processing {message.value}")
    time.sleep(5)  # simulate long processing
Think about how the sleep affects message processing speed.
The time.sleep(5) inside the loop blocks the consumer's single thread, so at most one message is processed every five seconds. While it sleeps, no other messages are consumed and consumer lag builds up, which is the main scalability problem.
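One common fix is to hand each message to a worker pool so the poll loop keeps consuming. A minimal sketch, with a plain list standing in for the KafkaConsumer iterator and a shortened sleep standing in for the 5 s handler:

```python
import time
from concurrent.futures import ThreadPoolExecutor

messages = ['m1', 'm2', 'm3', 'm4']  # stand-in for the consumer iterator

def handle(value):
    time.sleep(0.1)  # shortened stand-in for the slow processing
    return f"Processed {value}"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    # The loop (here pool.map) dispatches work instead of blocking on it.
    results = list(pool.map(handle, messages))
elapsed = time.monotonic() - start

print(results)   # all four messages processed
print(elapsed)   # ~0.1 s with 4 workers, not ~0.4 s sequentially
```

Note the trade-off: once processing is offloaded, offsets may be committed before a handler finishes, so real deployments pair this with manual offset commits or idempotent handlers.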
Choose the correct Kafka producer code that sends messages asynchronously and handles delivery reports.
Look for correct usage of bytes and callback for async send.
Option D sends the message as bytes and attaches a delivery callback to the future returned by the asynchronous send, which is the correct pattern: the send does not block, and the callback reports success or failure once the broker acknowledges the record.
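For reference, the asynchronous-send-with-callback pattern looks like the following sketch with kafka-python. It assumes a broker at localhost:9092, so it is illustrative rather than runnable here:

```python
from kafka import KafkaProducer

# Assumes a reachable broker; the address is an illustrative default.
producer = KafkaProducer(bootstrap_servers='localhost:9092')

def on_success(record_metadata):
    print(f"Delivered to {record_metadata.topic}[{record_metadata.partition}]")

def on_error(exc):
    print(f"Delivery failed: {exc}")

# send() is asynchronous: it returns a future immediately, and the
# callbacks fire when the broker acknowledges (or rejects) the record.
future = producer.send('topic', b'payload')  # value must be bytes
future.add_callback(on_success)
future.add_errback(on_error)

producer.flush()  # block until all outstanding sends complete
```

The error callback matters as much as the success one: without add_errback, failed deliveries are silently dropped.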
In an event-driven Kafka system, how is a sudden spike in incoming events best handled to maintain scalability?
Think about how Kafka partitions and consumers work together to scale.
Kafka partitions split a topic's events across the consumers in a consumer group, so the best response to a spike is to add consumers (up to the number of partitions) so the backlog is processed in parallel; the broker buffers the burst in the meantime.