Kafka · DevOps · ~20 mins

Why Event-Driven Architecture Scales Applications in Kafka - Challenge Your Understanding

Challenge - 5 Problems
🎖️ Event-Driven Scalability Master: get all challenges correct to earn this badge!
🧠 Conceptual · intermediate
Why does event-driven architecture improve scalability?

In event-driven systems like Kafka, what is the main reason they scale better than traditional request-response systems?

A. Because event-driven systems use a single thread to handle all events, reducing complexity.
B. Because events are always processed in a strict order, preventing any parallelism.
C. Because events are processed asynchronously, allowing many consumers to work in parallel without waiting.
D. Because events are stored in a database and processed only when requested synchronously.
💡 Hint

Think about how events flow and how consumers handle them in parallel.

Predict Output · intermediate
Output of Kafka consumer processing events asynchronously

What will be the output of this simplified Kafka consumer code snippet that processes events asynchronously?

Python
import asyncio

async def process_event(event):
    await asyncio.sleep(0.1)
    print(f"Processed {event}")

async def main():
    events = ['e1', 'e2', 'e3']
    tasks = [process_event(e) for e in events]
    await asyncio.gather(*tasks)

asyncio.run(main())
A. Processed e1\nProcessed e2\nProcessed e3 (always in this order)
B. Processed e1\nProcessed e2\nProcessed e3 (order may vary due to async)
C. SyntaxError due to missing await in list comprehension
D. RuntimeError because asyncio.gather cannot handle multiple tasks
💡 Hint

Consider how async tasks run concurrently and the order of print statements.
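
To make the concurrency visible, here is a small variation of the snippet with uneven, made-up delays: completion order follows task timing rather than creation order, and the total elapsed time is roughly the longest single delay, not the sum of all of them.

```python
import asyncio
import time

async def process_event(event, delay, done):
    # Simulate I/O-bound work (e.g. a network call) with a non-blocking sleep
    await asyncio.sleep(delay)
    print(f"Processed {event}")
    done.append(event)

async def main():
    done = []
    start = time.perf_counter()
    # e1 gets the longest delay, so it finishes last even though it is listed first
    await asyncio.gather(
        process_event('e1', 0.3, done),
        process_event('e2', 0.2, done),
        process_event('e3', 0.1, done),
    )
    elapsed = time.perf_counter() - start
    print(f"Elapsed: {elapsed:.2f}s")  # ~0.3s, not 0.6s: the sleeps overlap
    return done, elapsed

done, elapsed = asyncio.run(main())
```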

🔧 Debug · advanced
Identify the scalability issue in this Kafka consumer code

What is the main scalability problem in this Kafka consumer code snippet?

Python
from kafka import KafkaConsumer
import time

consumer = KafkaConsumer('topic')
for message in consumer:
    print(f"Processing {message.value}")
    # Simulate long processing
    time.sleep(5)
A. The consumer processes messages one by one and blocks, limiting throughput.
B. The consumer does not commit offsets, causing duplicate processing.
C. The consumer uses too many threads, causing overhead.
D. The consumer subscribes to the wrong topic, so no messages are processed.
💡 Hint

Think about how the sleep affects message processing speed.
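
One common fix is to hand each message to a worker pool so the slow processing steps overlap instead of running back to back. The sketch below uses a plain list as a stand-in for the KafkaConsumer iterator (no broker is assumed) and a shortened sleep, just to show the throughput effect:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle(value):
    time.sleep(0.2)  # stand-in for the slow processing (shortened from 5 s)
    return f"Processed {value}"

# Hypothetical stand-in for messages pulled from a KafkaConsumer
messages = [f"msg-{i}" for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, messages))
elapsed = time.perf_counter() - start

print(results)
print(f"{elapsed:.2f}s")  # ~0.4s with 4 workers vs ~1.6s sequentially
```

In a real deployment the same idea usually takes the form of more consumer instances in the same consumer group, one per partition, rather than threads inside a single process.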

📝 Syntax · advanced
Which Kafka producer code snippet correctly sends messages asynchronously?

Choose the correct Kafka producer code that sends messages asynchronously and handles delivery reports.

A.
from kafka import KafkaProducer
producer = KafkaProducer()
producer.send('topic', b'message').add_errback(lambda err: print('Error'))
B.
from kafka import KafkaProducer
producer = KafkaProducer()
producer.send('topic', b'message').get()
C.
from kafka import KafkaProducer
producer = KafkaProducer()
producer.send('topic', 'message')
D.
from kafka import KafkaProducer
producer = KafkaProducer()
producer.send('topic', b'message').add_callback(lambda rec: print('Sent'))
💡 Hint

Look for correct usage of bytes and callback for async send.

🚀 Application · expert
How does event-driven architecture handle load spikes in Kafka?

In an event-driven Kafka system, how is a sudden spike in incoming events best handled to maintain scalability?

A. By adding more consumer instances to process events in parallel and using Kafka partitions to distribute load.
B. By slowing down producers to reduce event generation rate during spikes.
C. By processing events synchronously in a single consumer to avoid race conditions.
D. By storing all events in a single partition to ensure order and avoid duplication.
💡 Hint

Think about how Kafka partitions and consumers work together to scale.
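
The hint can be made concrete with a toy partitioner. Kafka's default partitioner hashes the record key (murmur2) and takes it modulo the partition count; the crc32 stand-in below is only illustrative, but it shows how keyed events spread across partitions so that one consumer per partition can absorb a spike in parallel:

```python
import zlib
from collections import Counter

def partition_for(key: str, num_partitions: int) -> int:
    # Illustrative stand-in for Kafka's murmur2-based default partitioner
    return zlib.crc32(key.encode("utf-8")) % num_partitions

NUM_PARTITIONS = 6
events = [f"user-{i}" for i in range(1000)]  # hypothetical keyed events
counts = Counter(partition_for(k, NUM_PARTITIONS) for k in events)
print(dict(sorted(counts.items())))  # roughly even spread across the 6 partitions
```

Because each partition is consumed by at most one consumer in a group, the partition count sets the ceiling on consumer parallelism: adding consumer instances beyond it leaves the extras idle.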