Kafka · DevOps · ~20 mins

Why delivery guarantees affect correctness in Kafka - Challenge Your Understanding

Challenge - 5 Problems
🧠 Conceptual · Intermediate
Understanding At-Least-Once Delivery Impact

In Kafka, what is a key correctness concern when using the at-least-once delivery guarantee?

A. Messages are delivered in random order without any guarantee.
B. Messages can be delivered more than once, causing duplicates.
C. Messages are always delivered exactly once with no duplicates.
D. Messages might be lost and never delivered to consumers.
💡 Hint

Think about what happens if a message is retried after a failure.
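To see the retry scenario from the hint concretely, here is a minimal sketch (hypothetical function, not the Kafka client API): an at-least-once producer retries whenever it misses an acknowledgement, so the broker can persist the same message twice.

```python
def send_at_least_once(log, message, ack_lost=False):
    """Append a message to the broker log; retry if the ack is lost."""
    log.append(message)          # broker persists the message
    if ack_lost:                 # ack never reaches the producer...
        log.append(message)      # ...so it retries, creating a duplicate
    return log

log = []
send_at_least_once(log, "msg1")
send_at_least_once(log, "msg2", ack_lost=True)  # retried send
print(log)  # ['msg1', 'msg2', 'msg2'] — 'msg2' arrives twice
```

Note that nothing was lost: at-least-once trades duplicates for durability.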

Predict Output · Intermediate
Output with Exactly-Once Semantics Enabled

What will be the output count of messages processed if Kafka producer uses exactly-once semantics and the consumer commits offsets only after processing?

Assume 3 messages are sent and no failures occur.

producer.send('topic', 'msg1')
producer.send('topic', 'msg2')
producer.send('topic', 'msg3')
consumer.process_and_commit_offsets()
A. 3 messages processed but some duplicates may occur.
B. Less than 3 messages processed due to message loss.
C. More than 3 messages processed due to retries.
D. 3 messages processed exactly once.
💡 Hint

Exactly-once semantics prevent duplicates and loss when configured properly.
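The core mechanism behind Kafka's idempotent producer can be sketched in a few lines (hypothetical classes, not the real client): the producer attaches a monotonically increasing sequence number to each send, and the broker drops any append whose sequence it has already seen, so retries cannot create duplicates.

```python
class Broker:
    """Toy broker that deduplicates producer retries by sequence number."""
    def __init__(self):
        self.log = []
        self.last_seq = -1

    def append(self, seq, message):
        if seq <= self.last_seq:   # already-seen sequence: a retry, drop it
            return
        self.log.append(message)
        self.last_seq = seq

broker = Broker()
for seq, msg in enumerate(["msg1", "msg2", "msg3"]):
    broker.append(seq, msg)
broker.append(2, "msg3")   # producer retry after a lost ack: dropped
print(len(broker.log))     # 3 — each message lands exactly once
```

In real Kafka this corresponds roughly to enabling producer idempotence plus transactional offset commits on the consumer side.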

Predict Output · Advanced
Effect of At-Most-Once Delivery on Message Loss

Given the following Kafka consumer code snippet with at-most-once delivery, what is the possible output?

consumer.poll()
consumer.commit_offsets()
process_messages()

Assume a failure happens after commit but before processing.

A. Some messages may be lost and never processed.
B. All messages are processed with no loss.
C. Messages are processed multiple times causing duplicates.
D. Consumer crashes without committing offsets.
💡 Hint

Consider what happens if offsets are committed before processing.
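The commit-before-process ordering can be simulated directly (a hedged sketch with hypothetical names, not the Kafka consumer API): once the offset is committed, a crash before processing means the message is skipped forever on restart.

```python
def consume_at_most_once(messages, crash_at=None):
    """Commit the offset before processing each message; a crash after
    the commit but before processing loses that message for good."""
    processed, committed = [], 0
    for i, msg in enumerate(messages):
        committed = i + 1        # offset committed first (at-most-once)
        if i == crash_at:
            break                # crash: offset already moved past msg
        processed.append(msg)
    return processed, committed

processed, committed = consume_at_most_once(["m1", "m2", "m3"], crash_at=1)
print(processed)   # ['m1'] — 'm2' is lost; a restart resumes at offset 2
print(committed)   # 2
```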

🧠 Conceptual · Advanced
Why Exactly-Once Semantics Are Hard to Achieve

Which of the following best explains why achieving exactly-once delivery in Kafka is challenging?

A. Because Kafka does not support message ordering at all.
B. Because Kafka automatically deletes messages after one delivery.
C. Because network failures and retries can cause duplicates or losses without careful coordination.
D. Because consumers cannot commit offsets manually.
💡 Hint

Think about what happens when failures occur during message send or processing.
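The fundamental ambiguity behind this question can be shown in a tiny simulation (hypothetical function, not the Kafka API): when an acknowledgement is lost, the producer cannot tell whether the broker stored the message, so both retrying and not retrying can be wrong without extra coordination.

```python
def send(log, message, broker_received, retry_on_timeout):
    """Simulate a send whose ack was lost: the producer only chooses
    whether to retry, without knowing if the broker got the message."""
    if broker_received:
        log.append(message)      # broker stored it, but the ack was lost
    if retry_on_timeout:
        log.append(message)      # blind retry
    return log

print(send([], "m1", broker_received=True,  retry_on_timeout=True))   # ['m1', 'm1'] — duplicate
print(send([], "m1", broker_received=False, retry_on_timeout=False))  # [] — loss
```

Exactly-once delivery needs a way out of this dilemma (e.g. broker-side deduplication of retries), which is why it requires careful coordination rather than retries alone.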

🚀 Application · Expert
Designing a Consumer to Handle At-Least-Once Delivery Correctly

You have a Kafka consumer configured with at-least-once delivery. Which approach below best ensures your application maintains correctness despite possible duplicate messages?

A. Use idempotent processing logic or deduplication mechanisms to handle repeated messages safely.
B. Commit offsets before processing messages to avoid duplicates.
C. Process messages normally without any duplicate checks, relying on Kafka to avoid duplicates.
D. Ignore offset commits and process messages multiple times.
💡 Hint

Think about how to handle duplicates when they happen.
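One common way to handle duplicates under at-least-once delivery is consumer-side deduplication keyed on a stable message ID, sketched below (a hypothetical helper, not a built-in Kafka feature):

```python
def process_idempotently(deliveries, seen, state):
    """Apply each delivered (msg_id, amount) at most once, using the
    message ID to skip redeliveries of already-processed messages."""
    for msg_id, amount in deliveries:
        if msg_id in seen:       # duplicate redelivery: skip it
            continue
        seen.add(msg_id)
        state["balance"] += amount

state, seen = {"balance": 0}, set()
# 'tx-2' is delivered twice (at-least-once), but applied only once:
process_idempotently([("tx-1", 10), ("tx-2", 5), ("tx-2", 5)], seen, state)
print(state["balance"])  # 15, not 20
```

The same effect can come from naturally idempotent operations (e.g. an upsert keyed by message ID) instead of an explicit seen-set.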