Challenge - 5 Problems
Kafka Error Handling Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate
What is the output when a Kafka consumer encounters a deserialization error?
Consider a Kafka consumer configured with a custom deserializer that throws an exception on invalid data. What will the consumer do when it reads a corrupted message?
Kafka
try {
    // deserialization happens inside poll(); a corrupt record throws here
    ConsumerRecord<String, String> record =
        consumer.poll(Duration.ofMillis(100)).iterator().next();
    String value = record.value();
    System.out.println("Received: " + value);
} catch (SerializationException e) {
    System.out.println("Deserialization error caught");
}
💡 Hint
Think about how exceptions in deserialization are handled inside the try-catch block.
📝 Explanation
When the custom deserializer throws a SerializationException, it surfaces from poll(); the catch block catches it and prints the error message, so the consumer does not crash. Note that the failed record's offset is not advanced, so the next poll() will hit the same corrupted message again unless the consumer seeks past it.
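The catch-and-skip pattern can be sketched without a broker. This is a minimal simulation, not real Kafka API usage: the `deserialize` helper is a hypothetical stand-in for a custom deserializer that rejects corrupt bytes, and the batch list stands in for polled records.

```java
import java.util.List;

public class SkipBadRecords {
    // Hypothetical stand-in for a custom deserializer that rejects corrupt bytes.
    static String deserialize(byte[] data) {
        if (data == null || data.length == 0) {
            throw new IllegalArgumentException("corrupt record");
        }
        return new String(data);
    }

    public static void main(String[] args) {
        // Simulated batch: one corrupt (empty) record between two valid ones.
        List<byte[]> batch = List.of("ok".getBytes(), new byte[0], "also ok".getBytes());
        int processed = 0, skipped = 0;
        for (byte[] raw : batch) {
            try {
                System.out.println("Received: " + deserialize(raw));
                processed++;
            } catch (IllegalArgumentException e) {
                // Log and skip instead of letting the whole consumer die.
                System.out.println("Deserialization error caught, skipping record");
                skipped++;
            }
        }
        System.out.println(processed + " processed, " + skipped + " skipped");
    }
}
```

The key design point is that the exception handler is per-record: one bad record is logged and skipped while the rest of the batch is still processed.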
❓ Predict Output
Intermediate
What happens if a Kafka producer's send callback reports a network error?
A Kafka producer sends a message asynchronously with a callback that checks for exceptions. What will the callback print if the network is down during send?
Kafka
producer.send(new ProducerRecord<>("topic", "key", "value"), (metadata, exception) -> {
    if (exception != null) {
        System.out.println("Send failed: " + exception.getClass().getSimpleName());
    } else {
        System.out.println("Send succeeded");
    }
});
💡 Hint
The callback receives an exception parameter when send fails.
📝 Explanation
If the network is down, the send eventually fails (once delivery.timeout.ms elapses) and the callback receives an exception such as TimeoutException. The callback then prints the failure message with the exception's simple class name.
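The success/failure branching in such a callback can be exercised without a broker using CompletableFuture. This is a hedged sketch: `send` here is a hypothetical stand-in for the producer's async send, completing exceptionally when the simulated "network" is down, not the real Kafka client API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeoutException;

public class CallbackSketch {
    // Hypothetical async send: fails with TimeoutException when the network is down.
    static CompletableFuture<String> send(String value, boolean networkUp) {
        return networkUp
            ? CompletableFuture.completedFuture("offset-42")
            : CompletableFuture.failedFuture(new TimeoutException("broker unreachable"));
    }

    public static void main(String[] args) {
        // Same (result, exception) shape as a Kafka producer callback.
        send("value", false).whenComplete((metadata, exception) -> {
            if (exception != null) {
                System.out.println("Send failed: " + exception.getClass().getSimpleName());
            } else {
                System.out.println("Send succeeded: " + metadata);
            }
        });
    }
}
```

As in Kafka's Callback, exactly one of the two parameters is non-null, so the null check on the exception cleanly separates the success and failure paths.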
🧠 Conceptual
Advanced
Which error handling strategy prevents message loss in Kafka consumers?
You want to ensure no messages are lost even if processing fails. Which approach is best for error handling in Kafka consumers?
💡 Hint
Think about when to commit offsets to avoid losing unprocessed messages.
📝 Explanation
Committing offsets only after successful processing ensures that if processing fails, the consumer will re-read the message on restart, preventing message loss.
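The commit-after-processing (at-least-once) behavior can be simulated in plain Java. This sketch uses a list as a stand-in for a partition log and an offset counter for the committed position; a record that fails is re-read on the next pass because its offset was never committed.

```java
import java.util.ArrayList;
import java.util.List;

public class AtLeastOnceSketch {
    // Simulated consumer loop: the offset advances ONLY after a record is
    // processed successfully, so a failed record is re-read (at-least-once).
    static List<String> consume(List<String> log) {
        List<String> processed = new ArrayList<>();
        long committed = 0;
        boolean healed = false; // the flaky record succeeds on its second attempt
        while (committed < log.size()) {
            String record = log.get((int) committed);
            try {
                if (record.equals("boom") && !healed) {
                    healed = true;
                    throw new RuntimeException("processing failed");
                }
                processed.add(record);
                committed++; // commit only after success
            } catch (RuntimeException e) {
                System.out.println("Failure at offset " + committed + ", re-reading");
            }
        }
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(AtLeastOnceSketch.consume(List.of("a", "boom", "b")));
    }
}
```

Note the trade-off this strategy accepts: a record may be processed more than once after a crash between processing and commit, so downstream handling should be idempotent.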
❓ Predict Output
Advanced
What error does this Kafka consumer code raise?
Analyze the following Kafka consumer code snippet and identify the error it raises at runtime.
Kafka
consumer.subscribe(Collections.singletonList("topic"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.println(record.value().toUpperCase());
    }
    consumer.commitSync();
}
consumer.close();
💡 Hint
Consider the code flow and whether it raises a runtime exception.
📝 Explanation
The consumer.close() statement is unreachable because the while (true) loop contains no break. In Java, unreachable statements are a compile-time error ("unreachable statement"), not merely a warning, so the snippet as written does not compile. With that line removed (or a break added), the loop runs without throwing: it polls records, prints each value in uppercase (the for-each loop handles empty batches safely), commits offsets with commitSync(), and continues indefinitely.
🧠 Conceptual
Expert
How do you handle poison pill messages in Kafka consumers to avoid infinite processing loops?
A poison pill message causes your consumer to fail processing repeatedly. What is the best error handling approach to avoid infinite retries on such messages?
💡 Hint
Think about how to isolate bad messages without losing them.
📝 Explanation
Routing poison pill messages to a dead-letter topic after a bounded number of retries prevents infinite reprocessing loops while preserving the messages for later inspection and reprocessing.
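The bounded-retry-then-dead-letter pattern can be sketched without Kafka. In this simulation, `process` is a hypothetical handler where "poison" always fails, and a plain list stands in for the dead-letter topic; in a real setup the final step would produce the record to a separate topic instead.

```java
import java.util.ArrayList;
import java.util.List;

public class DeadLetterSketch {
    static final int MAX_RETRIES = 3;
    static final List<String> deadLetters = new ArrayList<>(); // stand-in for a DLQ topic

    // Hypothetical handler: the "poison" record always fails.
    static void process(String record) {
        if (record.equals("poison")) throw new RuntimeException("cannot process");
        System.out.println("Processed " + record);
    }

    // Retry a bounded number of times, then route to the dead-letter queue
    // instead of blocking the partition forever.
    static void handle(String record) {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                process(record);
                return;
            } catch (RuntimeException e) {
                System.out.println("Attempt " + attempt + " failed for " + record);
            }
        }
        deadLetters.add(record); // preserved for later inspection
        System.out.println("Routed " + record + " to dead-letter queue");
    }

    public static void main(String[] args) {
        handle("good");
        handle("poison");
        System.out.println("DLQ contents: " + deadLetters);
    }
}
```

Because the bad record is moved aside rather than dropped, the consumer's offset can advance past it while the original payload stays available for debugging or replay.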