Challenge - 5 Problems
Kafka Advanced Flow Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate · 2:00 remaining
What is the output of this Kafka Streams code snippet?
Consider this Kafka Streams Java code that processes a stream of user clicks and counts clicks per user in a 1-minute window. What will be the output printed to the console?
Java
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> clicks = builder.stream("clicks-topic");
KTable<Windowed<String>, Long> clickCounts = clicks
    .groupByKey()
    .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
    .count();
clickCounts.toStream().foreach((windowedUser, count) -> {
    System.out.println(windowedUser.key() + "@" + windowedUser.window().start() + ": " + count);
});
Attempts: 2 left
💡 Hint
Think about how windowed aggregations include window info in the key.
✗ Incorrect
The code groups clicks by user key and counts them in 1-minute windows. The output includes the user ID and the window start timestamp, showing counts per window.
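To make the output shape concrete, here is a minimal sketch that mirrors only the println format from the snippet above; the user ID, window-start timestamp, and count are hypothetical values chosen for illustration:

```java
public class WindowedOutputDemo {
    // Mirrors the string built in the foreach of the snippet above.
    static String format(String key, long windowStart, long count) {
        return key + "@" + windowStart + ": " + count;
    }

    public static void main(String[] args) {
        // Hypothetical: user "alice" with 3 clicks in the window starting
        // at epoch-millis 1700000060000.
        System.out.println(format("alice", 1700000060000L, 3L));
        // Prints: alice@1700000060000: 3
    }
}
```

Because the key is a `Windowed<String>`, each printed line identifies both the user and the specific 1-minute window the count belongs to.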
🧠 Conceptual
Intermediate · 1:30 remaining
Why use advanced Kafka patterns like exactly-once semantics?
Which reason best explains why advanced Kafka patterns such as exactly-once semantics are important in complex data flows?
Attempts: 2 left
💡 Hint
Think about data accuracy and consistency in important systems.
✗ Incorrect
Exactly-once semantics guarantee that each message is processed once and only once, which is crucial for systems like payments where duplicates cause errors.
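In Kafka Streams, enabling this guarantee is a single configuration setting. A sketch (the application id and broker address are placeholders; `EXACTLY_ONCE_V2` requires brokers on 2.5+ and a 3.0+ Streams client):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payments-app");      // placeholder id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
// One line opts the whole topology into transactional, exactly-once processing.
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
```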
🔧 Debug
Advanced · 2:30 remaining
Identify the error in this Kafka Streams topology code
This code snippet attempts to join two streams but throws an exception at runtime. What is the cause?
Java
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> stream1 = builder.stream("topic1");
KStream<String, String> stream2 = builder.stream("topic2");
KStream<String, String> joined = stream1.join(
    stream2,
    (v1, v2) -> v1 + ":" + v2,
    JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofSeconds(30))
);
joined.to("joined-topic");
Attempts: 2 left
💡 Hint
Check if serializers and deserializers are properly set for keys and values.
✗ Incorrect
Kafka Streams requires proper Serdes for keys and values to serialize data during joins. Missing or wrong Serdes cause runtime serialization exceptions.
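One common fix is to supply explicit Serdes at the join via `StreamJoined`, rather than relying on the application's default serdes. A sketch, assuming String keys and values:

```java
import java.time.Duration;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.StreamJoined;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> stream1 = builder.stream("topic1");
KStream<String, String> stream2 = builder.stream("topic2");

KStream<String, String> joined = stream1.join(
    stream2,
    (v1, v2) -> v1 + ":" + v2,
    JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofSeconds(30)),
    // Explicit key, value, and other-value Serdes for the join's state stores.
    StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String())
);
joined.to("joined-topic");
```

Passing `StreamJoined` keeps the join's internal window stores from falling back to defaults that may not match the actual record types.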
📝 Syntax
Advanced · 2:00 remaining
Which option correctly defines a Kafka Streams topology with a filter and map operation?
Select the code snippet that compiles and runs without errors, applying a filter and map on a KStream.
Attempts: 2 left
💡 Hint
Check lambda syntax and return types for filter and map.
✗ Incorrect
Option A uses correct lambda syntax with parentheses and returns a KeyValue pair in map. Others have syntax errors or wrong return types.
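For reference, a topology in that correct shape might look like the following sketch (topic names are placeholders): `filter` takes a key-value predicate returning `boolean`, and `map` returns a new `KeyValue` pair.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> words = builder.stream("input-topic");

KStream<String, Integer> lengths = words
    // Predicate over (key, value): drop null or empty values.
    .filter((key, value) -> value != null && !value.isEmpty())
    // map must return a KeyValue; here the value becomes its length.
    .map((key, value) -> KeyValue.pair(key, value.length()));

lengths.to("lengths-topic", Produced.with(Serdes.String(), Serdes.Integer()));
```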
🚀 Application
Expert · 3:00 remaining
How does the Kafka Streams Processor API handle complex event flows differently from the DSL?
Which statement best describes the advantage of using the Processor API over the DSL for complex Kafka event processing?
Attempts: 2 left
💡 Hint
Think about customization and control in processing pipelines.
✗ Incorrect
The Processor API lets developers write custom processors with direct access to state stores and record context, enabling complex logic the DSL cannot express on its own.
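As an illustration of that control, here is a minimal sketch of a deduplicating processor using the modern Processor API: it forwards a record only the first time its key is seen, remembering keys in a state store. The store name is a placeholder, and the store must be registered on the topology separately.

```java
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;

public class DedupProcessor implements Processor<String, String, String, String> {
    private KeyValueStore<String, String> seen;
    private ProcessorContext<String, String> context;

    @Override
    public void init(ProcessorContext<String, String> context) {
        this.context = context;
        // "seen-store" is a placeholder; the store is attached to the topology
        // when the processor is added.
        this.seen = context.getStateStore("seen-store");
    }

    @Override
    public void process(Record<String, String> record) {
        if (seen.get(record.key()) == null) {
            seen.put(record.key(), record.value());
            context.forward(record); // emit only the first occurrence per key
        }
    }
}
```

Stateful, per-record decisions like this, with explicit control over what is stored and when records are forwarded, are exactly what the DSL's fixed operators do not expose.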