Kafka DevOps · ~20 mins

Why advanced patterns handle complex flows in Kafka - Challenge Your Understanding

Challenge - 5 Problems
🎖️
Kafka Advanced Flow Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
Predict Output
intermediate
2:00 remaining
What is the output of this Kafka Streams code snippet?
Consider this Kafka Streams Java code that processes a stream of user clicks and counts clicks per user in a 1-minute window. What will be the output printed to the console?
Java
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> clicks = builder.stream("clicks-topic");
KTable<Windowed<String>, Long> clickCounts = clicks
  .groupByKey()
  .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
  .count();
clickCounts.toStream().foreach((windowedUser, count) -> {
  System.out.println(windowedUser.key() + "@" + windowedUser.window().start() + ": " + count);
});
A. Prints user IDs with counts for each 1-minute window, e.g., user1@1672531200000: 5
B. Prints total counts per user without window info, e.g., user1: 5
C. Prints counts but throws a runtime exception due to missing Serdes
D. Prints counts but only for the last window, ignoring earlier ones
Attempts: 2 left
💡 Hint
Think about how windowed aggregations include window info in the key.
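As the hint suggests, a windowed aggregation wraps each key in a `Windowed<K>` that carries the window's epoch-millis start. The output shape of the `foreach` above can be sketched in plain Java, no Kafka required (window alignment shown for a 1-minute tumbling window):

```java
// Plain-Java sketch of the printed output shape: a windowed key pairs
// the original key with its window's epoch-millis start time.
public class WindowedKeySketch {
    public static void main(String[] args) {
        long eventTimeMs = 1672531230000L;   // an event somewhere inside the window
        long windowSizeMs = 60_000L;         // 1-minute tumbling window
        // Tumbling windows are aligned to the epoch: start = time rounded
        // down to the nearest window boundary.
        long windowStart = eventTimeMs - (eventTimeMs % windowSizeMs);
        long count = 5;
        // Mirrors: windowedUser.key() + "@" + windowedUser.window().start() + ": " + count
        System.out.println("user1@" + windowStart + ": " + count);
        // prints user1@1672531200000: 5
    }
}
```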
🧠 Conceptual
intermediate
1:30 remaining
Why use advanced Kafka patterns like exactly-once semantics?
Which reason best explains why advanced Kafka patterns such as exactly-once semantics are important in complex data flows?
A. To allow messages to be lost safely without affecting downstream systems
B. To ensure messages are processed only once, avoiding duplicates in critical financial transactions
C. To reduce the number of partitions in a topic for simpler management
D. To speed up message delivery by skipping acknowledgments
Attempts: 2 left
💡 Hint
Think about data accuracy and consistency in important systems.
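As a concrete anchor for this concept: in Kafka Streams, exactly-once processing is switched on with a single configuration property (a sketch; `exactly_once_v2` assumes brokers on version 2.5 or newer):

```properties
# Enable exactly-once processing for a Kafka Streams application.
# On brokers older than 2.5, the legacy value "exactly_once" applies.
processing.guarantee=exactly_once_v2
```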
🔧 Debug
advanced
2:30 remaining
Identify the error in this Kafka Streams topology code
This code snippet attempts to join two streams but throws an exception at runtime. What is the cause?
Java
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> stream1 = builder.stream("topic1");
KStream<String, String> stream2 = builder.stream("topic2");
KStream<String, String> joined = stream1.join(stream2,
  (v1, v2) -> v1 + ":" + v2,
  JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofSeconds(30))
);
joined.to("joined-topic");
A. Streams must be grouped before joining, so the join() call is invalid
B. JoinWindows.ofTimeDifferenceWithNoGrace is deprecated and causes a compile error
C. Missing Serdes configuration causes a serialization error during the join
D. The join function must return a boolean, not a string
Attempts: 2 left
💡 Hint
Check if serializers and deserializers are properly set for keys and values.
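For reference, one way the hinted fix looks in code: pass explicit Serdes via `Consumed.with` and `StreamJoined.with` (a fragment sketch assuming String keys and values; like the snippet above, it depends on the kafka-streams library and is not standalone):

```java
// Provide explicit Serdes so the join can (de)serialize keys/values
// and materialize its window stores.
KStream<String, String> stream1 = builder.stream("topic1",
    Consumed.with(Serdes.String(), Serdes.String()));
KStream<String, String> stream2 = builder.stream("topic2",
    Consumed.with(Serdes.String(), Serdes.String()));
KStream<String, String> joined = stream1.join(stream2,
    (v1, v2) -> v1 + ":" + v2,
    JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofSeconds(30)),
    // key serde, this stream's value serde, other stream's value serde
    StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
```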
📝 Syntax
advanced
2:00 remaining
Which option correctly defines a Kafka Streams topology with a filter and map operation?
Select the code snippet that compiles and runs without errors, applying a filter and map on a KStream.
A. builder.stream("input-topic").filter((k,v) -> v.length() > 3).map((k,v) -> KeyValue.pair(k, v.toUpperCase()));
B. builder.stream("input-topic").filter(k,v -> v.length() > 3).map((k,v) -> KeyValue.pair(k, v.toUpperCase()));
C. builder.stream("input-topic").filter((k,v) -> v.length() > 3).map((k,v) -> (k, v.toUpperCase()));
D. builder.stream("input-topic").filter((k,v) -> {v.length() > 3}).map((k,v) -> KeyValue.pair(k, v.toUpperCase()));
Attempts: 2 left
💡 Hint
Check lambda syntax and return types for filter and map.
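The lambda shapes the hint points at can be checked with plain `java.util.stream`, no Kafka needed (a sketch; `Map.entry` stands in for `KeyValue.pair`):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FilterMapSketch {
    public static void main(String[] args) {
        // Plain-Java analog of the KStream pipeline: filter takes a
        // predicate over (key, value); map must RETURN a new pair.
        List<Map.Entry<String, String>> out = List.of(
                Map.entry("k1", "hi"),
                Map.entry("k2", "kafka"))
            .stream()
            .filter(e -> e.getValue().length() > 3)                       // like (k,v) -> v.length() > 3
            .map(e -> Map.entry(e.getKey(), e.getValue().toUpperCase()))  // like KeyValue.pair(k, v.toUpperCase())
            .collect(Collectors.toList());
        System.out.println(out); // [k2=KAFKA]
    }
}
```

Note the two failure modes the wrong options mirror: a lambda over two parameters needs parentheses around both, and a braced lambda body needs an explicit `return`.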
🚀 Application
expert
3:00 remaining
How does the Kafka Streams Processor API handle complex event flows differently from the DSL?
Which statement best describes the advantage of using the Processor API over the DSL for complex Kafka event processing?
A. Processor API disables fault tolerance to improve speed in complex flows
B. Processor API automatically optimizes joins and aggregations without user code
C. Processor API requires less code and is easier for simple filtering tasks
D. Processor API allows fine-grained control over processing logic and state management beyond DSL capabilities
Attempts: 2 left
💡 Hint
Think about customization and control in processing pipelines.
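To make the contrast with the DSL concrete, here is a minimal Processor API sketch (a fragment using the modern `org.apache.kafka.streams.processor.api` interfaces; it assumes a state store named "click-counts" was registered on the topology, and like the DSL snippets above it depends on the kafka-streams library):

```java
// Processor API: you write process() yourself and can read/write state
// stores, schedule punctuations, and forward records at will; control
// the DSL's ready-made operators do not expose.
public class ClickCountProcessor implements Processor<String, String, String, Long> {
    private KeyValueStore<String, Long> store;
    private ProcessorContext<String, Long> context;

    @Override
    public void init(ProcessorContext<String, Long> context) {
        this.context = context;
        this.store = context.getStateStore("click-counts"); // store registered on the topology
    }

    @Override
    public void process(Record<String, String> record) {
        Long count = store.get(record.key());
        long updated = (count == null ? 0L : count) + 1;
        store.put(record.key(), updated);
        context.forward(record.withValue(updated)); // emit an update per click
    }
}
```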