Challenge - 5 Problems
Kafka Configuration Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate · 2:00 remaining
Kafka Producer Configuration Output
What will be the output of the following Kafka producer configuration code snippet when sending a message?
Kafka
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key1", "value1");

producer.send(record, (metadata, exception) -> {
    if (exception == null) {
        System.out.println("Message sent to partition " + metadata.partition()
                + " with offset " + metadata.offset());
    } else {
        System.out.println("Error sending message: " + exception.getMessage());
    }
});
producer.close();
Attempts: 2 left
💡 Hint
Consider the default partitioning behavior and that the callback prints success or error.
✗ Incorrect
The producer is configured correctly and sends a message to the topic 'my-topic'. Since the record has a non-null key and no custom partitioner is set, the default partitioner chooses the partition by hashing the key (murmur2) modulo the number of partitions — deterministic for a given key, but not necessarily partition 0. On success, the callback prints the partition and offset.
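With a non-null key and no custom partitioner, Kafka's default partitioner picks the partition by hashing the serialized key bytes (murmur2 internally) modulo the partition count. A minimal pure-Java sketch of that idea — the polynomial hash below is only an illustrative stand-in for murmur2, not Kafka's actual function:

```java
import java.nio.charset.StandardCharsets;

public class PartitionSketch {
    // Stand-in for the default partitioner's key hashing: Kafka hashes the
    // serialized key bytes with murmur2; this uses a simple byte-wise hash.
    static int partitionFor(String key, int numPartitions) {
        byte[] bytes = key.getBytes(StandardCharsets.UTF_8);
        int hash = 0;
        for (byte b : bytes) {
            hash = 31 * hash + b;               // simple polynomial hash, NOT murmur2
        }
        return (hash & 0x7fffffff) % numPartitions; // clear sign bit, then modulo
    }

    public static void main(String[] args) {
        int p = partitionFor("key1", 3);
        // The same key always maps to the same partition, but not necessarily 0.
        System.out.println("key1 -> partition " + p);
    }
}
```

The point of the sketch: key-based partitioning is deterministic per key, so "which partition?" depends on the hash and the partition count, not on a fixed default.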
❓ Predict Output
Intermediate · 2:00 remaining
Kafka Consumer Configuration Behavior
Given this Kafka consumer configuration, what will be the value of the consumer's group.id after creation?
Kafka
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
// Note: group.id is NOT set here

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
String groupId = consumer.groupMetadata().groupId();
System.out.println(groupId);
Attempts: 2 left
💡 Hint
group.id is required for consumers to join a group.
✗ Incorrect
A Kafka consumer requires a group.id to use group management or offset-commit APIs. If it is missing, calling consumer.groupMetadata() throws an InvalidGroupIdException rather than returning a default group name.
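A minimal sketch of the fix: set group.id before constructing the consumer. The group name "my-consumer-group" is just an example; only java.util.Properties is used here, so the snippet runs without a broker.

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-consumer-group"); // required for group management APIs
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        // With group.id present, new KafkaConsumer<>(props) can join a group,
        // and groupMetadata().groupId() would return "my-consumer-group".
        System.out.println(build().getProperty("group.id"));
    }
}
```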
🧠 Conceptual
Advanced · 2:00 remaining
Best Practice for Kafka Broker Configuration
Which of the following is the best practice for configuring Kafka broker's log retention to balance disk usage and data availability?
Attempts: 2 left
💡 Hint
Think about balancing data availability and disk space.
✗ Incorrect
Setting log retention to a reasonable time based on business needs ensures data is available for consumers while preventing disk from filling up. Monitoring disk usage helps adjust settings proactively.
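One way to express that balance is a broker-side retention policy in server.properties; the values below are illustrative defaults-adjacent settings, to be tuned to actual business needs and disk capacity:

```properties
# Time-based retention: keep data for 7 days (168 hours is also Kafka's default)
log.retention.hours=168
# Size-based cap per partition so a traffic burst cannot fill the disk (1 GiB here)
log.retention.bytes=1073741824
# How often the broker checks for log segments eligible for deletion (5 minutes)
log.retention.check.interval.ms=300000
```

Retention triggers on whichever limit is hit first, so combining a time limit with a size cap covers both the availability and the disk-usage side of the trade-off.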
❓ Predict Output
Advanced · 2:00 remaining
Effect of Incorrect Serializer Configuration
What will happen if a Kafka producer is configured with a StringSerializer for the key and an IntegerSerializer for the value, but the value actually sent is a String?
Kafka
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key1", "value1");
producer.send(record);
producer.close();
Attempts: 2 left
💡 Hint
Check if the serializer matches the data type sent.
✗ Incorrect
The IntegerSerializer expects Integer values. Sending a String value causes a SerializationException at runtime, because the type mismatch is only detected when the producer tries to serialize the record; the generic type parameters are not checked against the configured serializer classes.
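A sketch of the fix: match value.serializer to the value type actually sent. Only java.util.Properties is used, so the snippet runs without a broker; the class name is the real Kafka StringSerializer.

```java
import java.util.Properties;

public class SerializerFixSketch {
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Match the value serializer to the value type actually sent (String here).
        // Alternatively, keep IntegerSerializer and declare the producer as
        // KafkaProducer<String, Integer>, sending Integer values.
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        // With matching serializers, KafkaProducer<String, String> can
        // serialize "value1" without a SerializationException.
        System.out.println(build().getProperty("value.serializer"));
    }
}
```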
🧠 Conceptual
Expert · 2:00 remaining
Optimizing Kafka Consumer Throughput
Which configuration change is most effective to increase Kafka consumer throughput when processing large volumes of messages?
Attempts: 2 left
💡 Hint
Think about batch size and fetch size for throughput.
✗ Incorrect
Increasing max.poll.records allows the consumer to fetch more messages per poll. Increasing fetch.min.bytes makes the broker wait to accumulate more data before responding. Both improve throughput.
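Those two knobs (plus the broker-wait timeout that bounds fetch.min.bytes) are all real consumer configs; the values below are an illustrative starting point, not a universal recommendation:

```properties
# Return up to 1000 records per poll() call (Kafka's default is 500)
max.poll.records=1000
# Let the broker accumulate at least 64 KB of data before answering a fetch
fetch.min.bytes=65536
# ...but never make the consumer wait longer than 500 ms for that minimum
fetch.max.wait.ms=500
```

The trade-off: larger fetches raise throughput at the cost of per-message latency, and a bigger max.poll.records means each poll loop must finish within max.poll.interval.ms or the consumer is kicked from the group.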