Kafka · DevOps · ~20 mins

Configuration best practices in Kafka - Practice Problems & Coding Challenges

Challenge: 5 Problems
🎖️ Kafka Configuration Master: get all challenges correct to earn this badge!
Test your skills under time pressure!
Predict Output · Intermediate
Kafka Producer Configuration Output
What will be the output of the following Kafka producer configuration code snippet when sending a message?
Java
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key1", "value1");
producer.send(record, (metadata, exception) -> {
    if (exception == null) {
        System.out.println("Message sent to partition " + metadata.partition() + " with offset " + metadata.offset());
    } else {
        System.out.println("Error sending message: " + exception.getMessage());
    }
});
producer.close();
A. No output because producer.send is asynchronous and the program exits immediately
B. Error sending message: UnknownTopicOrPartitionException
C. Message sent to partition -1 with offset -1
D. Message sent to partition 0 with offset 0
💡 Hint
Consider the default partitioning behavior and that the callback prints success or error.
Predict Output · Intermediate
Kafka Consumer Configuration Behavior
Given this Kafka consumer configuration, what will be the value of the consumer's group.id after creation?
Java
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
// Note: group.id is NOT set here

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
String groupId = consumer.groupMetadata().groupId();
System.out.println(groupId);
A. Throws IllegalStateException
B. default
C. empty string
D. null
💡 Hint
group.id is required for consumers to join a group.
🧠 Conceptual · Advanced
Best Practice for Kafka Broker Configuration
Which of the following is the best practice for configuring a Kafka broker's log retention to balance disk usage and data availability?
A. Set log.retention.hours to a reasonable time based on business needs and monitor disk usage
B. Set log.retention.bytes to a small value to delete logs quickly
C. Disable log retention and manually delete logs when the disk is full
D. Set log.retention.hours to a very high value and never delete old logs
💡 Hint
Think about balancing data availability and disk space.
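For context on the retention trade-off behind this question, a broker server.properties fragment combining time-based and size-based retention might be sketched like this (the seven-day window and ~10 GiB cap are illustrative assumptions, not values endorsed by this challenge):

```properties
# Retain log segments for 7 days; tune this window to business and compliance needs
log.retention.hours=168
# Also cap each partition's log at ~10 GiB so a traffic spike cannot fill the disk
log.retention.bytes=10737418240
# How often the broker checks whether any segment is eligible for deletion
log.retention.check.interval.ms=300000
```

Whichever limit is hit first triggers deletion, so the byte cap acts as a safety net under the time-based policy; monitoring actual disk usage remains essential either way.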
Predict Output · Advanced
Effect of Incorrect Serializer Configuration
What happens if a Kafka producer is configured with a StringSerializer for the key and an IntegerSerializer for the value, but the value sent is a String?
Java
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key1", "value1");
producer.send(record);
producer.close();
A. Message is sent successfully with the value serialized as a string
B. Message is sent but the value is null
C. SerializationException is thrown at runtime
D. Compile-time error due to type mismatch
💡 Hint
Check if the serializer matches the data type sent.
🧠 Conceptual · Expert
Optimizing Kafka Consumer Throughput
Which configuration change is most effective to increase Kafka consumer throughput when processing large volumes of messages?
A. Set session.timeout.ms to a very low value to detect failures faster
B. Set max.poll.records to a higher number and increase fetch.min.bytes
C. Set enable.auto.commit to false and commit offsets manually after each message
D. Set auto.offset.reset to earliest to always read from the beginning
💡 Hint
Think about batch size and fetch size for throughput.
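As a sketch of the batching-oriented tuning this question points at, a consumer configuration fragment might look like the following (the specific numbers are illustrative assumptions to be tuned against real workloads):

```properties
# Hand more records to each poll() call so one processing loop covers a larger batch
max.poll.records=1000
# Ask the broker to accumulate at least 64 KiB per fetch; trades a little latency for throughput
fetch.min.bytes=65536
# Upper bound on how long the broker may delay a fetch while filling fetch.min.bytes
fetch.max.wait.ms=500
```

Raising max.poll.records only helps if the processing loop can finish the batch within max.poll.interval.ms, so these settings are usually tuned together.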