Kafka · DevOps · ~20 mins

Consumer throughput optimization in Kafka - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Kafka Throughput Master: get all challenges correct to earn this badge. Test your skills under time pressure!
Predict Output · intermediate · 2:00
What is the output of this Kafka consumer throughput calculation?

Consider a Kafka consumer configured with the following settings:

max.poll.records=500
fetch.min.bytes=1048576   # 1 MB
fetch.max.wait.ms=500

If the consumer processes 1000 messages per second, what is the expected throughput in MB per second assuming each message is 1KB?

A. Approximately 500 MB/s
B. Approximately 0.5 MB/s
C. Approximately 2 MB/s
D. Approximately 1 MB/s
💡 Hint: Think about how message size and processing rate relate to throughput.
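The arithmetic this kind of problem tests can be sanity-checked with a few lines of plain Java; the rate and message size below are taken straight from the problem statement:

```java
public class ThroughputCheck {
    public static void main(String[] args) {
        // From the problem: 1000 messages per second, 1 KB (1024 bytes) each.
        double messagesPerSecond = 1000.0;
        double messageSizeBytes = 1024.0;

        // Throughput = rate x size, converted from bytes/s to MB/s.
        double throughputMBps = messagesPerSecond * messageSizeBytes / (1024.0 * 1024.0);
        System.out.printf("Throughput: %.2f MB/s%n", throughputMBps); // prints 0.98
    }
}
```

Round the printed value to the nearest answer choice.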

🧠 Conceptual · intermediate · 1:30
Which Kafka consumer setting most directly improves throughput by increasing batch size?

Which configuration option should you increase to let the Kafka consumer return more records per poll() call and improve throughput?

A. auto.offset.reset
B. session.timeout.ms
C. max.poll.records
D. enable.auto.commit
💡 Hint: Look for the setting that controls how many records are returned in one poll.
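For context, all four options are set the same way on the consumer's Properties object. A minimal sketch of the batch-related settings (the values here are illustrative, not recommendations):

```java
import java.util.Properties;

public class BatchConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Cap on records handed to the application per poll() call.
        props.put("max.poll.records", "500");
        // Minimum bytes the broker should accumulate before answering a fetch.
        props.put("fetch.min.bytes", "1048576");
        // How long the broker may wait for fetch.min.bytes to fill up.
        props.put("fetch.max.wait.ms", "500");
        System.out.println(props.getProperty("max.poll.records"));
    }
}
```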

🔧 Debug · advanced · 2:30
Why does this Kafka consumer code cause low throughput?

Review the following Kafka consumer code snippet:

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        processRecord(record);
        consumer.commitSync();
    }
}

What is the main reason this code causes low throughput?

A. Not setting max.poll.records limits batch size to 1
B. Calling commitSync() inside the loop causes frequent commits, slowing processing
C. processRecord() is called outside the poll loop, causing missed messages
D. Polling with Duration.ofMillis(100) is too long and delays fetching
💡 Hint: Think about how often offsets are committed, and what each commit costs.
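To build intuition for why commit frequency matters, here is a toy cost model in plain Java. The latency numbers are made-up assumptions, not Kafka measurements; it simply compares committing after every record with committing once per batch:

```java
public class CommitCostModel {
    public static void main(String[] args) {
        int recordsPerBatch = 500;  // assumed records returned by one poll()
        double processMs = 0.1;     // assumed per-record processing time
        double commitMs = 5.0;      // assumed synchronous commit round-trip

        // Committing after every record: one commit round-trip per record.
        double perRecordCommit = recordsPerBatch * (processMs + commitMs);

        // Committing once per batch: a single commit at the end.
        double perBatchCommit = recordsPerBatch * processMs + commitMs;

        System.out.printf("commit per record: %.0f ms%n", perRecordCommit); // 2550 ms
        System.out.printf("commit per batch:  %.0f ms%n", perBatchCommit); // 55 ms
    }
}
```

With these assumed numbers, per-record synchronous commits make the batch dozens of times slower, because commit latency dominates processing time.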

📝 Syntax · advanced · 2:00
Which Kafka consumer configuration snippet correctly sets fetch size to optimize throughput?

Choose the correct Java code snippet to set the fetch.min.bytes to 1MB and fetch.max.wait.ms to 500ms in Kafka consumer properties.

A. props.put("fetch.min.bytes", "1048576"); props.put("fetch.max.wait.ms", "500");
B. props.put(fetch.min.bytes, 1048576); props.put(fetch.max.wait.ms, 500);
C. props.put("fetch.min.bytes", 1048576); props.put("fetch.max.wait.ms", 500);
D. props.put("fetch.min.bytes", 1_048_576); props.put("fetch.max.wait.ms", 500L);
💡 Hint: Kafka consumer properties conventionally expect string values for configuration.
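One concrete reason string values are the safe choice can be demonstrated with java.util.Properties alone, no Kafka required: getProperty() only sees values that were stored as Strings.

```java
import java.util.Properties;

public class PropsStringDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("fetch.min.bytes", "1048576"); // stored as a String
        props.put("fetch.max.wait.ms", 500);     // stored as an Integer

        // getProperty() returns null when the stored value is not a String.
        System.out.println(props.getProperty("fetch.min.bytes"));   // 1048576
        System.out.println(props.getProperty("fetch.max.wait.ms")); // null
    }
}
```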

🚀 Application · expert · 3:00
How many messages will be processed per poll with these settings?

A Kafka consumer is configured with max.poll.records=1000 and fetch.min.bytes=500000. The producer sends messages of size 1KB each. If the broker has 2000 messages ready, how many messages will the consumer receive in one poll?

A. 1000 messages
B. 500 messages
C. 2000 messages
D. Cannot determine without fetch.max.wait.ms
💡 Hint: Consider max.poll.records as the upper limit per poll.
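The cap the hint describes can be written out in plain Java, with the numbers taken from the problem statement:

```java
public class PollCapCheck {
    public static void main(String[] args) {
        int maxPollRecords = 1000; // max.poll.records from the problem
        int readyOnBroker = 2000;  // messages already available on the broker

        // poll() never returns more than max.poll.records to the application,
        // regardless of how much data the underlying fetch brought back.
        int returnedByPoll = Math.min(readyOnBroker, maxPollRecords);
        System.out.println(returnedByPoll); // 1000
    }
}
```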