Consider a Kafka consumer configured with the following settings:
max.poll.records=500
fetch.min.bytes=1048576 (1 MB; this setting takes a byte count, not a unit suffix)
fetch.max.wait.ms=500
If the consumer processes 1000 messages per second, what is the expected throughput in MB per second assuming each message is 1KB?
Think about how message size and processing rate relate to throughput.
Each message is 1 KB, so 1000 messages per second is 1000 KB/s = 1 MB/s of throughput.
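The arithmetic can be checked with a tiny sketch (decimal units are assumed here, i.e. 1 MB = 1000 KB, matching the answer above; the helper name is made up for illustration):

```java
public class ThroughputCheck {
    // Throughput in MB/s, using decimal units (1 MB = 1000 KB) as the answer does.
    static double throughputMBps(int messagesPerSecond, int messageSizeKB) {
        return messagesPerSecond * messageSizeKB / 1000.0;
    }

    public static void main(String[] args) {
        System.out.println(throughputMBps(1000, 1)); // prints 1.0
    }
}
```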
Which configuration option should you increase to allow the Kafka consumer to fetch more data per request and improve throughput?
Look for the setting that controls how many records are fetched in one poll.
max.poll.records controls the maximum number of records returned in a single poll; increasing it lets the consumer process larger batches per poll, which can improve throughput.
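A minimal sketch of raising this setting in consumer properties (the raw config key string is used here instead of the kafka-clients ConsumerConfig constant so the snippet runs without the Kafka library on the classpath):

```java
import java.util.Properties;

public class ConsumerBatchConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Raise the per-poll record cap above the default of 500.
        props.setProperty("max.poll.records", "1000");
        System.out.println(props.getProperty("max.poll.records")); // prints 1000
    }
}
```

These same properties would then be passed to the KafkaConsumer constructor along with the usual bootstrap and deserializer settings.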
Review the following Kafka consumer code snippet:
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        processRecord(record);
        consumer.commitSync();
    }
}

What is the main reason this code causes low throughput?
Think about how committing offsets affects throughput.
Calling commitSync() inside the for loop issues a synchronous offset commit after every record; each commit blocks until the broker acknowledges it, and that round-trip overhead dominates processing time. Committing once per batch, after the for loop, is far more efficient.
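To quantify the difference, here is a self-contained simulation (plain Java, no Kafka dependency; the two methods are hypothetical stand-ins that only count how many blocking commitSync() calls each strategy would make for one batch):

```java
public class CommitOverhead {
    // Strategy from the snippet above: commitSync() after every record.
    static int commitsPerRecord(int batchSize) {
        int commits = 0;
        for (int i = 0; i < batchSize; i++) {
            // processRecord(record) would run here
            commits++; // one blocking broker round trip per record
        }
        return commits;
    }

    // Fixed strategy: process the whole batch, then commit once.
    static int commitsPerBatch(int batchSize) {
        for (int i = 0; i < batchSize; i++) {
            // processRecord(record) would run here
        }
        return 1; // a single blocking round trip for the batch
    }

    public static void main(String[] args) {
        System.out.println(commitsPerRecord(500)); // prints 500
        System.out.println(commitsPerBatch(500));  // prints 1
    }
}
```

In the real consumer code the fix is simply to move consumer.commitSync() out of the for loop, so it runs once after the batch has been processed.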
Choose the correct Java code snippet to set the fetch.min.bytes to 1MB and fetch.max.wait.ms to 500ms in Kafka consumer properties.
Kafka consumer properties expect string values for configuration.
When configs are supplied via java.util.Properties, values are passed as strings, so numeric settings like fetch.min.bytes are written as e.g. "1048576" rather than as an int.
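A minimal sketch of such a snippet (raw config key strings are used so it runs without kafka-clients on the classpath; with the library available you would typically use the ConsumerConfig constants instead):

```java
import java.util.Properties;

public class FetchTuningConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // fetch.min.bytes is a byte count: 1 MB = 1048576 bytes.
        props.setProperty("fetch.min.bytes", "1048576");
        // fetch.max.wait.ms is in milliseconds.
        props.setProperty("fetch.max.wait.ms", "500");
        System.out.println(props.getProperty("fetch.min.bytes"));   // prints 1048576
        System.out.println(props.getProperty("fetch.max.wait.ms")); // prints 500
    }
}
```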
A Kafka consumer is configured with max.poll.records=1000 and fetch.min.bytes=500000. The producer sends messages of size 1KB each. If the broker has 2000 messages ready, how many messages will the consumer receive in one poll?
Consider max.poll.records as the upper limit per poll.
The consumer will receive up to max.poll.records (1000) messages per poll, even if more are available.
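The cap can be expressed as a one-line computation (the helper name is made up for illustration):

```java
public class PollCap {
    // Records returned in one poll are bounded by max.poll.records,
    // no matter how many records the broker has buffered.
    static int recordsReturned(int availableRecords, int maxPollRecords) {
        return Math.min(availableRecords, maxPollRecords);
    }

    public static void main(String[] args) {
        System.out.println(recordsReturned(2000, 1000)); // prints 1000
    }
}
```

The remaining 1000 records stay buffered and are returned by subsequent poll() calls.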