KafkaDebug / FixIntermediate · 4 min read

How to Handle a Slow Consumer in Kafka: Causes and Fixes

A slow consumer in Kafka happens when the consumer cannot keep up with the rate of incoming messages, causing lag. To handle this, increase consumer processing speed, adjust consumer configurations like fetch.min.bytes and max.poll.records, or scale out consumers in a consumer group.
🔍

Why This Happens

A slow consumer occurs when the consumer application processes messages more slowly than Kafka produces them. The consumer falls behind, the gap between its committed offsets and the latest offsets (the lag) keeps growing, and if the lag outlasts the topic's retention period, messages can be deleted before they are ever consumed.

Common causes include heavy message processing logic, small max.poll.records limiting batch size, or network delays.

java
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test-group");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("my-topic"));

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        // Heavy processing causing delay
        try {
            Thread.sleep(500); // Simulates slow processing
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
        System.out.printf("offset = %d, key = %s, value = %s\n", record.offset(), record.key(), record.value());
    }
}
Output
Consumer lags behind; offsets increase; no error but processing is slow and backlog grows.
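The lag dynamic above can be shown with a small back-of-the-envelope simulation; no broker is needed, and the rates below are made-up, illustrative numbers. If the producer writes 100 messages per second and the consumer handles only 2 per second (one message per 500 ms of processing), the backlog grows by 98 messages every second:

```java
public class LagSimulation {
    public static void main(String[] args) {
        // Illustrative, assumed rates: producer at 100 msg/s,
        // consumer at 2 msg/s (one message per 500 ms of processing).
        int produceRatePerSec = 100;
        int consumeRatePerSec = 2;

        long backlog = 0;
        for (int second = 1; second <= 10; second++) {
            // Each second, the backlog grows by the rate difference.
            backlog += produceRatePerSec - consumeRatePerSec;
            System.out.printf("after %2d s: consumer lag = %d messages%n", second, backlog);
        }
        // Lag grows linearly: 98 messages per second at these rates,
        // reaching 980 after 10 seconds.
    }
}
```

No error is ever thrown in this situation, which is why monitoring lag matters: the failure mode is silent growth, not an exception.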
🔧

The Fix

To fix a slow consumer, increase max.poll.records so each poll returns a larger batch, reduce the processing time per message, or run multiple consumers in the same group so partitions are shared. Also tune fetch.min.bytes and fetch.max.wait.ms to balance fetch size against latency. Keep in mind that a larger batch only helps if per-message processing is fast; otherwise it increases the risk of exceeding max.poll.interval.ms and triggering a rebalance.

java
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test-group");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("max.poll.records", "1000"); // Larger batch per poll (the default is 500)

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("my-topic"));

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        // Optimized processing
        System.out.printf("offset = %d, key = %s, value = %s\n", record.offset(), record.key(), record.value());
    }
}
Output
Consumer processes messages faster; lag reduces; offsets stay close to latest.
🛡️

Prevention

To prevent slow consumer issues, follow these best practices:

  • Use multiple consumers in a consumer group to share the load.
  • Optimize message processing logic to be fast and non-blocking.
  • Tune consumer configs like max.poll.records, fetch.min.bytes, and fetch.max.wait.ms for your workload.
  • Monitor consumer lag regularly using Kafka tools or metrics.
  • Consider increasing partitions to allow more parallelism.
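One way to keep processing fast and non-blocking, sketched below with plain strings standing in for ConsumerRecords so it runs without a broker, is to hand each polled batch to a worker pool so heavy work never blocks the poll loop. Note a caveat this sketch glosses over: with enable.auto.commit, offsets may be committed before the workers finish, so in production you would disable auto-commit and commit manually once a batch completes.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkerPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        // Worker pool that does the heavy per-message work off the poll thread.
        ExecutorService workers = Executors.newFixedThreadPool(4);
        AtomicInteger processed = new AtomicInteger();

        // Stand-in for one batch returned by consumer.poll(); in a real
        // consumer these would be ConsumerRecord<String, String> values.
        List<String> batch = List.of("msg-1", "msg-2", "msg-3", "msg-4");

        for (String record : batch) {
            // The "poll loop" only submits work, so it can return to
            // poll() quickly instead of blocking on each message.
            workers.submit(() -> {
                // Heavy processing would happen here, in parallel.
                processed.incrementAndGet();
            });
        }

        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("processed " + processed.get() + " records");
    }
}
```

This decoupling keeps the interval between poll() calls short, which also helps avoid the rebalancing problem described under Related Errors below.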
⚠️

Related Errors

Similar issues include:

  • Consumer group rebalancing too often: Caused by long gaps between poll() calls due to slow processing; fix by speeding up processing or increasing max.poll.interval.ms.
  • Offset commit failures: Happens if consumer is too slow or crashes; fix by committing offsets properly and handling exceptions.
  • OutOfMemory errors: When consumer buffers grow due to slow processing; fix by tuning fetch sizes and processing speed.

Key Takeaways

  • Slow consumers cause lag by processing messages slower than Kafka produces them.
  • Increase max.poll.records and optimize processing to handle messages faster.
  • Use multiple consumers in a group to share the load and reduce lag.
  • Tune consumer fetch settings and monitor lag to prevent slow consumer issues.
  • Optimize processing logic and increase partitions for better parallelism.