Kafka · DevOps · ~20 mins

Exactly-once semantics (EOS) in Kafka - Practice Problems & Coding Challenges

Challenge: 5 Problems
🎖️ Kafka EOS Master
Get all challenges correct to earn this badge!
Problem 1: Predict Output (intermediate)
What is the output of this Kafka producer code snippet with EOS enabled?

Consider the following Kafka producer code snippet configured for exactly-once semantics (EOS). What will be the output printed after sending the message?

Java
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("enable.idempotence", "true");
props.put("acks", "all");
props.put("transactional.id", "txn-1");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.initTransactions();

try {
    producer.beginTransaction();
    producer.send(new ProducerRecord<>("my-topic", "key1", "value1"));
    producer.commitTransaction();
    System.out.println("Message sent successfully");
} catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
    producer.close();
    System.out.println("Fatal error, producer closed");
} catch (KafkaException e) {
    producer.abortTransaction();
    System.out.println("Transaction aborted");
}
A. Fatal error, producer closed
B. No output printed
C. Transaction aborted
D. Message sent successfully
💡 Hint

Think about what happens when the transaction commits successfully without exceptions.

Problem 2: Predict Output (intermediate)
What error does this Kafka consumer code raise when using EOS incorrectly?

Given the following Kafka consumer code snippet that attempts to read committed messages with isolation level set incorrectly, what error will it raise?

Java
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "group1");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("enable.auto.commit", "false");
props.put("isolation.level", "read_uncommitted");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("my-topic"));

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.println(record.value());
    }
}
A. No error, messages may include uncommitted data
B. org.apache.kafka.common.errors.InvalidConfigurationException
C. org.apache.kafka.clients.consumer.CommitFailedException
D. org.apache.kafka.common.errors.TransactionAbortedException
💡 Hint

Consider what happens when the isolation level is set to read_uncommitted.
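For comparison, a consumer configured the EOS-correct way uses the read_committed isolation level, which hides records belonging to transactions that are still open or were aborted. A minimal sketch (the broker address and group id are placeholders):

```properties
# consumer.properties (sketch) -- EOS-correct consumer settings
bootstrap.servers=localhost:9092
group.id=group1
enable.auto.commit=false
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
# read_committed returns only records from committed transactions
isolation.level=read_committed
```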

Problem 3: 🧠 Conceptual (advanced)
Which Kafka feature ensures exactly-once semantics in stream processing?

Which Kafka feature is primarily responsible for enabling exactly-once semantics (EOS) in Kafka Streams applications?

A. Consumer offset auto-commit
B. Idempotent producer combined with transactional writes
C. Partition key hashing
D. Log compaction
💡 Hint

Think about how Kafka prevents duplicate writes and partial commits.
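As background, the producer-side settings that prevent duplicate writes and partial commits can be sketched as a config fragment (the transactional.id value is a placeholder):

```properties
# producer.properties (sketch) -- settings underlying EOS
# Idempotence de-duplicates broker-side retries per partition.
enable.idempotence=true
# Required durability setting when idempotence is enabled.
acks=all
# Enables atomic multi-partition writes (all-or-nothing commits).
transactional.id=my-app-producer-1
```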

Problem 4: Predict Output (advanced)
What is the output of this Kafka Streams code with EOS enabled?

Given the following Kafka Streams code snippet with exactly-once processing enabled, what will be the output printed?

Java
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "app1");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> input = builder.stream("input-topic");

input.mapValues(value -> value.toUpperCase())
     .to("output-topic");

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();

System.out.println("Streams started with EOS");
A. Streams started with EOS
B. Streams started without EOS
C. RuntimeException due to missing transactional.id
D. No output printed
💡 Hint

Check the processing guarantee configuration and what the code prints.

Problem 5: 🧠 Conceptual (expert)
What happens if a Kafka producer with EOS loses its transactional.id?

In Kafka exactly-once semantics, what is the consequence if a producer loses or changes its transactional.id between restarts?

A. The consumer offsets are reset automatically to earliest
B. The producer continues normally without any impact on EOS guarantees
C. The producer is fenced and cannot commit previous transactions, preventing duplicates
D. The broker deletes all previous transactional data for that producer
💡 Hint

Consider how Kafka prevents duplicate transactions from multiple producers with the same transactional.id.
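The fencing mechanism hinges on a stable transactional.id: the broker tracks an epoch per id, each initTransactions() call bumps that epoch, and any producer instance still holding an older epoch is rejected with ProducerFencedException. A configuration sketch (the id value is a placeholder):

```properties
# producer.properties (sketch) -- EOS producer identity
# The transactional.id must be stable across restarts of the same logical
# producer; a restarted instance calling initTransactions() with the same id
# bumps the broker-side epoch and fences any older instance.
transactional.id=order-service-producer-1
enable.idempotence=true
acks=all
```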