Kafka · DevOps · ~10 mins

Disaster recovery planning in Kafka - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (Easy)

Complete the code to set the replication factor for a Kafka topic to ensure data redundancy.

Kafka
kafka-topics --create --topic my-topic --partitions 3 --replication-factor [1] --bootstrap-server localhost:9092
A. 3
B. 1
C. 0
D. 5
Common Mistakes
Setting replication factor to 1 disables redundancy.
Using 0 causes an error.
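For reference, a completed command might look like the sketch below, assuming a cluster of at least three brokers reachable at localhost:9092 (the replication factor cannot exceed the number of brokers):

```shell
# Create a topic with 3 partitions, each copied to 3 brokers.
# A replication factor of 3 tolerates the loss of up to 2 brokers.
kafka-topics --create \
  --topic my-topic \
  --partitions 3 \
  --replication-factor 3 \
  --bootstrap-server localhost:9092

# Verify the replica assignment after creation.
kafka-topics --describe --topic my-topic --bootstrap-server localhost:9092
```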
Task 2: Fill in the blank (Medium)

Complete the code to enable log compaction for a Kafka topic to help with disaster recovery.

Kafka
kafka-configs --alter --entity-type topics --entity-name my-topic --add-config [1]=compact --bootstrap-server localhost:9092
A. min.insync.replicas
B. compression.type
C. retention.ms
D. cleanup.policy
Common Mistakes
Using retention.ms instead affects how long logs are retained, not how they are cleaned up.
compression.type controls message compression, not compaction.
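As a sketch, the completed command sets cleanup.policy=compact so the topic retains at least the latest record per key, which helps when rebuilding state after a failure (broker address and topic name as in the question):

```shell
# Enable log compaction: Kafka keeps the most recent record for
# each key instead of deleting records by age alone.
kafka-configs --alter \
  --entity-type topics \
  --entity-name my-topic \
  --add-config cleanup.policy=compact \
  --bootstrap-server localhost:9092

# Confirm the topic-level override took effect.
kafka-configs --describe \
  --entity-type topics \
  --entity-name my-topic \
  --bootstrap-server localhost:9092
```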
Task 3: Fill in the blank (Hard)

Fix the error in the command to describe the configuration of a Kafka topic for disaster recovery.

Kafka
kafka-topics --describe --topic [1] --bootstrap-server localhost:9092
A. describe
B. topic-name
C. my-topic
D. localhost
Common Mistakes
Using 'describe' as a topic name.
Using 'localhost' instead of a topic name.
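The corrected command places the actual topic name, not a subcommand or a hostname, after --topic:

```shell
# Describe my-topic: prints partition count, replication factor,
# and per-partition leader, replicas, and in-sync replicas (ISR),
# all useful for checking redundancy before a failover.
kafka-topics --describe --topic my-topic --bootstrap-server localhost:9092
```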
Task 4: Fill in the blank (Hard)

Fill both blanks to create a Kafka consumer group with a specific group id and enable auto commit for disaster recovery.

Kafka
kafka-console-consumer --topic my-topic --bootstrap-server localhost:9092 --group [1] --enable-auto-commit [2]
A. recovery-group
B. true
C. false
D. default-group
Common Mistakes
Using 'false' disables auto commit, risking offset loss.
Using generic group ids may cause conflicts.
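A completed version might look like the sketch below, following the question's flags. Note that stock Kafka's kafka-console-consumer usually passes this setting through --consumer-property rather than a dedicated flag, so both forms are shown:

```shell
# Consume under a descriptive group id; committed offsets let the
# group resume from where it left off after a restart.
kafka-console-consumer \
  --topic my-topic \
  --bootstrap-server localhost:9092 \
  --group recovery-group \
  --enable-auto-commit true

# Equivalent using the generic consumer-property passthrough:
kafka-console-consumer \
  --topic my-topic \
  --bootstrap-server localhost:9092 \
  --group recovery-group \
  --consumer-property enable.auto.commit=true
```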
Task 5: Fill in the blank (Hard)

Fill all three blanks to configure a Kafka producer with retries, acks, and idempotence for disaster recovery.

Kafka
kafka-console-producer --topic my-topic --bootstrap-server localhost:9092 --producer-property retries=[1] --producer-property acks=[2] --producer-property enable.idempotence=[3]
A. 0
B. all
C. true
D. 1
Common Mistakes
Setting retries to 0 disables retries.
Using acks=1 risks data loss if leader fails.
Disabling idempotence can cause duplicates.
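A completed command might look like this sketch, using the values from the option list. With acks=all the leader waits for all in-sync replicas, retries resend on transient failures, and idempotence suppresses the duplicates those retries could introduce (production configs often use a much larger retries value; the default in recent Kafka versions is effectively unbounded):

```shell
# Durable producer settings for disaster recovery:
#   retries=1             resend once on a transient failure
#   acks=all              wait for all in-sync replicas to confirm
#   enable.idempotence    deduplicate records resent by retries
kafka-console-producer \
  --topic my-topic \
  --bootstrap-server localhost:9092 \
  --producer-property retries=1 \
  --producer-property acks=all \
  --producer-property enable.idempotence=true
```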