Complete the code to set the replication factor for a Kafka topic to ensure data redundancy.
kafka-topics --create --topic my-topic --partitions 3 --replication-factor [1] --bootstrap-server localhost:9092
Setting the replication factor to 3 ensures that each partition has 3 copies, each on a different broker; if one broker fails, the remaining replicas keep the partition available, which helps in disaster recovery.
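The CLI command needs a running broker, so as an illustration here is a small stdlib-Python sketch (the helper name `build_create_topic_cmd` is ours, not part of Kafka's tooling) that assembles the create command and enforces the rule that Kafka rejects a replication factor larger than the number of available brokers:

```python
# Illustrative helper: builds the kafka-topics create command and checks
# that the requested replication factor does not exceed the broker count,
# since Kafka rejects such a request at topic-creation time.
def build_create_topic_cmd(topic, partitions, replication_factor,
                           broker_count, bootstrap="localhost:9092"):
    if replication_factor > broker_count:
        raise ValueError("replication factor cannot exceed broker count")
    return (f"kafka-topics --create --topic {topic} "
            f"--partitions {partitions} "
            f"--replication-factor {replication_factor} "
            f"--bootstrap-server {bootstrap}")

# A 3-broker cluster can host replication factor 3.
print(build_create_topic_cmd("my-topic", 3, 3, broker_count=3))
```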
Complete the code to enable log compaction for a Kafka topic to help with disaster recovery.
kafka-configs --alter --entity-type topics --entity-name my-topic --add-config [1]=compact --bootstrap-server localhost:9092
Setting cleanup.policy to compact enables log compaction, which retains at least the most recent record for each key; replaying the compacted topic lets a consumer rebuild its latest state after a failure, which helps in disaster recovery.
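A minimal toy model in Python (our own sketch, not Kafka's implementation) shows what a compacted log guarantees: for each key, only the most recent value needs to survive, so replaying it restores the latest state per key.

```python
# Toy model of log compaction: keep only the latest value per key.
def compact(records):
    latest = {}
    for key, value in records:  # records arrive in offset order
        latest[key] = value     # later values overwrite earlier ones
    return list(latest.items())

log = [("user-1", "a"), ("user-2", "b"), ("user-1", "c")]
print(compact(log))  # user-1 keeps only its latest value "c"
```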
Complete the command to describe the configuration of a Kafka topic for disaster recovery.
kafka-topics --describe --topic [1] --bootstrap-server localhost:9092
The topic name must be supplied to describe its configuration; 'my-topic' is the example topic used here.
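Before passing a topic name to a command like this, it can be worth validating it. As a hedged sketch (the helper `is_valid_topic_name` is ours), this stdlib-Python check follows Kafka's topic-name rules: legal characters are letters, digits, '.', '_' and '-', the maximum length is 249, and "." and ".." are reserved.

```python
import re

# Illustrative validator based on Kafka's topic-name rules.
def is_valid_topic_name(name):
    if not name or name in (".", ".."):
        return False
    if len(name) > 249:
        return False
    # Only a-z, A-Z, 0-9, '.', '_' and '-' are allowed.
    return re.fullmatch(r"[a-zA-Z0-9._-]+", name) is not None

print(is_valid_topic_name("my-topic"))  # True
```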
Fill both blanks to create a Kafka consumer group with a specific group id and enable auto commit for disaster recovery.
kafka-console-consumer --topic my-topic --bootstrap-server localhost:9092 --group [1] --consumer-property enable.auto.commit=[2]
Using a specific group id like 'recovery-group' lets Kafka track the group's consumer offsets. Setting enable.auto.commit to 'true' (kafka-console-consumer has no --enable-auto-commit flag; consumer settings are passed via --consumer-property) makes Kafka save offsets automatically, so a restarted consumer can resume where it left off.
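Why committed offsets matter for recovery can be sketched with a toy stdlib-Python model (our own, not Kafka's implementation): the group periodically commits the next offset to read, and a restarted consumer resumes from the last committed position.

```python
# Toy model of consumer-group offset commits.
committed = {}  # (group, topic, partition) -> next offset to read

def auto_commit(group, topic, partition, next_offset):
    """Periodically called when auto commit is enabled."""
    committed[(group, topic, partition)] = next_offset

def resume_position(group, topic, partition):
    """A restarted consumer resumes from the last committed offset,
    defaulting to the beginning if the group never committed."""
    return committed.get((group, topic, partition), 0)

auto_commit("recovery-group", "my-topic", 0, 42)
print(resume_position("recovery-group", "my-topic", 0))  # 42
```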
Fill all three blanks to configure a Kafka producer with retries, acks, and idempotence for disaster recovery.
kafka-console-producer --topic my-topic --bootstrap-server localhost:9092 --producer-property retries=[1] --producer-property acks=[2] --producer-property enable.idempotence=[3]
Setting retries to at least 1 lets the producer retry failed sends. Setting acks to 'all' waits for the full in-sync replica set to acknowledge each write. Enabling idempotence with 'true' prevents those retries from producing duplicate messages; note that idempotence itself requires acks=all and retries greater than 0. All three settings matter for disaster recovery.
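The constraints these three settings place on one another can be checked up front. A hedged stdlib-Python sketch (the helper `validate_producer_config` is ours): Kafka's idempotent producer requires acks=all and retries greater than 0, and refuses to start otherwise.

```python
# Illustrative check of the constraints on an idempotent Kafka producer:
# enable.idempotence=true requires acks=all and retries > 0.
def validate_producer_config(cfg):
    if cfg.get("enable.idempotence") == "true":
        if cfg.get("acks") != "all":
            raise ValueError("idempotence requires acks=all")
        if int(cfg.get("retries", 0)) <= 0:
            raise ValueError("idempotence requires retries > 0")
    return cfg

cfg = {"retries": "1", "acks": "all", "enable.idempotence": "true"}
print(validate_producer_config(cfg) is cfg)  # True
```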