
Event sourcing pattern in Kafka - Commands & Configuration

Introduction
Event sourcing is a pattern that persists every change to application data as an ordered sequence of events. Instead of storing only the current state, it keeps the full change history, so the state can be rebuilt at any time by replaying those events. This makes auditing and recovery straightforward. Event sourcing is a good fit:
When you want to keep a full history of changes for audit or debugging.
When you need to rebuild the current state from past events after a failure.
When multiple services need to react to changes in data asynchronously.
When you want to decouple your data storage from business logic.
When you want to implement complex workflows that depend on event sequences.
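The core idea behind the list above can be sketched in plain Python. This is illustrative only; the names (`EventStore`, `apply_event`) are invented for this sketch and are not a Kafka API:

```python
# Minimal event-sourcing sketch: store every change, rebuild state by replay.
# All names here (EventStore, apply_event) are illustrative, not a Kafka API.

def apply_event(state, event):
    """Apply one change event to the current state (an account balance)."""
    if event["type"] == "deposit":
        return state + event["amount"]
    if event["type"] == "withdraw":
        return state - event["amount"]
    return state

class EventStore:
    def __init__(self):
        self.events = []           # append-only log, like a Kafka topic

    def append(self, event):
        self.events.append(event)  # never overwrite state, only append events

    def rebuild_state(self):
        """Replay every event from the beginning to recover the current state."""
        state = 0
        for event in self.events:
            state = apply_event(state, event)
        return state

store = EventStore()
store.append({"type": "deposit", "amount": 100})
store.append({"type": "withdraw", "amount": 30})
store.append({"type": "deposit", "amount": 5})
print(store.rebuild_state())  # 75
```

Because the log is append-only, the same replay can also rebuild the state as of any past point, simply by stopping partway through the event list.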
Config File - server.properties
server.properties
broker.id=1
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs
num.partitions=3
log.retention.hours=168
zookeeper.connect=localhost:2181

This is a basic Kafka broker configuration file.

broker.id: Unique ID for this Kafka broker.

listeners: Network address where Kafka listens for connections.

log.dirs: Directory to store Kafka logs (events).

num.partitions: Number of partitions per topic, enabling parallelism.

log.retention.hours: How long Kafka keeps event logs before deleting.

zookeeper.connect: Address of Zookeeper managing Kafka cluster metadata.
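Retention matters for event sourcing: once events age out, the history needed to rebuild state is gone. The sketch below models the effect of `log.retention.hours` in plain Python; it is simplified, since real Kafka deletes whole log segments, not individual records:

```python
import time

RETENTION_HOURS = 168  # matches log.retention.hours above (7 days)
RETENTION_SECONDS = RETENTION_HOURS * 3600

def prune_expired(events, now):
    """Keep only events inside the retention window.
    Simplified model: real Kafka deletes whole log segments, not single records."""
    return [e for e in events if now - e["ts"] <= RETENTION_SECONDS]

now = time.time()
events = [
    {"ts": now - 200 * 3600, "value": "too old"},  # older than 168 hours
    {"ts": now - 10 * 3600, "value": "recent"},
]
print(prune_expired(events, now))  # only the "recent" event survives
```

If replaying the full history is a hard requirement, retention must be long enough for your use case (or log compaction / snapshotting used alongside it).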

Commands
Create a Kafka topic named 'event-sourcing-topic' with 3 partitions to store event streams for event sourcing.
Terminal
kafka-topics.sh --create --topic event-sourcing-topic --bootstrap-server localhost:9092 --partitions 3 --replication-factor 1
Expected Output
Created topic event-sourcing-topic.
--topic - Name of the Kafka topic to create
--partitions - Number of partitions for parallelism
--replication-factor - Number of copies of data for fault tolerance
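Why partitions enable parallelism: Kafka routes each keyed message to a partition by hashing its key, so events for the same key stay in order on one partition while different partitions are consumed in parallel. The sketch below uses Python's built-in `hash` purely for illustration; Kafka's default partitioner actually uses murmur2:

```python
NUM_PARTITIONS = 3  # matches --partitions 3 above

def pick_partition(key: str, num_partitions: int) -> int:
    """Simplified partitioner (Kafka's default uses murmur2, not Python's hash).
    Events with the same key always land in the same partition, preserving
    per-key ordering while the partitions are consumed in parallel."""
    return hash(key) % num_partitions

# All events for one entity go to one partition, so their order is preserved.
p1 = pick_partition("account-42", NUM_PARTITIONS)
p2 = pick_partition("account-42", NUM_PARTITIONS)
assert p1 == p2
```

This is also why a single-partition topic (see Common Mistakes below) caps throughput: only one consumer in a group can read it at a time.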
Start a producer to send events (messages) to the 'event-sourcing-topic'. Each message represents a change in the system.
Terminal
kafka-console-producer.sh --topic event-sourcing-topic --bootstrap-server localhost:9092
Expected Output
No output (command runs silently)
--topic - Topic to send messages to
--bootstrap-server - Kafka server address
Start a consumer to read all events from the beginning of the 'event-sourcing-topic'. This lets you rebuild the current state by processing all past events.
Terminal
kafka-console-consumer.sh --topic event-sourcing-topic --bootstrap-server localhost:9092 --from-beginning
Expected Output
event1
event2
event3
--topic - Topic to read messages from
--from-beginning - Read all messages from the start
--bootstrap-server - Kafka server address
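The effect of `--from-beginning` can be modeled with offsets. This is a simplified sketch; real consumers track offsets per partition via consumer groups:

```python
# An append-only topic log; list indices play the role of Kafka offsets.
log = ["created", "updated", "deleted", "restored"]

def consume(log, from_beginning: bool):
    """Simplified consumer: without from_beginning it starts at the latest
    offset and sees only future events; with it, the whole log is replayed."""
    start_offset = 0 if from_beginning else len(log)
    return log[start_offset:]

print(consume(log, from_beginning=True))   # all four past events, in order
print(consume(log, from_beginning=False))  # [] — past events are skipped
```

For event sourcing, replaying from offset 0 is what lets a fresh consumer reconstruct the current state from history.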
Key Concept

If you remember nothing else from this pattern, remember: store every change as an event so you can rebuild the current state anytime by replaying those events.

Common Mistakes
Creating a topic with only one partition
Limits parallel processing and scalability of event streams.
Create topics with multiple partitions to allow parallel consumers.
Not consuming events from the beginning
You miss past events needed to rebuild the full state.
Use the --from-beginning flag to read all events from the start.
Not setting proper log retention
Events may be deleted too soon, losing history needed for event sourcing.
Configure log.retention.hours to keep events long enough for your use case.
Summary
Create a Kafka topic with multiple partitions to store event streams.
Use a producer to send events representing changes to the topic.
Use a consumer with --from-beginning to read all events and rebuild state.