Kafka · DevOps · ~10 mins

Event sourcing pattern in Kafka - Step-by-Step Execution

Process Flow - Event sourcing pattern
1. Command received
2. Create event
3. Append event to Kafka topic
4. Event stored
5. Update state by replaying events
6. Respond to query or next command
The event sourcing pattern stores all changes as events in Kafka, then rebuilds state by replaying these events.
Execution Sample
# Producer side: append the event to the 'orders' topic
producer.send('orders', orderCreatedEvent)

# Consumer side: read events back and fold each one into the state
consumer.subscribe(['orders'])
for event in consumer:
    state.apply(event)  # rebuild state one event at a time
    print(state)
This code sends an event to Kafka; a consumer then reads events back and applies each one, rebuilding the state step by step.
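The same loop can be sketched end to end without a running Kafka cluster. In this minimal, self-contained sketch, a plain Python list stands in for the 'orders' topic and an illustrative OrderState class plays the role of state.apply above; the class and event shapes are assumptions for the example, not part of any Kafka library.

```python
class OrderState:
    """Illustrative state holder; apply() folds one event into the state."""

    def __init__(self):
        self.data = {}

    def apply(self, event):
        # Each event carries only the fields it changes; merge them in.
        self.data.update(event["payload"])
        return self.data


topic = []  # in-memory stand-in for the 'orders' Kafka topic

# Producer side: append events instead of overwriting state.
topic.append({"type": "orderCreated",
              "payload": {"orderId": 123, "status": "created"}})
topic.append({"type": "orderUpdated",
              "payload": {"status": "shipped"}})

# Consumer side: rebuild state by replaying every event in order.
state = OrderState()
for event in topic:
    print(state.apply(event))
# Final state: {'orderId': 123, 'status': 'shipped'}
```

The key design point is that the topic only ever grows; the current state is always a fold over the event history, never a value stored on its own.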
Process Table

| Step | Action | Event Produced | Kafka Topic State | State After Event | Output |
|------|--------|----------------|-------------------|-------------------|--------|
| 1 | Receive new order command | orderCreatedEvent | [] | {} | No output |
| 2 | Produce event to Kafka | orderCreatedEvent | [orderCreatedEvent] | {} | No output |
| 3 | Consumer reads orderCreatedEvent | orderCreatedEvent | [orderCreatedEvent] | {} | No output |
| 4 | Apply event to state | orderCreatedEvent | [orderCreatedEvent] | {orderId: 123, status: 'created'} | {orderId: 123, status: 'created'} |
| 5 | Print updated state | orderCreatedEvent | [orderCreatedEvent] | {orderId: 123, status: 'created'} | {orderId: 123, status: 'created'} |
| 6 | Receive update command | orderUpdatedEvent | [orderCreatedEvent] | {orderId: 123, status: 'created'} | No output |
| 7 | Produce update event | orderUpdatedEvent | [orderCreatedEvent, orderUpdatedEvent] | {orderId: 123, status: 'created'} | No output |
| 8 | Consumer reads orderUpdatedEvent | orderUpdatedEvent | [orderCreatedEvent, orderUpdatedEvent] | {orderId: 123, status: 'created'} | No output |
| 9 | Apply update event to state | orderUpdatedEvent | [orderCreatedEvent, orderUpdatedEvent] | {orderId: 123, status: 'shipped'} | {orderId: 123, status: 'shipped'} |
| 10 | Print updated state | orderUpdatedEvent | [orderCreatedEvent, orderUpdatedEvent] | {orderId: 123, status: 'shipped'} | {orderId: 123, status: 'shipped'} |
| 11 | No more events | - | [orderCreatedEvent, orderUpdatedEvent] | {orderId: 123, status: 'shipped'} | Stop processing |
💡 No more events to process, state fully updated
Status Tracker

| Variable | Start | After 1 | After 2 | After 3 | After 4 | Final |
|----------|-------|---------|---------|---------|---------|-------|
| Kafka Topic | [] | [orderCreatedEvent] | [orderCreatedEvent] | [orderCreatedEvent, orderUpdatedEvent] | [orderCreatedEvent, orderUpdatedEvent] | [orderCreatedEvent, orderUpdatedEvent] |
| State | {} | {} | {orderId: 123, status: 'created'} | {orderId: 123, status: 'created'} | {orderId: 123, status: 'shipped'} | {orderId: 123, status: 'shipped'} |
Key Moments - 3 Insights
Why do we store events instead of just the final state?
As the Process Table shows, events are appended to Kafka and state is rebuilt by applying each one; storing events rather than the final state lets us track every change and rebuild state at any time.
How does the state get updated after producing an event?
The consumer reads the event from Kafka (see steps 3 and 8) and applies it to the state, updating it step-by-step.
What happens if the consumer crashes and restarts?
It can replay all events from the Kafka topic (the full event list shown in the Status Tracker) to rebuild the current state exactly as before.
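The crash-and-restart answer above can be demonstrated directly. In this sketch, the event log and the rebuild() helper are illustrative stand-ins for a Kafka topic being re-read from the beginning (offset 0); a fresh consumer folding the same log arrives at exactly the state the old one had.

```python
# Two events, oldest first, mirroring the Process Table.
event_log = [
    {"orderId": 123, "status": "created"},
    {"status": "shipped"},
]


def rebuild(events):
    """Fold every event into an empty state, oldest first."""
    state = {}
    for event in events:
        state.update(event)
    return state


state_before_crash = rebuild(event_log)
# ...consumer crashes, restarts, and replays the topic from offset 0...
state_after_restart = rebuild(event_log)
assert state_after_restart == state_before_crash  # identical state
```

Recovery works precisely because the state is a pure function of the event history: as long as the log survives, any consumer can reconstruct the state.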
Visual Quiz - 3 Questions
Test your understanding
Looking at the Process Table at step 4, what is the state after applying the first event?
A. {orderId: 123, status: 'created'}
B. {}
C. {orderId: 123, status: 'shipped'}
D. []
💡 Hint
Check the 'State After Event' column at step 4 in the Process Table.
At which step does the Kafka topic contain two events?
A. Step 2
B. Step 4
C. Step 7
D. Step 10
💡 Hint
Look at the 'Kafka Topic State' column to see when two events are present
If the consumer did not apply events, what would the state be at the end?
A. {orderId: 123, status: 'shipped'}
B. {}
C. {orderId: 123, status: 'created'}
D. []
💡 Hint
Refer to the Status Tracker to see how the 'State' variable changes as events are applied.
Concept Snapshot
Event sourcing stores all changes as events in Kafka.
Commands produce events appended to Kafka topics.
Consumers read events and update state by replaying them.
State is rebuilt from event history, not stored directly.
This allows full audit and recovery by replaying events.
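The audit point in the snapshot follows from the same event log: because changes are appended rather than overwritten, the full history of an order can be listed at any time. A small illustrative sketch over the same two-event log (the event shapes and offsets here are assumptions for the example):

```python
# The same two events from the walkthrough, in append order.
event_log = [
    {"type": "orderCreated", "payload": {"orderId": 123, "status": "created"}},
    {"type": "orderUpdated", "payload": {"status": "shipped"}},
]

# Walk the log to produce an audit trail: every change, in order,
# with its position (analogous to a Kafka offset).
for offset, event in enumerate(event_log):
    print(f"offset {offset}: {event['type']} -> {event['payload']}")
```

A state-only store could tell you the order is 'shipped'; only the event log can also tell you that it was 'created' first and when each transition happened.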
Full Transcript
The event sourcing pattern means every change is saved as an event in Kafka. When a command arrives, it creates an event and sends it to Kafka. The event is stored in a Kafka topic. A consumer reads these events one by one and updates the application state by applying each event. This way, the current state is always rebuilt from the list of events. If the consumer stops, it can restart and replay all events to arrive at the same state. This pattern keeps a full history and makes recovery easy.