Kafka · Concept · Beginner · 4 min read

Event Sourcing with Kafka: What It Is and How It Works

Event sourcing with Kafka means storing all changes to an application's state as a sequence of events in Kafka topics. Instead of saving only the current state, you save every event that led to it, allowing you to rebuild the state anytime by replaying these events.
⚙️

How It Works

Imagine you keep a diary where you write down every action you take during the day instead of just noting your final mood. Event sourcing with Kafka works similarly: every change in your application is recorded as an event in Kafka. These events are stored in order, like pages in your diary.

When you want to know the current state, you read all the events from the start and apply them one by one. Kafka acts like a reliable diary that keeps these events safe and in order. Note that Kafka guarantees ordering only within a partition, so events for one entity, such as a single bank account, should share a partition key. This way, you can always rebuild the exact state of your system by replaying the events.

This approach helps in tracking history, debugging, and recovering from errors because you have a full record of what happened, not just the end result.
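Before bringing Kafka into the picture, the replay idea itself fits in a few lines of plain Python. Here is a minimal sketch (the event shapes and the `rebuild_state` name are illustrative, not part of any Kafka API):

```python
def rebuild_state(events):
    """Replay events in order to reconstruct the current balance."""
    balance = 0
    for event in events:
        if event['type'] == 'deposit':
            balance += event['amount']
        elif event['type'] == 'withdrawal':
            balance -= event['amount']
    return balance

# The "diary": an ordered log of everything that happened
event_log = [
    {'type': 'deposit', 'amount': 100},
    {'type': 'withdrawal', 'amount': 30},
]

print(rebuild_state(event_log))  # 70
```

Kafka's role in the full example below is simply to be the durable, ordered home for this event log.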

💻

Example

This example shows how to produce and consume events in Kafka to implement event sourcing for a simple bank account balance.

```python
from kafka import KafkaProducer, KafkaConsumer
import json

# Producer: send events representing deposits and withdrawals.
# Keying every event with the account ID keeps one account's events
# in a single partition, which preserves their order.
producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    key_serializer=lambda k: k.encode('utf-8'),
    value_serializer=lambda v: json.dumps(v).encode('utf-8'),
)

# Events to send
events = [
    {'type': 'deposit', 'amount': 100},
    {'type': 'withdrawal', 'amount': 30},
    {'type': 'deposit', 'amount': 50},
]

for event in events:
    producer.send('bank-account-events', key='account-1', value=event)
producer.flush()

# Consumer: read events from the beginning and rebuild the balance
consumer = KafkaConsumer(
    'bank-account-events',
    bootstrap_servers='localhost:9092',
    value_deserializer=lambda m: json.loads(m.decode('utf-8')),
    auto_offset_reset='earliest',
    enable_auto_commit=True,
    group_id='balance-calculator',
    consumer_timeout_ms=5000,  # stop iterating once no new events arrive
)

balance = 0
for message in consumer:
    event = message.value
    if event['type'] == 'deposit':
        balance += event['amount']
    elif event['type'] == 'withdrawal':
        balance -= event['amount']
    print(f"Processed event: {event}, Current balance: {balance}")
```
Output

Processed event: {'type': 'deposit', 'amount': 100}, Current balance: 100
Processed event: {'type': 'withdrawal', 'amount': 30}, Current balance: 70
Processed event: {'type': 'deposit', 'amount': 50}, Current balance: 120
🎯

When to Use

Use event sourcing with Kafka when you need a full history of changes, not just the current state. This is helpful for auditing, debugging, and recovering data after failures.

Common real-world uses include financial systems like banking, where every transaction matters, or e-commerce platforms tracking orders and inventory changes. It also suits systems that require replaying events to rebuild state or to synchronize data across services.

Key Points

  • Event sourcing stores all changes as events, not just the final state.
  • Kafka is used as a durable, ordered event store.
  • You can rebuild system state anytime by replaying events.
  • It improves auditability, debugging, and fault recovery.
  • Best for systems needing full history and event replay.

Key Takeaways

Event sourcing with Kafka saves every change as an event in Kafka topics.
You rebuild application state by replaying these stored events in order.
Kafka provides a reliable, ordered, and durable event storage system.
This approach is ideal for systems needing full history and audit trails.
Use event sourcing when you want easy recovery and debugging from event logs.