
Observer pattern in LLD - Scalability & System Analysis

Scalability Analysis - Observer pattern
Growth Table: Observer Pattern Scaling
| Observers | Events per Second | Notifications per Second | Latency Impact | Resource Usage |
|---|---|---|---|---|
| 100 | 1,000 | 100,000 | Low (~ms) | Low CPU & memory |
| 10,000 | 10,000 | 100 million | Moderate (tens of ms) | High CPU, memory, network |
| 1,000,000 | 100,000 | 100 billion | High (seconds) | Very high CPU, memory, network |
| 100,000,000 | 1,000,000 | 100 trillion | Unusable (minutes+) | Extremely high resource usage; system overload |
First Bottleneck

The first bottleneck is the notification dispatch system. As the number of observers grows, sending updates to all observers becomes expensive in CPU, memory, and network bandwidth. The system struggles to handle the volume of notifications in real-time.
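
To make the cost concrete, here is a minimal sketch of the classic observer pattern (class and method names are illustrative). Dispatch is synchronous and O(observers) per event, which is exactly why it becomes the first bottleneck as observer counts grow:

```python
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # One call per observer per event: N observers and E events/sec
        # means N * E notifications/sec, all on the publisher's thread.
        for observer in self._observers:
            observer.update(event)

class CountingObserver:
    def __init__(self):
        self.received = 0

    def update(self, event):
        self.received += 1

subject = Subject()
observers = [CountingObserver() for _ in range(100)]
for obs in observers:
    subject.attach(obs)

for event_id in range(1000):
    subject.notify(event_id)

# Matches the first table row: 100 observers x 1,000 events = 100,000 notifications
total = sum(obs.received for obs in observers)
print(total)  # 100000
```

Because `notify` loops over every observer inline, the publisher pays the full dispatch cost before it can produce the next event.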

Scaling Solutions
  • Batch notifications: Group multiple events before notifying observers to reduce message count.
  • Hierarchical observers: Use intermediate aggregators to reduce direct notifications.
  • Asynchronous messaging: Use message queues or event buses to decouple event generation and notification delivery.
  • Filtering: Notify only observers interested in specific event types to reduce unnecessary notifications.
  • Horizontal scaling: Distribute notification dispatch across multiple servers.
  • Caching: Cache event states to avoid redundant notifications.
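
As one example of the filtering idea, observers can subscribe to topics so that each event reaches only the observers that care about it. This is a hedged sketch; `EventBus`, `subscribe`, and `publish` are illustrative names, not a specific library's API:

```python
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Only observers subscribed to this topic are notified, cutting
        # dispatch cost from O(all observers) to O(interested observers).
        for callback in self._subscribers[topic]:
            callback(payload)

bus = EventBus()
price_updates = []
order_updates = []
bus.subscribe("price", price_updates.append)
bus.subscribe("order", order_updates.append)

bus.publish("price", {"symbol": "AAPL", "px": 190.0})
bus.publish("price", {"symbol": "MSFT", "px": 410.0})
bus.publish("order", {"id": 1})

print(len(price_updates), len(order_updates))  # 2 1
```

If most observers only care about a small slice of event types, topic filtering alone can shrink the notification volume by orders of magnitude.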
Back-of-Envelope Cost Analysis

At 10,000 observers and 10,000 events/sec:

  • Notifications per second = 10,000 events * 10,000 observers = 100 million notifications/sec
  • Assuming 1 KB per notification, bandwidth needed = 100 million KB/sec ≈ 100 GB/sec (far beyond what a single server can push)
  • CPU and memory usage to serialize and send notifications will be very high.
  • Storage depends on event persistence: retaining every event at this rate requires large, horizontally scalable storage.
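
The arithmetic above is worth checking explicitly (using decimal units, 1 KB = 1,000 bytes):

```python
# Back-of-envelope check for the 10,000-observer scenario.
events_per_sec = 10_000
observers = 10_000
notifications_per_sec = events_per_sec * observers  # 100 million/sec

bytes_per_notification = 1_000  # assumed 1 KB payload
bandwidth_bytes_per_sec = notifications_per_sec * bytes_per_notification

print(notifications_per_sec)              # 100000000
print(bandwidth_bytes_per_sec / 1e9)      # 100.0 (GB/sec)
```

A single 10 Gbit/s NIC moves about 1.25 GB/s, so this load is roughly 80 NICs' worth of traffic before accounting for protocol overhead.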
Interview Tip

Start by explaining the basic observer pattern and its use case. Then discuss how scaling the number of observers and events affects system resources. Identify the bottleneck clearly and propose practical solutions like batching, filtering, and asynchronous messaging. Use real numbers to show understanding of system limits.

Self Check

Your notification system handles 1000 QPS with 1000 observers. Traffic grows 10x to 10,000 QPS. What do you do first?

Answer: Implement batching or filtering to reduce the number of notifications sent per event, or introduce asynchronous messaging to decouple event generation from notification delivery. This reduces CPU, memory, and network load before scaling hardware.
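
The decoupling step in that answer can be sketched with an in-process queue and a worker thread. A production system would use a broker such as Kafka or RabbitMQ, but the principle is the same; all names here are illustrative:

```python
import queue
import threading

event_queue = queue.Queue()
delivered = []

def dispatch_worker():
    # Drains the queue independently of producers, so a burst of events
    # never blocks the code that generates them.
    while True:
        event = event_queue.get()
        if event is None:  # sentinel value signals shutdown
            break
        delivered.append(event)
        event_queue.task_done()

worker = threading.Thread(target=dispatch_worker)
worker.start()

# Producers just enqueue and return immediately.
for i in range(1000):
    event_queue.put(i)

event_queue.put(None)
worker.join()
print(len(delivered))  # 1000
```

The queue absorbs traffic spikes, and delivery capacity can then be scaled by adding workers without touching the event producers.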

Key Result
The observer pattern scales poorly with large numbers of observers due to notification dispatch overhead; batching, filtering, and asynchronous messaging are key to scaling.