Microservices · System Design · ~10 mins

Three pillars (metrics, logs, traces) in Microservices - Scalability & System Analysis

Scalability Analysis - Three pillars (metrics, logs, traces)
Growth Table: Scaling Observability in Microservices
| Users/Traffic | Metrics | Logs | Traces |
|---|---|---|---|
| 100 users | Basic CPU, memory, and request counts collected on a few services | Logs stored locally as simple text files; manual inspection | Traces sampled at a low rate; few services instrumented |
| 10K users | Centralized metrics collection with Prometheus or similar; alerting added | Logs shipped to a central system (e.g., ELK stack); indexing starts | Distributed tracing enabled on key services; sampling rate increased |
| 1M users | High-cardinality metrics; long-term storage; aggregation and downsampling | Log volume grows; retention policies and archiving needed; indexing optimized | Traces collected for most requests; storage and query performance optimized |
| 100M users | Metrics sharded and federated; multi-tenant isolation; advanced anomaly detection | Logs stored in scalable object storage; hot and cold tiers; AI-based log analysis | Traces sampled intelligently; trace data linked with metrics and logs for root-cause analysis |
First Bottleneck

At small scale, logs stored locally become hard to manage and search as volume grows.

At medium scale, centralized logging systems face storage and indexing bottlenecks due to high log volume.

At large scale, trace data storage and query performance degrade because traces are large and complex.

Overall, the first bottleneck is usually the logging infrastructure because logs grow fastest and require heavy indexing.

Scaling Solutions
  • Metrics: Use aggregation, downsampling, and sharding; employ time-series databases optimized for high cardinality.
  • Logs: Implement centralized log management with scalable storage (e.g., Elasticsearch clusters, cloud object storage); apply log retention and archiving policies; use indexing and compression.
  • Traces: Use sampling strategies to reduce volume; store traces in specialized databases; correlate traces with metrics and logs for efficient debugging.
  • General: Use horizontal scaling for collectors and storage; apply caching and tiered storage; automate alerting and anomaly detection.
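The trace-sampling point above can be sketched in code. A common head-based approach is to derive the keep/drop decision deterministically from the trace ID, so every service in a request path makes the same choice without coordination. This is a minimal illustration (the function name `should_sample` and the modulus are hypothetical, not from a specific tracing library):

```python
import hashlib

def should_sample(trace_id: str, rate: float = 0.01) -> bool:
    """Deterministic head-based sampling sketch.

    Hashing the trace ID means all services on the request path
    reach the same keep/drop decision for a given trace, so sampled
    traces stay complete end to end.
    """
    # Map the trace ID to a stable integer in [0, 10_000).
    digest = int(hashlib.sha256(trace_id.encode()).hexdigest(), 16)
    return (digest % 10_000) < rate * 10_000
```

Because the decision is a pure function of the trace ID, a collector downstream can re-derive it, which is what makes this cheaper than tail-based sampling (which must buffer whole traces before deciding).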
Back-of-Envelope Cost Analysis

Assuming 1M users generating 10 requests/sec each:

  • Total requests: 10 million/sec
  • Metrics: 1-10 million data points/sec; requires high-throughput TSDB (e.g., Prometheus, Cortex)
  • Logs: Each request generates ~1KB logs -> ~10GB/sec raw logs; needs compression and tiered storage
  • Traces: Sampling 1% -> 100K traces/sec; each trace ~10KB -> ~1GB/sec storage
  • Network: High bandwidth needed for shipping logs and traces; consider local aggregation
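The estimates above are easy to sanity-check in a few lines. This sketch just encodes the stated assumptions (1M users at 10 req/s each, ~1 KB of logs per request, 1% trace sampling at ~10 KB per trace); the constants are from this section, not measured values:

```python
# Back-of-envelope observability volume, using the assumptions above.
USERS = 1_000_000
REQ_PER_USER_PER_SEC = 10
LOG_BYTES_PER_REQ = 1_000      # ~1 KB of logs per request
TRACE_SAMPLE_RATE = 0.01       # sample 1% of requests
TRACE_BYTES = 10_000           # ~10 KB per trace

requests_per_sec = USERS * REQ_PER_USER_PER_SEC            # total request rate
log_bytes_per_sec = requests_per_sec * LOG_BYTES_PER_REQ   # raw log volume
traces_per_sec = requests_per_sec * TRACE_SAMPLE_RATE      # sampled trace rate
trace_bytes_per_sec = traces_per_sec * TRACE_BYTES         # trace storage rate

print(f"requests/sec: {requests_per_sec:,}")               # 10,000,000
print(f"log volume:   {log_bytes_per_sec / 1e9:.0f} GB/s") # 10 GB/s
print(f"traces/sec:   {traces_per_sec:,.0f}")              # 100,000
print(f"trace volume: {trace_bytes_per_sec / 1e9:.0f} GB/s")  # 1 GB/s
```

Running numbers like these in an interview shows the interviewer that the bottleneck claim (logs dominate raw volume by roughly 10x over traces) follows directly from the assumptions.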
Interview Tip

Structure your scalability discussion by:

  1. Explaining the role of each pillar (metrics, logs, traces) in observability.
  2. Describing how data volume grows with users and requests.
  3. Identifying bottlenecks in storage, indexing, and query performance.
  4. Suggesting concrete scaling solutions like sampling, sharding, and tiered storage.
  5. Discussing trade-offs between data fidelity and cost.
Self Check

Your database handles 1000 QPS for logs. Traffic grows 10x to 10,000 QPS. What do you do first?

Answer: Implement log sampling or filtering to reduce volume, then scale the logging database horizontally with sharding or add replicas to handle increased write load.
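The "sample or filter first" step from the answer can be sketched as a severity-aware filter: always keep warnings and errors, probabilistically drop lower-severity lines. This is a hypothetical helper (the name `keep_log` and the severity list are illustrative, not a real library API):

```python
import random

def keep_log(level: str, sample_rate: float = 0.1) -> bool:
    """Severity-aware log sampling sketch.

    High-severity lines always pass; INFO/DEBUG lines are kept with
    probability sample_rate, cutting write load on the log store
    without losing the signals that matter for alerting.
    """
    if level in ("WARNING", "ERROR", "CRITICAL"):
        return True
    return random.random() < sample_rate
```

Applying a filter like this at the agent or collector, before logs hit the database, is what buys time to then shard the store or add replicas for the remaining write load.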

Key Result
Logging infrastructure is the first bottleneck as log volume grows fastest; scaling requires sampling, sharding, and tiered storage across metrics, logs, and traces.