# Why clustering provides horizontal scaling in Redis - Performance Analysis
We want to see how adding servers to a Redis cluster affects the work each server does when handling data. How does splitting keys across servers change the time to process requests?
Analyze the time complexity of these Redis cluster commands.
```
CLUSTER INFO
SET key1 value1
SET key2 value2
GET key1
GET key2
```
These commands store and retrieve keys in a Redis cluster; each key is routed to the server that owns it.
Look at what repeats when we add more keys and servers.
- Primary operation: Storing or retrieving a key on the correct server.
- How many times: Once per key, but keys are spread across servers.
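Routing a key "to the correct server" works by hashing the key into one of 16384 hash slots, each of which is owned by exactly one server. Below is a minimal sketch of that mapping, assuming the CRC16 (XMODEM) function that the Redis Cluster specification describes; it omits the `{hash-tag}` rule real clients apply before hashing:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16, XMODEM variant (polynomial 0x1021), as used for Redis Cluster slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str, total_slots: int = 16384) -> int:
    """Map a key to one of the cluster's 16384 hash slots."""
    return crc16_xmodem(key.encode()) % total_slots

# Each key lands in a fixed slot, so the same key always routes to the same server.
for k in ("key1", "key2"):
    print(k, "-> slot", key_slot(k))
```

Because the slot is a pure function of the key, any client can compute the target server locally; no central coordinator is consulted per request.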
As we add more keys, each server handles fewer keys because data is split.
| Keys (n) | Servers (k) | Approx. operations per server |
|---|---|---|
| 10 | 1 | 10 |
| 100 | 5 | ~20 |
| 1000 | 10 | ~100 |
Pattern observation: Adding servers splits the work, so each server does less work even if total keys grow.
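This pattern can be checked with a quick simulation (a hypothetical sketch, not a real cluster): distribute n keys across k servers by hash and count how many land on each. Assuming a reasonably uniform hash, each server's load stays close to n/k:

```python
import collections

def per_server_load(n_keys: int, n_servers: int) -> list[int]:
    """Assign each key to a server by hash and count keys per server."""
    counts = collections.Counter(
        hash(f"key{i}") % n_servers for i in range(n_keys)
    )
    return [counts[s] for s in range(n_servers)]

# Mirror the table's scenarios: total keys grow, but per-server load tracks n/k.
for n, k in ((10, 1), (100, 5), (1000, 10)):
    loads = per_server_load(n, k)
    print(f"{n} keys over {k} servers: max per-server load {max(loads)} (ideal n/k = {n // k})")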
Time Complexity: O(n / k), where n is the number of keys and k is the number of servers.
The work per server grows with the number of keys divided by the number of servers: doubling the servers roughly halves each server's load.
[X] Wrong: "Adding more servers always makes the system twice as fast regardless of data size."
[OK] Correct: Adding servers does not guarantee a proportional speedup. If the data set is small, the overhead of splitting it can outweigh the gain, and network hops and cluster coordination add their own costs.
Understanding how clustering spreads work helps you explain scaling in real systems. It shows you can think about how adding resources changes performance.
What if we added more servers but kept the same number of keys? How would the time complexity per server change?