SCAN for safe key iteration in Redis - Time & Space Complexity
When a Redis database holds many keys, it matters how long it takes to iterate over all of them safely. Specifically, we want to know how the time to scan every key grows as the number of keys increases.
Analyze the time complexity of the following Redis SCAN command usage.
```lua
local cursor = "0"
repeat
  -- SCAN returns { next_cursor, batch_of_keys }
  local result = redis.call('SCAN', cursor, 'COUNT', 100)
  cursor = result[1]
  local keys = result[2]
  -- process keys here
until cursor == "0"
```
Each call returns a cursor and one batch of keys; iteration is complete when the cursor comes back to "0". Because each call does only a small, bounded amount of work, the server is never blocked for long (unlike KEYS, which walks the entire keyspace in one blocking call).
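The same loop can be sketched in Python. The `toy_scan` function below is a hypothetical stand-in that mimics SCAN's cursor contract (it returns a next cursor plus one batch, with cursor "0" signalling completion) over an in-memory key list; real SCAN walks the server's hash table with a reverse-binary cursor, but the iteration pattern on the client side is the same.

```python
def toy_scan(keys, cursor, count=100):
    """Toy stand-in for Redis SCAN: returns (next_cursor, batch).

    The cursor here is just an index into a key list; a returned
    cursor of "0" means iteration is complete.
    """
    start = int(cursor)
    batch = keys[start:start + count]
    next_cursor = start + count
    return ("0" if next_cursor >= len(keys) else str(next_cursor), batch)

# The same repeat/until loop as the Lua snippet, in Python.
all_keys = [f"key:{i}" for i in range(1000)]
cursor, seen, calls = "0", [], 0
while True:
    cursor, batch = toy_scan(all_keys, cursor, count=100)
    calls += 1
    seen.extend(batch)  # "process keys here"
    if cursor == "0":
        break

print(calls)      # 10 calls for 1000 keys at COUNT=100
print(len(seen))  # every key visited exactly once
```

Each iteration handles one small batch, so no single step touches the whole keyspace at once.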
Look for repeated actions in the code.
- Primary operation: Calling SCAN repeatedly to get batches of keys.
- How many times: Roughly total keys ÷ COUNT. Note that COUNT is only a hint to Redis, so actual batch sizes vary around it.
As the number of keys grows, the number of SCAN calls grows roughly in proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 1 call (small number of keys) |
| 100 | About 1 call (batch size covers all keys) |
| 1000 | About 10 calls (100 keys per call) |
Pattern observation: The number of SCAN calls grows linearly with n ÷ COUNT. Since COUNT is a fixed constant, both the number of calls and the total keys processed grow linearly with n.
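The table's estimates follow from a ceiling division: with a batch-size hint of COUNT, a full pass needs about ⌈n / COUNT⌉ calls. A quick check, assuming the idealized case where each call returns exactly COUNT keys:

```python
import math

COUNT = 100  # batch-size hint, as in the snippet above

for n in (10, 100, 1000, 10_000):
    calls = math.ceil(n / COUNT)  # ceiling division: partial batch still costs a call
    print(f"{n:>6} keys -> ~{calls} SCAN call(s)")
```

Since COUNT is constant, the call count (and total work) scales linearly with n.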
Time Complexity: O(n)
This means the time to scan all keys grows roughly in direct proportion to how many keys there are.
[X] Wrong: "SCAN returns all keys in one call quickly."
[OK] Correct: SCAN returns keys in small batches to avoid blocking, so multiple calls are needed to get all keys.
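To make the misconception concrete: a single call hands back only one batch and a cursor, never the whole keyspace. The toy model below (slicing a key list; real SCAN batches are approximate) shows the first call covering only a fraction of the keys, with a non-"0" cursor signalling that more calls are required.

```python
# One "call" returns a batch and a cursor, not the whole keyspace.
keys = [f"key:{i}" for i in range(1000)]
count = 100

first_batch = keys[:count]
next_cursor = str(count)  # non-"0": iteration is NOT finished

print(len(first_batch))  # 100 of 1000 keys
print(next_cursor)       # "100" -> more calls are required
```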
Understanding how SCAN works and its time behavior shows you can handle large data safely without slowing down the system.
What if we changed the COUNT value to a much larger number? How would the time complexity change?
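One way to reason about it: raising COUNT shrinks the number of round trips, but the total number of keys examined is still n, so the overall complexity stays O(n). The arithmetic sketch below uses a hypothetical keyspace size and the idealized assumption that each call returns exactly COUNT keys:

```python
import math

n = 100_000  # hypothetical keyspace size

for count in (100, 1_000, 10_000):
    calls = math.ceil(n / count)
    # Fewer, heavier calls -- but total keys touched is n either way.
    print(f"COUNT={count:>6}: ~{calls:>5} calls, {calls * count} keys touched")
```

The trade-off is per-call latency: each call with a large COUNT does more work before returning, so pushing COUNT very high starts to block the server per call, which defeats the purpose of using SCAN instead of KEYS.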