Storage tier optimization in Azure - Time & Space Complexity
When we optimize storage tiers, we want to know how the time to move or access data changes as the amount of data grows.
We ask: how does the work increase as we handle more files or a larger total data size?
Analyze the time complexity of the following operation sequence.
```csharp
// Move blobs from hot to cool tier based on last access date
var cutoff = DateTime.UtcNow.AddDays(-30);  // compute once, not per iteration
var blobs = container.ListBlobs();
foreach (var blob in blobs) {
    if (blob.Properties.LastAccessed < cutoff) {
        blob.SetTier(AccessTier.Cool);
    }
}
```
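The same logic can be sketched in Python with in-memory stand-ins for the SDK objects (the `Blob` class and container list here are hypothetical illustrations, not Azure SDK types):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Blob:
    """Hypothetical stand-in for an Azure blob with a last-access timestamp."""
    name: str
    last_accessed: datetime
    tier: str = "Hot"

def move_stale_blobs_to_cool(blobs, days=30):
    """Check every blob once; demote those untouched for `days` days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    for blob in blobs:              # one pass over all n blobs -> O(n)
        if blob.last_accessed < cutoff:
            blob.tier = "Cool"      # one tier change per stale blob
    return blobs

now = datetime.now(timezone.utc)
container = [
    Blob("fresh.log", now - timedelta(days=1)),
    Blob("stale.log", now - timedelta(days=90)),
]
move_stale_blobs_to_cool(container)
```

Only `stale.log` crosses the 30-day cutoff, so only it is demoted; the loop still visits every blob exactly once.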
This code checks each blob in a storage container and moves older blobs to a cooler, cheaper storage tier.
Identify the repeated work: the API calls, resource provisioning, and data transfers that occur on every iteration.
- Primary operation: Checking each blob's last access date and setting its storage tier.
- How many times: Once for every blob in the container.
As the number of blobs grows, the number of checks and possible tier changes grows too.
| Input Size (n) | Approx. API Calls / Operations |
|---|---|
| 10 | About 10 checks and possible tier changes |
| 100 | About 100 checks and possible tier changes |
| 1000 | About 1000 checks and possible tier changes |
Pattern observation: The work grows directly with the number of blobs.
Time Complexity: O(n)
This means the time to optimize storage tiers grows linearly with the number of blobs: doubling the blobs roughly doubles the work.
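A quick way to confirm the pattern from the table is to count the checks directly (a toy simulation, not Azure code):

```python
def operations_for(n):
    """Count checks performed when scanning n blobs one at a time."""
    checks = 0
    for _ in range(n):   # mirrors the foreach loop: one check per blob
        checks += 1
    return checks

for n in (10, 100, 1000):
    print(n, operations_for(n))  # operation count grows in lockstep with n
```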
[X] Wrong: "Changing storage tiers happens instantly for all blobs at once."
[OK] Correct: Each blob must be checked and updated individually, so time grows with the number of blobs.
Understanding how operations scale with data size helps you design efficient cloud solutions and demonstrates that you think about real-world costs and delays.
"What if we batch update blobs instead of updating one by one? How would the time complexity change?"