Write sharding in DynamoDB - Time & Space Complexity
When we write data to a DynamoDB table using sharding, we split the data across multiple partitions.
We want to understand how the time to write grows as we add more data and shards.
Analyze the time complexity of this write sharding example.
```python
# Write sharding: hash each item's ID to pick a shard, then write the item
import boto3

table = boto3.resource("dynamodb").Table("MyTable")
for item in data:
    item["shard_key"] = hash(item["id"]) % number_of_shards  # pick a shard
    table.put_item(Item=item)  # one write per item
```
This code writes each item to a shard determined by hashing its ID.
To analyze the complexity, look at what repeats as the data grows.
- Primary operation: Writing each item to DynamoDB with a shard key.
- How many times: Once per item in the data set.
As the number of items grows, the number of writes grows the same way.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 writes to shards |
| 100 | 100 writes to shards |
| 1000 | 1000 writes to shards |
Pattern observation: The number of write operations grows directly with the number of items.
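The counts in the table can be verified with a quick simulation. This is a minimal sketch that counts puts without calling AWS; `count_writes` is a hypothetical helper, not part of any DynamoDB API, and a real loop would call `put_item` where the counter is incremented.

```python
from collections import Counter

def count_writes(num_items, number_of_shards=4):
    # Simulate the sharded write loop without touching AWS:
    # every item triggers exactly one put, whichever shard it lands on.
    per_shard = Counter()
    for item_id in range(num_items):
        per_shard[item_id % number_of_shards] += 1  # stand-in for hash(id) % shards
    return sum(per_shard.values())

for n in (10, 100, 1000):
    print(n, count_writes(n))  # total writes equal the item count
```

However the items distribute across shards, the totals sum back to n, which is the linear pattern in the table.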
Time Complexity: O(n)
This means the time to write grows linearly as you add more items, even when using shards.
[X] Wrong: "Using more shards makes writing faster and reduces time complexity to constant."
[OK] Correct: Sharding distributes load across partitions, but every item is still written exactly once, so total write time grows with data size.
Understanding how sharding affects write time helps you explain scaling strategies clearly and confidently.
What if we batch multiple writes together per shard? How would that change the time complexity?
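One way to reason about it: DynamoDB's `BatchWriteItem` accepts up to 25 items per request, so batching divides the number of network round trips by a constant factor, but each item is still written once, and the request count still grows linearly. A sketch of the arithmetic (`request_count` is a hypothetical helper for illustration):

```python
import math

def request_count(num_items, batch_size=25):
    # BatchWriteItem takes up to 25 items per request, so batching
    # shrinks the request count 25x, a constant factor, not a new
    # complexity class: ceil(n / 25) is still O(n).
    return math.ceil(num_items / batch_size)

for n in (10, 100, 1000):
    print(n, request_count(n))
```

So batching improves throughput and cost per request, but the time complexity remains O(n).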