On-demand backups in DynamoDB - Time & Space Complexity
When we create an on-demand backup in DynamoDB, we want to know how the time it takes changes as the size of the data grows.
The question: how does backup time grow when the table holds more items?
Analyze the time complexity of the following DynamoDB on-demand backup command.
```shell
aws dynamodb create-backup \
    --table-name MusicCollection \
    --backup-name MusicBackup2024
```
This command creates a full backup of the entire MusicCollection table at the moment it runs.
Look for repeated work inside the backup process.
- Primary operation: Reading all items from the table to copy them.
- How many times: Once for each item in the table.
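The repeated work above can be sketched in Python. This is a conceptual model of the backup, not DynamoDB's real internals: we treat the table as a list of items and count one read-and-copy operation per item.

```python
# Conceptual model only: a "backup" that reads and copies each item once,
# counting the operations performed along the way.

def backup_table(items):
    """Copy every item into a snapshot, counting one operation per item."""
    snapshot = []
    operations = 0
    for item in items:
        snapshot.append(dict(item))  # one read-and-copy per item
        operations += 1
    return snapshot, operations

# Usage: a table with 100 items costs exactly 100 operations.
table = [{"Artist": f"artist-{i}", "SongTitle": f"song-{i}"} for i in range(100)]
snapshot, ops = backup_table(table)
print(ops)  # 100
```

The operation count equals the item count, which is exactly the "once for each item" pattern described above.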
As the number of items in the table grows, the backup time grows too.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 item reads and copies |
| 100 | About 100 item reads and copies |
| 1000 | About 1000 item reads and copies |
Pattern observation: The time grows roughly in direct proportion to the number of items.
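A quick way to check this proportionality is to count operations for the table sizes above and confirm that doubling the data doubles the work. This is an illustrative count under the same per-item model, not a measurement of DynamoDB itself.

```python
# Proportionality check (illustrative): if backup work is linear,
# doubling the item count should exactly double the operation count.

def backup_ops(n):
    """Count one read-and-copy operation per item in a table of n items."""
    ops = 0
    for _ in range(n):
        ops += 1  # one read-and-copy per item
    return ops

for n in (10, 100, 1000):
    print(n, backup_ops(n))                         # ops == n at every size
    assert backup_ops(2 * n) == 2 * backup_ops(n)   # doubling data doubles work
```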
Time Complexity: O(n)
This means the backup time increases linearly as the table size grows.
[X] Wrong: "Creating an on-demand backup takes the same time no matter how big the table is."
[OK] Correct: The backup copies every item, so more items mean more work and more time.
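The contrast behind the wrong claim can be made concrete: a constant-time operation (like a single-key lookup) does the same work at any table size, while a full copy does not. The dictionary "table" here is a hypothetical stand-in, not DynamoDB's actual storage layout.

```python
# Illustration: O(1) work stays flat as the table grows; O(n) work does not.

def lookup_ops(table, key):
    """Single-key hash lookup: one operation regardless of table size."""
    _ = table.get(key)
    return 1

def full_copy_ops(table):
    """Full backup: one copy operation per item in the table."""
    return sum(1 for _ in table.items())

small = {i: f"song-{i}" for i in range(10)}
large = {i: f"song-{i}" for i in range(1000)}

print(lookup_ops(small, 0), lookup_ops(large, 0))  # 1 1      -> constant
print(full_copy_ops(small), full_copy_ops(large))  # 10 1000  -> linear
```

If the backup really were size-independent, it would behave like `lookup_ops`; because it copies every item, it behaves like `full_copy_ops`.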
Understanding how backup time grows helps you explain system behavior clearly and reason about real-world data sizes.
"What if the table had indexes? How would backing up those indexes affect the time complexity?"