Time to Live (TTL) for Expiration in AWS - Time & Space Complexity
We want to understand how the time it takes to expire items using TTL changes as we add more items.
How does the system handle expiration when many items have TTL set?
Analyze the time complexity of setting TTL on multiple items in a DynamoDB table.
```
# Example: enable TTL on the table, then stamp each item with an expiration time
aws dynamodb update-time-to-live \
    --table-name MyTable \
    --time-to-live-specification "Enabled=true, AttributeName=ttl"

# Pseudocode loop: one UpdateItem call per item.
# Note: TTL is a DynamoDB reserved word, so the attribute name is
# passed via an expression attribute name placeholder (#ttl).
for each item in items:
    aws dynamodb update-item \
        --table-name MyTable \
        --key '{"id": {"S": "<item.id>"}}' \
        --update-expression "SET #ttl = :expiry" \
        --expression-attribute-names '{"#ttl": "ttl"}' \
        --expression-attribute-values '{":expiry": {"N": "<item.expiry>"}}'
```
This sequence enables TTL on a table and updates each item with its expiration time.
We look at what repeats as we add more items.
- Primary operation: Updating each item with a TTL attribute.
- How many times: Once per item in the dataset.
Each item requires one update call to set its TTL. More items mean more update calls.
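To make the one-call-per-item pattern concrete, here is a small Python sketch. It is illustrative only and makes no AWS calls: `update_item` is a hypothetical stand-in for the real API, injected as a function so we can count invocations.

```python
def set_ttl_on_items(items, update_item):
    """Apply a TTL value to every item; returns the number of update calls made."""
    calls = 0
    for item in items:
        update_item(item["id"], item["expiry"])  # one simulated API call per item
        calls += 1
    return calls

# Simulate: record calls in a list instead of hitting DynamoDB
made = []
items = [{"id": str(i), "expiry": 1_700_000_000 + i} for i in range(100)]
n_calls = set_ttl_on_items(items, lambda key, expiry: made.append((key, expiry)))
print(n_calls)  # 100 items -> 100 calls
```

Doubling the length of `items` doubles `n_calls`: that one-to-one relationship is exactly the linear growth the table below tabulates.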
| Input Size (n) | Approx. API Calls |
|---|---|
| 10 | 10 update calls |
| 100 | 100 update calls |
| 1000 | 1000 update calls |
Pattern observation: The number of update calls grows directly with the number of items.
Time Complexity: O(n)
This means the time to set TTL grows linearly as you add more items.
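As a side note on what each update actually writes: DynamoDB TTL expects the attribute to be a Number holding a Unix epoch timestamp in seconds. A minimal helper for computing that value (the function name and the one-week default are illustrative):

```python
import time

def ttl_timestamp(days_from_now: int) -> int:
    """Return a Unix epoch time (seconds) `days_from_now` days in the future,
    suitable as a DynamoDB TTL attribute value."""
    return int(time.time()) + days_from_now * 86_400  # 86,400 seconds per day

expiry = ttl_timestamp(7)  # expire roughly one week from now
```

Timestamps in milliseconds (or ISO-8601 strings) will not trigger expiration, so normalizing to epoch seconds before the update loop matters as much as the loop itself.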
[X] Wrong: "Setting TTL on one item automatically expires all items."
[OK] Correct: Each item must have its own TTL set; expiration is per item, not global. DynamoDB then deletes expired items via a background process, typically within a few days of expiration, without consuming write capacity.
Understanding how TTL operations scale helps you design systems that handle data expiration efficiently as data grows.
"What if we batch update items with TTL instead of one by one? How would the time complexity change?"
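One hedged sketch of an answer: DynamoDB's `BatchWriteItem` accepts up to 25 put/delete requests per call, so batching divides the call count by up to 25. Note the caveat that `BatchWriteItem` only supports full puts, not partial updates, so setting TTL this way means rewriting each whole item. The complexity stays O(n); only the constant factor shrinks:

```python
import math

BATCH_LIMIT = 25  # BatchWriteItem accepts at most 25 put/delete requests per call

def batch_call_count(n_items: int) -> int:
    """Number of BatchWriteItem calls needed to cover n_items items."""
    return math.ceil(n_items / BATCH_LIMIT)

for n in (10, 100, 1000):
    print(n, batch_call_count(n))  # 10 -> 1, 100 -> 4, 1000 -> 40
```

Compared with the earlier table, 1000 items now take 40 calls instead of 1000, but the count still grows linearly with n, so the time complexity remains O(n).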