TTL with Streams for archival in DynamoDB - Time & Space Complexity
When using TTL with Streams for archival in DynamoDB, it's important to understand how the processing time changes as data grows.
We want to know how the time to handle expired items and archive them scales with the number of expired records.
Analyze the time complexity of the following DynamoDB TTL and Stream processing snippet.
```javascript
// DynamoDB table with TTL enabled.
// The table's stream triggers this Lambda when items expire.
exports.handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName === 'REMOVE') {
      await archiveItem(record.dynamodb.OldImage);
    }
  }
};

async function archiveItem(item) {
  // Save the expired item to archival storage
}
```
This code listens for items removed by TTL expiration and archives each one individually.
Identify the loops, recursion, or array traversals that repeat work:
- Primary operation: Looping over each expired record in the event stream.
- How many times: Once per expired item in the batch received from the stream.
As the number of expired items in the stream batch grows, the processing time grows proportionally.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 archive calls |
| 100 | 100 archive calls |
| 1000 | 1000 archive calls |
Pattern observation: The time grows linearly with the number of expired items processed.
Time Complexity: O(n)
This means the time to archive expired items grows directly in proportion to how many items expire at once.
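You can make the linear pattern concrete with a counting sketch: replace the archival call with a counter and drive the same per-record loop as the handler. The call count equals the number of expired records in the batch.

```javascript
// Stand-in for the handler loop: count how many archival calls
// a batch of stream records would trigger.
function countArchiveCalls(records) {
  let archiveCalls = 0;
  for (const record of records) {
    if (record.eventName === 'REMOVE') {
      archiveCalls += 1; // stands in for one archiveItem(...) call
    }
  }
  return archiveCalls;
}

for (const n of [10, 100, 1000]) {
  const records = Array.from({ length: n }, () => ({ eventName: 'REMOVE' }));
  console.log(n, countArchiveCalls(records)); // one call per expired item
}
```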
[X] Wrong: "Processing expired items is constant time regardless of how many expire."
[OK] Correct: Each expired item triggers a separate archival operation, so more expired items mean more work.
Understanding how batch sizes affect processing time helps you design scalable data pipelines and handle real-world data flows confidently.
"What if we batch multiple expired items into a single archival call? How would the time complexity change?"