Stream processing patterns in DynamoDB - Time & Space Complexity
When working with DynamoDB Streams, it is important to understand how processing time grows as more records flow through the stream. Analyze the time complexity of the following stream-processing code.
```javascript
// Pseudocode for processing DynamoDB stream records
function processStreamRecords(records) {
  for (const record of records) {
    if (record.eventName === 'INSERT') {
      // Process the newly written item
      handleInsert(record.dynamodb.NewImage);
    } else if (record.eventName === 'MODIFY') {
      // Process the updated item
      handleModify(record.dynamodb.NewImage);
    }
  }
}
```
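To see the loop in action, here is a self-contained sketch. The `handleInsert`/`handleModify` stubs and the sample records are hypothetical stand-ins for real business logic and real DynamoDB stream events; they exist only so the loop can run end to end.

```javascript
// Hypothetical handler stubs: they just record what they were called
// with so we can inspect the result after processing.
const processed = [];
function handleInsert(newImage) { processed.push({ type: 'insert', newImage }); }
function handleModify(newImage) { processed.push({ type: 'modify', newImage }); }

function processStreamRecords(records) {
  for (const record of records) {
    if (record.eventName === 'INSERT') {
      handleInsert(record.dynamodb.NewImage);
    } else if (record.eventName === 'MODIFY') {
      handleModify(record.dynamodb.NewImage);
    }
  }
}

// Two sample records shaped like DynamoDB stream events.
processStreamRecords([
  { eventName: 'INSERT', dynamodb: { NewImage: { id: { S: '1' } } } },
  { eventName: 'MODIFY', dynamodb: { NewImage: { id: { S: '1' }, name: { S: 'updated' } } } },
]);

console.log(processed.length); // 2
```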
This code loops through each stream record and processes it based on the event type.
Look at what repeats as the input grows.
- Primary operation: Looping through each stream record once.
- How many times: Once per record in the stream batch.
As the number of records increases, the processing time grows proportionally.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 processing steps |
| 100 | 100 processing steps |
| 1000 | 1000 processing steps |
Pattern observation: Doubling the number of records roughly doubles the work done.
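The table above can be reproduced with a small counting sketch. Instead of doing real work, each record increments a counter, which makes the linear relationship between input size and operations directly observable (`makeRecords` is a hypothetical helper that fabricates stream-shaped records):

```javascript
// Hypothetical helper: fabricate n records shaped like stream events.
function makeRecords(n) {
  return Array.from({ length: n }, (_, i) => ({
    eventName: i % 2 === 0 ? 'INSERT' : 'MODIFY',
    dynamodb: { NewImage: { id: { S: String(i) } } },
  }));
}

// Count one unit of work per record instead of calling real handlers.
function countProcessingSteps(records) {
  let steps = 0;
  for (const record of records) {
    if (record.eventName === 'INSERT' || record.eventName === 'MODIFY') {
      steps += 1;
    }
  }
  return steps;
}

for (const n of [10, 100, 1000]) {
  console.log(`${n} records -> ${countProcessingSteps(makeRecords(n))} steps`);
}
```

Doubling `n` doubles the step count, which is exactly the O(n) pattern in the table.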
Time Complexity: O(n)
This means the time to process stream records grows directly with the number of records.
[X] Wrong: "Processing one record takes constant time, so processing many records is still constant time overall."
[OK] Correct: Each record requires its own processing step, so more records mean more total work.
Understanding how stream processing scales helps you design systems that handle growing data smoothly and reliably.
"What if we batch process records in groups instead of one by one? How would the time complexity change?"
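One way to reason about that question: grouping records into batches changes how the work is organized, not how much work there is. With a batch size of b, you run n/b batches of b records each, and every record is still touched exactly once, so the total stays O(n). A minimal sketch (the function name and batching scheme are illustrative, not a specific AWS API):

```javascript
// Sketch: process records in groups of `batchSize`. Every record is
// still handled exactly once, so total work remains O(n).
function processInBatches(records, batchSize) {
  let steps = 0;
  for (let i = 0; i < records.length; i += batchSize) {
    const batch = records.slice(i, i + batchSize); // one batch of up to batchSize records
    for (const record of batch) {
      steps += 1; // one unit of work per record
    }
  }
  return steps; // (n / batchSize) batches x batchSize records = n steps
}

console.log(processInBatches(new Array(100).fill({ eventName: 'INSERT' }), 25)); // 100
```

Batching can still help in practice (fewer invocations, amortized per-batch overhead), but it lowers the constant factor, not the asymptotic complexity.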