DynamoDB Streams overview in AWS - Time & Space Complexity
We want to understand how the time to process DynamoDB Stream records changes as more table updates occur.
Specifically, how does the number of stream records in a batch affect processing time?
Analyze the time complexity of reading and processing DynamoDB Stream records.
```javascript
// Assume a Lambda function triggered by DynamoDB Streams
exports.handler = async (event) => {
  for (const record of event.Records) {
    // Process each record
    console.log('Processing record:', record.eventID);
  }
};
```
This code processes each record from the DynamoDB Stream triggered by table changes.
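To see the per-record loop in action, here is a minimal sketch that invokes a simplified local copy of the handler with a mock event. The event shape (a `Records` array whose entries carry an `eventID`) mirrors what DynamoDB Streams delivers; the IDs and event names below are made-up values, and the handler is modified to return a count so the work is observable.

```javascript
// Simplified local version of the handler: counts records
// instead of only logging them, so we can inspect the result.
const handler = async (event) => {
  let processed = 0;
  for (const record of event.Records) {
    // One unit of work per stream record
    processed++;
  }
  return processed;
};

// Mock DynamoDB Streams event with two records (fabricated values)
const mockEvent = {
  Records: [
    { eventID: '1', eventName: 'INSERT' },
    { eventID: '2', eventName: 'MODIFY' },
  ],
};

handler(mockEvent).then((n) => console.log(`Processed ${n} records`));
// logs "Processed 2 records"
```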
Look at what repeats when processing stream records.
- Primary operation: Processing each stream record one by one.
- How many times: Once per record in the stream batch.
As the number of stream records grows, the processing time grows too.
| Input Size (n) | Approx. Processing Steps |
|---|---|
| 10 | 10 processing steps |
| 100 | 100 processing steps |
| 1000 | 1000 processing steps |
Pattern observation: The work grows directly with the number of records.
Time Complexity: O(n)
This means processing time increases linearly as the number of stream records increases.
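The linear pattern from the table can be sketched directly: the helper below (a hypothetical `processingSteps` function, not part of any AWS API) builds a batch of n fake records and counts one operation per record.

```javascript
// Sketch: count the per-record operations for growing batch sizes
// to illustrate the O(n) pattern from the table above.
function processingSteps(n) {
  // Fabricated records standing in for a DynamoDB Streams batch
  const records = Array.from({ length: n }, (_, i) => ({ eventID: String(i) }));
  let steps = 0;
  for (const record of records) {
    steps++; // one unit of work per record
  }
  return steps;
}

for (const n of [10, 100, 1000]) {
  console.log(`${n} records -> ${processingSteps(n)} processing steps`);
}
```

Doubling the number of records doubles the count, which is exactly what linear growth means.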
[X] Wrong: "Total processing time stays the same no matter how many records arrive in the stream."
[OK] Correct: Each record adds more work, so total time grows with the number of records.
Understanding how processing scales with input size helps you design efficient event-driven systems and shows you can reason about real cloud workloads.
"What if the Lambda function batches multiple stream records together before processing? How would the time complexity change?"