Lambda Triggered by DynamoDB Stream Events - Time & Space Complexity
When a DynamoDB stream triggers a Lambda function, the function receives a batch of records, and its execution time depends on how many records it must process per invocation.
We want to understand how execution time grows as the number of stream records increases.
Analyze the time complexity of the following Lambda function triggered by DynamoDB stream events.
```javascript
exports.handler = async (event) => {
  for (const record of event.Records) {
    // Process each record
    console.log('Processing record:', record.eventID);
    // Imagine some processing here
  }
};
```
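To see the handler in action locally, here is a minimal sketch (not the AWS runtime) that invokes it with a simulated DynamoDB Streams batch; the mock event shape and its `eventID` values are made up for illustration:

```javascript
// Count one unit of "work" per record to make the loop's cost visible.
let processedSteps = 0;

const handler = async (event) => {
  for (const record of event.Records) {
    processedSteps += 1; // one step per record: O(n) over the batch
    console.log('Processing record:', record.eventID);
  }
};

// Mock event shaped like a DynamoDB Streams payload (values are hypothetical).
const mockEvent = {
  Records: [
    { eventID: '1', eventName: 'INSERT' },
    { eventID: '2', eventName: 'MODIFY' },
    { eventID: '3', eventName: 'REMOVE' },
  ],
};

handler(mockEvent);
console.log('Steps:', processedSteps); // one step per record in the batch
```

Because the loop contains no `await`, all three records are processed synchronously before the call returns, so `processedSteps` equals the batch size immediately afterwards.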
This code loops through each record in the stream event and processes it one by one.
Look at what repeats as input grows.
- Primary operation: Looping through each record in the event.Records array.
- How many times: Once for each record received in the stream batch.
As the number of records increases, the function processes more items one after another.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 processing steps |
| 100 | 100 processing steps |
| 1000 | 1000 processing steps |
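The table above can be reproduced with a small counting sketch; the `countSteps` helper is hypothetical, built only to count loop iterations for a batch of size n:

```javascript
// Count how many loop iterations the handler's for-loop performs
// for a simulated batch of n stream records.
function countSteps(n) {
  const event = {
    Records: Array.from({ length: n }, (_, i) => ({ eventID: String(i) })),
  };
  let steps = 0;
  for (const record of event.Records) {
    steps += 1; // same single pass as the handler
  }
  return steps;
}

for (const n of [10, 100, 1000]) {
  console.log(`n = ${n}: ${countSteps(n)} processing steps`);
}
```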
Pattern observation: The work grows directly with the number of records; doubling records doubles the work.
Time Complexity: O(n)
This means the time to process grows linearly with the number of stream records. Space is dominated by the event payload already held in memory (O(n) records); the loop itself adds only O(1) auxiliary space.
[X] Wrong: "The Lambda runs in constant time no matter how many records it gets."
[OK] Correct: Each record needs processing, so more records mean more work and longer execution.
Understanding how Lambda functions scale with input size helps you design efficient event-driven systems and answer questions about performance in real projects.
"What if the Lambda function processes records in parallel instead of one by one? How would the time complexity change?"