Lambda with DynamoDB Streams in AWS - Time & Space Complexity
When using Lambda with DynamoDB Streams, it's important to understand how the number of events affects processing time.
We want to know how the work grows as more data changes in the database.
Analyze the time complexity of the following operation sequence.
```javascript
// Lambda function triggered by DynamoDB Stream events
exports.handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName === 'INSERT') {
      // Process each newly inserted item
      await processNewItem(record.dynamodb.NewImage);
    }
  }
};

async function processNewItem(item) {
  // Example processing logic
  console.log('Processing item:', item);
}
```
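To see the per-record loop in action locally, here is a minimal self-contained sketch that re-declares the same handler shape and invokes it with a hand-built mock event (the record shape follows the DynamoDB Streams format; the item values are placeholders):

```javascript
// Handler identical in shape to the one above, declared locally so the
// sketch runs on its own.
const handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName === 'INSERT') {
      await processNewItem(record.dynamodb.NewImage);
    }
  }
};

async function processNewItem(item) {
  console.log('Processing item:', JSON.stringify(item));
}

// Mock event: two INSERTs and one MODIFY (the MODIFY is skipped).
const mockEvent = {
  Records: [
    { eventName: 'INSERT', dynamodb: { NewImage: { id: { S: '1' } } } },
    { eventName: 'MODIFY', dynamodb: { NewImage: { id: { S: '2' } } } },
    { eventName: 'INSERT', dynamodb: { NewImage: { id: { S: '3' } } } },
  ],
};

handler(mockEvent); // prints two "Processing item:" lines
```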
This Lambda function processes each new item inserted into the DynamoDB table by reading stream records one by one.
Identify the API calls, resource provisioning, and data transfers that repeat.
- Primary operation: Processing each record from the DynamoDB Stream inside the Lambda function.
- How many times: Once for every record in the event batch, which depends on the number of changes in DynamoDB.
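The batch size itself is configured on the Lambda event source mapping, not inside the function code. As a hedged illustration (function name, region, account ID, and stream ARN are all placeholders), the AWS CLI call looks roughly like:

```shell
# Create an event source mapping that delivers up to 100 stream records
# per invocation. Names and ARN below are placeholders.
aws lambda create-event-source-mapping \
  --function-name my-stream-processor \
  --event-source-arn arn:aws:dynamodb:us-east-1:123456789012:table/MyTable/stream/2024-01-01T00:00:00.000 \
  --batch-size 100 \
  --starting-position LATEST
```

With a batch size of 100, an event carries at most 100 records, so one invocation does at most 100 units of per-record work.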
As the number of new items inserted grows, the Lambda function processes more records one by one.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 processing calls |
| 100 | 100 processing calls |
| 1000 | 1000 processing calls |
Pattern observation: The number of processing steps grows directly with the number of stream records.
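The linear pattern in the table can be checked with a quick local simulation that counts processing calls for batches of different sizes (purely illustrative; the batch sizes mirror the table above):

```javascript
// Count how many processing calls a batch of n INSERT records produces.
function countProcessingCalls(n) {
  const records = Array.from({ length: n }, (_, i) => ({
    eventName: 'INSERT',
    dynamodb: { NewImage: { id: { S: String(i) } } },
  }));
  let calls = 0;
  for (const record of records) {
    if (record.eventName === 'INSERT') calls += 1; // stand-in for processNewItem
  }
  return calls;
}

for (const n of [10, 100, 1000]) {
  console.log(`n=${n} -> ${countProcessingCalls(n)} processing calls`);
}
// n=10 -> 10 calls, n=100 -> 100, n=1000 -> 1000: one call per record, O(n)
```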
Time Complexity: O(n)
This means the processing time grows linearly with the number of new records in the stream.
[X] Wrong: "The Lambda function processes all records instantly regardless of how many there are."
[OK] Correct: Each record requires individual processing, so more records mean more work and longer total processing time.
Understanding how event-driven functions scale with input size helps you design efficient cloud applications and answer questions about system behavior under load.
"What if the Lambda function processed records in parallel instead of sequentially? How would the time complexity change?"