Lambda with S3 event triggers in AWS - Time & Space Complexity
We want to understand how the time to process files grows when using Lambda triggered by S3 events.
Specifically, how does the number of files affect the number of Lambda executions?
Analyze the time complexity of the following operation sequence.
```javascript
// S3 bucket configured to trigger Lambda on object creation
// Lambda function processes each new object
exports.handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = record.s3.object.key;
    // Process the object identified by bucket and key
  }
};
```
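As a minimal sketch, the handler above can be exercised locally with a mock event. The event shape follows the standard S3 notification payload, but the bucket name and object key here are made up for illustration:

```javascript
// Hypothetical local harness; "my-bucket" and the key are illustrative values.
const processed = [];

// Same shape as the handler above, but it records which objects it touched.
const handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = record.s3.object.key;
    processed.push(`${bucket}/${key}`); // stand-in for real per-object work
  }
};

// Mock of the payload S3 delivers for one object-creation notification.
const mockEvent = {
  Records: [
    { s3: { bucket: { name: "my-bucket" }, object: { key: "uploads/a.csv" } } },
  ],
};

handler(mockEvent); // loop body runs synchronously; nothing inside is awaited
console.log(processed); // ["my-bucket/uploads/a.csv"]
```

Note that each notification typically carries one record, so one upload means one invocation of this handler.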
This handler is invoked by S3 when new files are added, and it processes each file individually.
Identify the API calls, resource provisioning, and data transfers that repeat.
- Primary operation: Lambda function invocation triggered by each S3 object creation event.
- How many times: Once per new object uploaded to the S3 bucket.
Each new file uploaded causes one Lambda invocation, so the total executions grow directly with the number of files.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 Lambda invocations |
| 100 | 100 Lambda invocations |
| 1000 | 1000 Lambda invocations |
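The table above can be reproduced with a small simulation. It is a sketch under the same assumption stated earlier: exactly one event per uploaded object, with no batching:

```javascript
// Count Lambda invocations for n uploads, assuming one event per object.
function invocationsFor(n) {
  let invocations = 0;
  for (let i = 0; i < n; i++) {
    invocations += 1; // each upload fires one S3 event -> one invocation
  }
  return invocations;
}

for (const n of [10, 100, 1000]) {
  console.log(`${n} files -> ${invocationsFor(n)} Lambda invocations`);
}
```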
Pattern observation: The number of Lambda calls grows linearly as more files are added.
Time Complexity: O(n)
This means the total Lambda executions increase directly in proportion to the number of files uploaded.
[X] Wrong: "One Lambda invocation can process all files at once, so time stays the same regardless of file count."
[OK] Correct: Each file upload triggers a separate event and Lambda invocation; they do not batch automatically.
Understanding how event-driven functions scale with input size helps you design systems that handle growth smoothly and predict costs.
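Because invocations grow as O(n), cost grows the same way. A rough cost model makes this concrete; the per-request and per-GB-second prices below are illustrative assumptions, not current AWS pricing:

```javascript
// Illustrative pricing assumptions -- check the AWS pricing page for real values.
const PRICE_PER_REQUEST = 0.20 / 1_000_000; // $ per invocation
const PRICE_PER_GB_SECOND = 0.0000166667;   // $ per GB-second of duration

// Estimated cost of processing n files, one Lambda invocation each.
function estimateCost(n, memoryGb, avgDurationSeconds) {
  const requestCost = n * PRICE_PER_REQUEST;
  const durationCost = n * memoryGb * avgDurationSeconds * PRICE_PER_GB_SECOND;
  return requestCost + durationCost; // both terms scale linearly in n
}

// Example: 1000 files, a 128 MB function, ~200 ms of work per file.
console.log(estimateCost(1000, 0.125, 0.2).toFixed(6));
```

Doubling the number of uploads doubles both terms, so the bill tracks the O(n) invocation count directly.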
"What if the Lambda function was triggered by a batch of S3 events instead of one per file? How would the time complexity change?"