Why serverless architecture matters in AWS - Performance Analysis
We want to see how the work done by serverless functions changes as more requests come in.
How does the number of function calls grow when the number of users or events increases?
Analyze the time complexity of the following operation sequence.
```javascript
// AWS Lambda function triggered by events
exports.handler = async (event) => {
  // Process each record in the event
  for (const record of event.Records) {
    await processRecord(record); // Calls another service or DB
  }
  return 'Done';
};

async function processRecord(record) {
  // Simulate processing
  return Promise.resolve();
}
```
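To see the linear relationship concretely, here is a small instrumented copy of the handler that counts how many times processRecord runs for a mock event batch. The counter, the local `handler` constant, and the mock event shape are illustration-only; they are not part of the Lambda API.

```javascript
// Instrumented copy of the handler above (illustration only):
// callCount tracks how many times processRecord runs.
let callCount = 0;

async function processRecord(record) {
  callCount += 1; // stand-in for a call to another service or DB
  return Promise.resolve();
}

const handler = async (event) => {
  for (const record of event.Records) {
    await processRecord(record); // one call per record => O(n) calls
  }
  return 'Done';
};

// Usage: invoke with a mock event shaped like a Lambda batch.
// For a batch of n records, callCount ends up equal to n.
// handler({ Records: [{ id: 1 }, { id: 2 }, { id: 3 }] });
```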
This code runs a serverless function that processes each incoming event record one by one.
Identify the operations that repeat: API calls, resource provisioning, and data transfers.
- Primary operation: Processing each event record inside the Lambda function.
- How many times: Once per event record received in the batch.
As the number of event records increases, the function processes more records one after another.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 processRecord calls |
| 100 | 100 processRecord calls |
| 1000 | 1000 processRecord calls |
Pattern observation: The number of operations grows directly with the number of event records.
Time Complexity: O(n)
This means the work grows linearly: twice as many event records means twice as much processing inside the function.
[X] Wrong: "The function runs once no matter how many events arrive."
[OK] Correct: Each event record triggers processing, so more records mean more work inside the function.
Understanding how serverless functions scale with input helps you design systems that handle growth smoothly.
"What if the function processed all records in parallel instead of one by one? How would the time complexity change?"