Serverless vs. Containers in AWS: A Performance Comparison
When choosing between serverless functions and containers on AWS, it's important to understand how the time to handle tasks grows as the workload increases. Specifically, we want to see how the number of operations or API calls changes as we add more tasks. Let's analyze the time complexity of deploying and running tasks with serverless functions (AWS Lambda) versus containers (Amazon ECS).
```
// Serverless example
for each event in events:
    invoke Lambda function(event)

// Container example
for each task in tasks:
    schedule task on ECS cluster
```
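The pseudocode above can be sketched in Python. The `invoke_lambda` and `run_ecs_task` functions here are illustrative stand-ins for real AWS SDK calls (e.g. boto3's `lambda.invoke` and `ecs.run_task`), stubbed out so the example runs without AWS credentials:

```python
# Stand-ins for real AWS SDK calls -- names and return shapes are
# illustrative, not actual boto3 responses.

def invoke_lambda(event):
    """Stub for one Lambda invocation (one API call per event)."""
    return {"StatusCode": 202, "event": event}

def run_ecs_task(task):
    """Stub for scheduling one ECS task (one API call per task)."""
    return {"taskArn": f"arn:aws:ecs:task/{task}"}

def dispatch_serverless(events):
    # One invocation per event: the loop body runs len(events) times.
    return [invoke_lambda(e) for e in events]

def dispatch_containers(tasks):
    # One schedule call per task: the loop body runs len(tasks) times.
    return [run_ecs_task(t) for t in tasks]

print(len(dispatch_serverless(range(10))))   # 10 API calls for 10 events
print(len(dispatch_containers(range(10))))   # 10 API calls for 10 tasks
```

Either way, the structure is the same: one API call per unit of work.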
This pseudocode shows invoking a serverless function for each event versus scheduling each task on a container cluster. To find the complexity, look at what repeats as the workload grows.
- Primary operation: Invoking a Lambda function or scheduling a container task.
- How many times: Once per event or task, so it grows with the number of events/tasks.
As the number of events or tasks increases, the number of function invocations or container schedules increases proportionally.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 Lambda invocations or 10 container task schedules |
| 100 | 100 Lambda invocations or 100 container task schedules |
| 1000 | 1000 Lambda invocations or 1000 container task schedules |
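The 1:1 scaling in the table can be checked with a short script. `operations_for` is a hypothetical helper that counts one call per unit of work, mirroring the loops above:

```python
def operations_for(n):
    """Count API calls needed for n events/tasks: one call each."""
    ops = 0
    for _ in range(n):
        ops += 1  # one invocation or one task schedule
    return ops

for n in (10, 100, 1000):
    print(n, operations_for(n))
# Each row matches the table: doubling n doubles the operation count,
# the hallmark of linear, O(n) growth.
```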
Pattern observation: The number of operations grows directly with the number of tasks or events.
Time Complexity: O(n)
This means the time or number of operations grows linearly as the number of tasks or events increases.
[X] Wrong: "Serverless automatically handles all tasks instantly, so time doesn't grow with more events."
[OK] Correct: Each event still triggers a separate function call, so total operations increase with more events.
Understanding how workload size affects operations helps you explain design choices clearly and shows you grasp cloud scaling basics.
What if we batch multiple events into a single serverless function call? How would the time complexity change?
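One way to explore that question: with a batch size of b, the number of function invocations drops to ceil(n / b). For a fixed b this is still O(n) asymptotically, since each event must still be processed, but the constant factor on API calls (and thus per-call overhead and cost) shrinks by a factor of b. A minimal sketch, with illustrative function names:

```python
import math

def invocations_without_batching(n_events):
    # One Lambda call per event.
    return n_events

def invocations_with_batching(n_events, batch_size):
    # Events are grouped, so the call count drops by a factor of
    # batch_size -- but each call now processes batch_size events,
    # so total per-event work remains O(n).
    return math.ceil(n_events / batch_size)

print(invocations_without_batching(1000))      # 1000 calls
print(invocations_with_batching(1000, 100))    # 10 calls
```

Batching trades fewer, larger invocations for higher per-call latency and payload size, which is why services like SQS-triggered Lambda expose batch size as a tuning knob.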