Fargate serverless containers in AWS - Time & Space Complexity
We want to understand how the time to launch tasks on Fargate changes as we increase the number of tasks: how does adding more containers affect the total time to start and run them? Below we analyze the time complexity of launching multiple Fargate tasks sequentially.
```python
import boto3

ecs = boto3.client('ecs')

# Launch n Fargate tasks sequentially: one run_task API call per task
for task_number in range(n):
    response = ecs.run_task(
        cluster='myCluster',
        launchType='FARGATE',
        taskDefinition='myTaskDef',
        networkConfiguration={
            'awsvpcConfiguration': {
                'subnets': ['subnet-12345'],
                'assignPublicIp': 'ENABLED'
            }
        }
    )
```
This code runs n Fargate tasks one after another, each starting a container in the cluster.
We look at what repeats as we increase n.
- Primary operation: the API call `ecs.run_task`, which starts one Fargate task.
- How many times: exactly n times, once per task.
Each task requires one API call to start. So if we double the tasks, we double the calls.
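One lever worth knowing: `run_task` also accepts a `count` parameter, which launches up to 10 identical tasks per call. Batching divides the number of calls by a constant factor, but the growth is still linear. A minimal model (no AWS call involved):

```python
import math

def api_calls_needed(n, batch_size=10):
    """Number of run_task calls to launch n tasks, batch_size tasks per call."""
    return math.ceil(n / batch_size)

# Batching divides the call count by a constant, but growth is still O(n)
print(api_calls_needed(25))    # 3 calls instead of 25
print(api_calls_needed(1000))  # 100 calls instead of 1000
```

Dividing by a constant batch size never changes the complexity class: ceil(n/10) still doubles when n doubles.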
| Input Size (n) | Approx. API Calls |
|---|---|
| 10 | 10 calls |
| 100 | 100 calls |
| 1000 | 1000 calls |
Pattern observation: The number of API calls grows directly with the number of tasks.
Time Complexity: O(n)
This means startup time grows linearly: doubling the number of tasks roughly doubles the total time to start them.
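We can verify the linear pattern without touching AWS by counting calls against a stubbed client (the stub below is illustrative; a real run would use the boto3 client):

```python
class StubEcs:
    """Stand-in for the ECS client that just counts run_task calls."""
    def __init__(self):
        self.calls = 0

    def run_task(self, **kwargs):
        self.calls += 1
        return {'tasks': [{'taskArn': f'arn:stub:{self.calls}'}]}

def launch_tasks(ecs, n):
    # Same sequential loop as above: one call per task
    for _ in range(n):
        ecs.run_task(cluster='myCluster', launchType='FARGATE',
                     taskDefinition='myTaskDef')

for n in (10, 100, 1000):
    ecs = StubEcs()
    launch_tasks(ecs, n)
    print(n, ecs.calls)  # call count equals n: O(n)
```

The printed call counts reproduce the table above exactly: 10, 100, and 1000 calls.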
[X] Wrong: "Starting multiple Fargate tasks happens all at once, so time stays the same no matter how many tasks."
[OK] Correct: Each task requires a separate API call and provisioning, so total time grows with the number of tasks.
Understanding how task count affects time helps you design scalable systems and explain your choices clearly.
What if we launched tasks in parallel instead of one after another? How would the time complexity change?
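One sketch of the parallel idea, using a thread pool with a stubbed launcher (`launch_one` and its latency are illustrative stand-ins for `ecs.run_task`): with c concurrent workers the n calls overlap, so wall-clock time drops toward O(n/c), while the total work stays O(n). In practice c cannot grow without bound because accounts are subject to API rate limits.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def launch_one(task_number):
    """Stand-in for ecs.run_task; simulates per-call latency."""
    time.sleep(0.01)
    return f'task-{task_number}'

def launch_parallel(n, workers=10):
    # Up to `workers` launches run concurrently; results keep input order
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(launch_one, range(n)))

results = launch_parallel(50)
print(len(results))  # all 50 tasks launched, in ~n/workers rounds of latency
```

With 10 workers, 50 launches take roughly 5 rounds of latency instead of 50, but issuing 10x more tasks still means 10x more total calls.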