Why Cloud over On-Premises on AWS: A Performance Analysis
We want to understand how the time required to manage computing resources changes when using cloud services versus an on-premises setup. Specifically, how does the effort grow as the number of servers or applications increases?
Below, we analyze the time complexity of provisioning servers on AWS compared with provisioning them on-premises.
```javascript
// AWS example: provisioning n servers by calling the EC2 API once per server
for (let i = 0; i < n; i++) {
  aws.ec2.runInstances({
    ImageId: 'ami-123456',
    InstanceType: 't3.micro',
    MinCount: 1,
    MaxCount: 1
  });
}
```
```javascript
// On-premises: manually setting up each physical server, one by one
for (let i = 0; i < n; i++) {
  setupPhysicalServer();
}
```
This sequence provisions n servers either by calling the AWS API or by setting up physical servers manually. To analyze complexity, identify the operations that repeat: API calls, resource provisioning, and data transfers.
- Primary operation: Provisioning one server instance (API call for cloud, manual setup for on-premises)
- How many times: Exactly n times, once per server
As the number of servers n increases, the total provisioning effort grows roughly in direct proportion.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 provisioning calls |
| 100 | 100 provisioning calls |
| 1000 | 1000 provisioning calls |
Pattern observation: Doubling the number of servers doubles the provisioning operations needed.
Time Complexity: O(n)
This means the time to provision servers grows linearly with the number of servers.
[X] Wrong: "Provisioning more servers in the cloud takes the same time as provisioning one server."
[OK] Correct: Each server requires its own setup call, so time grows with the number of servers, even if cloud automates some steps.
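Even though both approaches are O(n), the constant factor per server differs enormously. A rough arithmetic sketch, where the per-server times are purely hypothetical and chosen for illustration:

```javascript
// Linear growth means total time = n * (time per server).
function totalProvisioningTime(n, secondsPerServer) {
  return n * secondsPerServer;
}

// Hypothetical constants: ~30 s per automated API-driven setup
// vs. ~2 hours (7200 s) per manual rack-and-install.
console.log(totalProvisioningTime(100, 30));   // → 3000 (about 50 minutes)
console.log(totalProvisioningTime(100, 7200)); // → 720000 (about 200 hours)
```

Both lines grow linearly, but the slope is what makes automated cloud provisioning dramatically faster in practice.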
Understanding how provisioning time scales helps you explain the benefit of cloud automation: even though both approaches grow linearly, the cloud's much smaller cost per server makes it faster and easier than on-premises.
"What if we used server templates or auto-scaling groups in the cloud? How would the time complexity change?"
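One answer: EC2's RunInstances API already accepts a batch via its MinCount/MaxCount parameters, so n servers can be requested in a single API call. From the caller's perspective that reduces the work to O(1) API calls, even though AWS still provisions n instances behind the scenes. A sketch, where the mock client is an assumption standing in for the real AWS SDK:

```javascript
// Request n instances in ONE RunInstances call instead of n calls.
function provisionBatch(client, n) {
  return client.runInstances({
    ImageId: 'ami-123456',
    InstanceType: 't3.micro',
    MinCount: n,
    MaxCount: n
  });
}

// Mock client (assumption, not the real SDK): records how many API calls
// were made so we can observe the O(1) caller-side behavior.
function makeMockClient() {
  let apiCalls = 0;
  return {
    runInstances: (params) => { apiCalls++; return { requested: params.MaxCount }; },
    calls: () => apiCalls
  };
}

const client = makeMockClient();
provisionBatch(client, 1000);
console.log(client.calls()); // → 1 (one API call for 1000 servers)
```

Auto-scaling groups go a step further: you declare a desired capacity once, and the platform performs the per-server work for you, so the human effort stays roughly constant as n grows.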