Multi-tier architecture patterns in AWS - Time & Space Complexity
When building multi-tier architectures in the cloud, it's important to understand how the time to process requests grows as the number of users or the amount of data increases.
In complexity terms, we want to know how the system's total work changes as the number of requests or components grows.
Analyze the time complexity of handling requests through a typical 3-tier AWS architecture.
// Simplified AWS multi-tier setup
// 1. Client sends request to Application Load Balancer (ALB)
// 2. ALB forwards request to EC2 instances in Auto Scaling Group (App tier)
// 3. EC2 instances query RDS database (Data tier)
// 4. RDS returns data to EC2 instances
// 5. EC2 instances respond back through ALB to client
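The five-step flow above can be sketched as a small simulation. This is a minimal illustration, not real AWS SDK code; the function name and the operation labels are hypothetical stand-ins for the ALB, EC2, and RDS hops.

```python
# Hypothetical sketch of the request path above; the strings stand in
# for real ALB/EC2/RDS work and are not actual AWS API calls.

def handle_request(request_id: int) -> dict:
    """Process one request through all three tiers, recording each hop."""
    operations = [
        "ALB: route request",           # tier 1: load balancer
        "EC2: run application logic",   # tier 2: app server
        "RDS: execute query",           # tier 3: database
        "EC2: build response",
        "ALB: return response",
    ]
    return {"request": request_id, "ops": operations}

result = handle_request(1)
print(len(result["ops"]))  # 5 fixed steps, regardless of how many requests exist
```

The key observation: the number of steps per request is a constant, set by the architecture, not by the load.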
This sequence shows how a request flows through the tiers to get processed and responded to.
Consider what happens repeatedly as the number of requests increases.
- Primary operation: Each incoming request triggers API calls to the ALB, EC2 instances, and RDS database.
- How many times: Once per request, repeated for every user request.
As the number of requests (n) grows, each request goes through the same steps.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | About 10 ALB + 10 EC2 + 10 RDS calls |
| 100 | About 100 ALB + 100 EC2 + 100 RDS calls |
| 1000 | About 1000 ALB + 1000 EC2 + 1000 RDS calls |
Pattern observation: The total operations grow directly with the number of requests.
Time Complexity: O(n)
This means the work grows linearly with the number of requests; doubling requests roughly doubles the work.
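The pattern in the table can be checked with a quick count. Assuming (as above) that each request makes one call per tier, total operations are 3 × n:

```python
# Each request makes one call per tier (ALB, EC2, RDS),
# so total operations = 3 * n, which is O(n).

TIERS = ["ALB", "EC2", "RDS"]

def total_operations(n_requests: int) -> int:
    calls = 0
    for _ in range(n_requests):
        calls += len(TIERS)  # one call per tier, per request
    return calls

for n in (10, 100, 1000):
    print(n, total_operations(n))  # 30, 300, 3000 — grows in lockstep with n
```

Doubling n from 100 to 200 doubles the total from 300 to 600: the hallmark of linear growth.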
[X] Wrong: "Adding more tiers will multiply the time complexity exponentially."
[OK] Correct: Each tier adds fixed steps per request, so the total work grows linearly, not exponentially.
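The correction above can be made concrete: with k tiers, total work is roughly k × n. Doubling the number of tiers doubles the constant factor, but the growth in n stays linear:

```python
# Simplified model: each of k tiers adds one fixed step per request.

def ops(n_requests: int, n_tiers: int) -> int:
    return n_requests * n_tiers

print(ops(1000, 3))  # 3000: three tiers
print(ops(1000, 6))  # 6000: twice the constant, still O(n) in requests
```

Exponential growth would require each tier to multiply the work of the next, which a straight request pipeline does not do.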
Understanding how multi-tier systems scale with requests helps you design and explain cloud architectures clearly and confidently.
"What if we added caching between the app and data tiers? How would the time complexity change?"