Imagine a web service that handles user requests. Which of the following best describes throughput?
Think about how many requests the system can complete over time.
Throughput measures how many requests a system can process in a given time, usually per second.
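The definition above is just a rate, which can be sketched in a few lines; the request count and window are made-up numbers for illustration.

```python
# Throughput = completed requests / elapsed time.
# Hypothetical figures: a server completes 1_200 requests in a 60-second window.
requests_completed = 1_200
elapsed_seconds = 60

throughput_rps = requests_completed / elapsed_seconds
print(throughput_rps)  # 20.0 requests per second
```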
You have a distributed system with multiple servers across regions. Which architectural choice will most effectively reduce latency for users?
Latency depends on the time data takes to travel between user and server.
Deploying servers closer to users reduces the distance data travels, lowering latency.
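A quick physics-based sketch shows why distance matters: even a best-case signal in fiber travels at roughly 200,000 km/s (about two-thirds the speed of light), which puts a hard floor under round-trip time. The distances below are illustrative assumptions.

```python
# Rough lower bound on network latency from distance alone.
# Assumption: signals in fiber travel at ~200,000 km/s (~2/3 c).
FIBER_SPEED_KM_PER_S = 200_000

def min_rtt_ms(one_way_distance_km: float) -> float:
    """Best-case round-trip time in milliseconds, ignoring queuing and processing."""
    return 2 * one_way_distance_km / FIBER_SPEED_KM_PER_S * 1000

# Hypothetical distances: a cross-continent server vs. a nearby edge server.
print(min_rtt_ms(6000))  # 60.0 ms round trip
print(min_rtt_ms(100))   # 1.0 ms round trip
```

Moving the server from 6,000 km to 100 km away cuts the physical floor on latency from ~60 ms to ~1 ms, before any software optimization.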
Your system must maintain high availability even during server failures. Which design choice best supports this?
Think about how to keep the system running if one server stops working.
Redundancy and automatic failover ensure the system stays available despite failures.
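A minimal sketch of client-side failover across redundant servers; the server functions here are hypothetical stand-ins that simulate one healthy replica and one failed primary.

```python
# Minimal failover sketch: try each redundant server until one succeeds.
def call_with_failover(servers, request):
    """Return the first successful response; raise only if every server fails."""
    last_error = None
    for server in servers:
        try:
            return server(request)
        except ConnectionError as err:
            last_error = err  # this server is down; fail over to the next one
    raise last_error

# Simulated servers (assumptions): the primary is down, the replica answers.
def primary(req):
    raise ConnectionError("primary unreachable")

def replica(req):
    return f"handled {req}"

print(call_with_failover([primary, replica], "GET /health"))  # handled GET /health
```

Real systems typically automate this with health checks and a load balancer rather than client-side retries, but the principle is the same: a single failure must not take the service down.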
In a system design, increasing throughput sometimes increases latency. Which scenario best explains this tradeoff?
Think about how grouping requests affects speed and quantity.
Batching increases throughput by processing many requests at once but adds waiting time, increasing latency.
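The tradeoff can be made concrete with a toy cost model; the per-batch overhead and per-request cost below are assumed numbers, not measurements.

```python
# Toy model of the batching tradeoff.
# Assumptions: a fixed 10 ms overhead per batch, plus 1 ms per request.
PER_BATCH_OVERHEAD_MS = 10
PER_REQUEST_COST_MS = 1

def batch_stats(batch_size: int):
    batch_time_ms = PER_BATCH_OVERHEAD_MS + PER_REQUEST_COST_MS * batch_size
    throughput_rps = batch_size / (batch_time_ms / 1000)
    # Every request in the batch waits for the whole batch to finish.
    latency_ms = batch_time_ms
    return throughput_rps, latency_ms

for size in (1, 10, 100):
    tput, lat = batch_stats(size)
    print(f"batch={size:3d}  throughput={tput:7.1f} req/s  latency={lat:.0f} ms")
```

Under these assumptions, growing the batch from 1 to 100 raises throughput roughly tenfold while latency also grows tenfold: both effects come from amortizing the fixed overhead across more requests.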
A system has 3 independent servers, each with 99.9% uptime. What is the approximate overall system availability if it requires at least one server to be up?
Calculate the chance all servers fail simultaneously, then subtract from 100%.
Availability = 1 - (probability all servers fail). Each server's failure probability is 0.1% = 0.001, so the probability all three fail simultaneously is 0.001^3 = 10^-9. Availability ≈ 1 - 10^-9 = 99.9999999% ("nine nines"); closest option 99.9999% (A).
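The same calculation, written out in code (note the key assumption: server failures are independent):

```python
# Availability of N redundant servers, assuming independent failures.
per_server_uptime = 0.999   # each server is up 99.9% of the time
servers = 3

p_all_fail = (1 - per_server_uptime) ** servers   # 0.001 ** 3 ≈ 1e-9
availability = 1 - p_all_fail

print(f"{availability:.7%}")  # 99.9999999%
```

In practice failures are often correlated (shared power, shared software bugs, same region), so real availability is lower than this independence model predicts.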