
Throughput, latency, and availability in HLD - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · Intermediate
Understanding Throughput in a Web Service

Imagine a web service that handles user requests. Which of the following best describes throughput?

A. The number of requests the service can handle per second.
B. The time it takes for a single request to be processed.
C. The percentage of time the service is operational.
D. The amount of data transferred in a single request.
💡 Hint

Think about how many requests the system can complete over time.
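The distinction the hint points at can be made concrete with a small measurement sketch: throughput is completed work divided by elapsed time. The `handle_request` handler here is a hypothetical stand-in for real request processing.

```python
import time

def measure_throughput(handle_request, num_requests=1000):
    """Throughput = completed requests / elapsed wall-clock time."""
    start = time.perf_counter()
    for _ in range(num_requests):
        handle_request()  # per-request latency is a separate metric
    elapsed = time.perf_counter() - start
    return num_requests / elapsed  # requests per second

# Hypothetical no-op handler, just to exercise the function.
rps = measure_throughput(lambda: None)
```

Note that this says nothing about how long any single request took (latency) or how often the service is up (availability), which is exactly why options B and C are wrong.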

Architecture · Intermediate
Reducing Latency in a Distributed System

You have a distributed system with multiple servers across regions. Which architectural choice will most effectively reduce latency for users?

A. Use a single powerful server to handle all requests.
B. Deploy servers closer to users geographically.
C. Store all data in a central database far from users.
D. Increase the number of servers in a single data center.
💡 Hint

Latency depends on the time data takes to travel between user and server.
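The geographic effect in the hint can be estimated with back-of-the-envelope arithmetic, assuming signals travel through fiber at roughly two-thirds the speed of light (about 200,000 km/s); the distances below are illustrative, not from the problem.

```python
# Rough lower bound on round-trip latency from distance alone.
SPEED_IN_FIBER_KM_PER_MS = 200  # ~2/3 of light speed, in km per millisecond

def min_rtt_ms(distance_km):
    """Best-case round-trip time: out and back, ignoring all processing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

far = min_rtt_ms(10_000)   # cross-continent server: 100 ms in transit alone
near = min_rtt_ms(100)     # nearby edge server: 1 ms
```

No amount of server power (option A) or extra capacity in one far-away data center (option D) can beat this physical floor, which is why moving servers closer to users wins.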

Scaling · Advanced
Scaling for High Availability

Your system must maintain high availability even during server failures. Which design choice best supports this?

A. Schedule maintenance during peak hours to fix issues quickly.
B. Use a single server with a powerful CPU and memory.
C. Use multiple redundant servers with automatic failover.
D. Store backups only once per week to save storage space.
💡 Hint

Think about how to keep the system running if one server stops working.

Tradeoff · Advanced
Tradeoff Between Latency and Throughput

In a system design, increasing throughput sometimes increases latency. Which scenario best explains this tradeoff?

A. Adding more servers to handle requests in parallel.
B. Processing each request immediately, reducing latency but lowering throughput.
C. Using caching to serve requests faster without affecting throughput.
D. Batch processing many requests together to improve throughput but causing delay for each request.
💡 Hint

Think about how grouping requests affects speed and quantity.
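The batching tradeoff can be shown with a toy model, assuming a hypothetical fixed per-batch overhead (e.g. a disk flush or network round trip) plus a per-request processing cost; the numbers are illustrative only.

```python
PER_REQUEST_MS = 1.0   # work done for each request (hypothetical)
OVERHEAD_MS = 10.0     # fixed cost paid once per batch (hypothetical)

def stats(batch_size):
    """Return (throughput in requests/sec, latency in ms per request)."""
    batch_time_ms = OVERHEAD_MS + PER_REQUEST_MS * batch_size
    throughput = batch_size / (batch_time_ms / 1000)
    latency = batch_time_ms  # each request waits for the whole batch
    return throughput, latency

unbatched = stats(1)    # ~91 req/s, 11 ms per request
batched = stats(100)    # ~909 req/s, 110 ms per request
```

Amortizing the fixed overhead over 100 requests multiplies throughput roughly tenfold, but every individual request now waits for the full batch, so its latency rises tenfold too.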

Estimation · Expert
Estimating System Availability

A system has 3 independent servers, each with 99.9% uptime. What is the approximate overall system availability if it requires at least one server to be up?

A. 99.9999%
B. 99.999%
C. 99.0%
D. 99.7%
💡 Hint

Calculate the chance all servers fail simultaneously, then subtract from 100%.
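The hint's calculation can be sketched directly, assuming server failures are independent: the system is down only when all replicas are down at the same time.

```python
def parallel_availability(per_server, n):
    """Availability of n redundant servers: the system is up unless
    every server is down simultaneously (assumes independent failures)."""
    p_all_down = (1 - per_server) ** n
    return 1 - p_all_down

# Three servers, each 99.9% available:
# P(all down) = 0.001 ** 3 = 1e-9, so availability ≈ 0.999999999
a = parallel_availability(0.999, 3)
```

Each extra replica multiplies the downtime probability by another factor of 0.001, which is why redundancy adds "nines" so quickly.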