
Concurrency considerations in LLD - System Design Exercise

Design: Concurrency Management System
This design focuses on concurrency control mechanisms and resource access management; it does not cover network protocols or UI design.
Functional Requirements
FR1: Allow multiple users or processes to access shared resources safely
FR2: Prevent data corruption due to simultaneous updates
FR3: Ensure system remains responsive under concurrent load
FR4: Support locking mechanisms to control access
FR5: Handle deadlocks and race conditions gracefully
Non-Functional Requirements
NFR1: Support up to 1000 concurrent users/processes
NFR2: Response time for resource access requests should be under 200ms
NFR3: System availability target is 99.9% uptime
NFR4: Minimal overhead added by concurrency controls
Think Before You Design
Questions to Ask
❓ Is this a single-process, multi-threaded system, or must locks be coordinated across distributed processes?
❓ What is the expected read/write ratio on the shared resources?
❓ What lock granularity is needed: one lock per resource, or coarser resource groups?
❓ Should waiting requests be served FIFO, or do some clients have priority?
❓ On deadlock, should the system abort a victim, rely on timeouts, or prevent deadlocks up front (e.g. via lock ordering)?
Key Components
Lock manager to grant and release locks
Resource manager to track resource states
Deadlock detection and resolution module
Queue or scheduler for managing waiting requests
Logging for audit and debugging
Design Patterns
Pessimistic locking
Optimistic concurrency control
Two-phase locking
Deadlock detection algorithms
Wait-die and wound-wait schemes
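As a concrete contrast to pessimistic locking, optimistic concurrency control can be sketched as a version-checked commit: read a version, compute, and commit only if nobody else committed in between. The `VersionedRecord` class and its names are illustrative assumptions, not part of the exercise:

```python
import threading

class VersionedRecord:
    """Optimistic concurrency control sketch: writers read a version,
    compute, then commit only if the version is unchanged
    (compare-and-swap style). Illustrative helper, names are assumptions."""

    def __init__(self, value):
        self._lock = threading.Lock()  # guards only the brief commit step
        self.value = value
        self.version = 0

    def read(self):
        with self._lock:
            return self.value, self.version

    def try_commit(self, new_value, expected_version):
        # Succeeds only if no other writer committed in the meantime.
        with self._lock:
            if self.version != expected_version:
                return False  # conflict: caller must retry
            self.value = new_value
            self.version += 1
            return True

def increment(record):
    # Retry loop typical of optimistic schemes: cheap when contention is low.
    while True:
        value, version = record.read()
        if record.try_commit(value + 1, version):
            return

record = VersionedRecord(0)
threads = [threading.Thread(target=increment, args=(record,)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(record.value)  # 8: every increment eventually commits
```

The trade-off to mention: optimistic control avoids holding locks during the computation but wastes work under high contention, where pessimistic locking wins.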
Reference Architecture
  +-------------------+       +-------------------+       +-------------------+
  |   Client/Process  | <---> |   Lock Manager    | <---> | Resource Manager  |
  +-------------------+       +-------------------+       +-------------------+
           |                          |                           |
           |                          |                           |
           |                          v                           |
           |                 +-------------------+                |
           |                 | Deadlock Detector |                |
           |                 +-------------------+                |
           |                          |                           |
           +------------------------------------------------------+
Components
Client/Process: any programming environment; requests access to shared resources
Lock Manager: in-memory lock table; grants and releases locks to control resource access
Resource Manager: data store or in-memory state; tracks the current state and ownership of resources
Deadlock Detector: graph-based detection algorithm; detects cycles in the wait-for graph and triggers resolution
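The Deadlock Detector's cycle check can be sketched as a depth-first search over the wait-for graph. The graph representation (`{client: set of clients it waits on}`) and the function name are assumptions for illustration:

```python
def find_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {client: {clients it waits on}}.
    Returns one cycle as a list of clients, or None. Illustrative sketch only."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on current path / done
    color = {c: WHITE for c in wait_for}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for nxt in wait_for.get(node, ()):
            if color.get(nxt, WHITE) == GRAY:       # back edge -> cycle found
                return stack[stack.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                found = dfs(nxt)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for client in wait_for:
        if color[client] == WHITE:
            cycle = dfs(client)
            if cycle:
                return cycle
    return None

# A waits on B, B waits on C, C waits on A: a classic deadlock
print(find_cycle({"A": {"B"}, "B": {"C"}, "C": {"A"}}))  # ['A', 'B', 'C', 'A']
print(find_cycle({"A": {"B"}, "B": set()}))              # None
```

Once a cycle is returned, victim selection (e.g. youngest transaction, fewest locks held) breaks the deadlock by releasing that client's locks.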
Request Flow
1. Client requests access to a resource from Lock Manager
2. Lock Manager checks if resource is free or locked
3. If free, Lock Manager grants lock and updates Resource Manager
4. If locked, Client is queued and waits
5. Deadlock Detector periodically checks for cycles in waiting clients
6. If a deadlock is detected, Deadlock Detector selects a victim and releases its locks
7. Client releases the lock after work is done; Lock Manager updates Resource Manager
8. Next waiting client is granted lock
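Steps 1-4 and 7-8 of this flow can be sketched as a minimal exclusive-lock manager with a FIFO wait queue. This is a single-process sketch using Python `threading`; the class and method names are assumptions, and the deadlock handling of steps 5-6 is deliberately omitted:

```python
import threading
from collections import defaultdict, deque

class LockManager:
    """Exclusive-lock manager sketch: free resources are granted immediately,
    others queue FIFO, and release hands the lock to the next waiter.
    Illustrative only; no deadlock detection, no read/write lock modes."""

    def __init__(self):
        self._cond = threading.Condition()
        self._owner = {}                     # resource_id -> client_id
        self._queue = defaultdict(deque)     # resource_id -> waiting client_ids

    def acquire(self, resource_id, client_id):
        with self._cond:
            self._queue[resource_id].append(client_id)
            # Wait until the resource is free AND we are first in line (fairness).
            while resource_id in self._owner or self._queue[resource_id][0] != client_id:
                self._cond.wait()
            self._queue[resource_id].popleft()
            self._owner[resource_id] = client_id

    def release(self, resource_id, client_id):
        with self._cond:
            if self._owner.get(resource_id) != client_id:
                raise ValueError("release by non-owner")
            del self._owner[resource_id]
            self._cond.notify_all()          # wake waiters; head of queue proceeds

lm = LockManager()
order = []

def worker(name):
    lm.acquire("row-42", name)
    order.append(name)                       # critical section
    lm.release("row-42", name)

threads = [threading.Thread(target=worker, args=(f"c{i}",)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(order))  # ['c0', 'c1', 'c2', 'c3']: each client held the lock once
```

A production version would add lock modes (shared vs exclusive), timeouts on `wait()`, and the deadlock detector feeding victim aborts back into `release`.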
Database Schema
Entities:
- Resource: id, state, owner_lock_id
- Lock: id, resource_id, client_id, lock_type (read/write), status
- Client: id, metadata
- WaitForGraph: edges representing waiting relationships between clients
Relationships:
- One Resource can have multiple Locks (shared or exclusive)
- Each Lock belongs to one Client
- WaitForGraph edges connect Clients waiting for Locks held by others
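One possible in-memory rendering of these entities, with field names taken from the schema; the types, enum values, and defaults are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class LockType(Enum):
    READ = "read"
    WRITE = "write"

@dataclass
class Client:
    id: str
    metadata: dict = field(default_factory=dict)

@dataclass
class Lock:
    id: str
    resource_id: str
    client_id: str
    lock_type: LockType
    status: str = "granted"        # e.g. granted / waiting; values are assumptions

@dataclass
class Resource:
    id: str
    state: str = "free"            # example states; schema only says "state"
    owner_lock_id: Optional[str] = None

# WaitForGraph as adjacency: client_id -> set of client_ids it waits on
wait_for = {}
```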
Scaling Discussion
Bottlenecks
Lock Manager becomes a single point of contention under high concurrency
Deadlock detection overhead increases with number of clients and locks
Resource Manager state updates may slow down with many resources
Queue length grows causing increased wait times
Solutions
Partition Lock Manager by resource groups to distribute load
Use incremental or on-demand deadlock detection to reduce overhead
Cache resource states and batch updates to Resource Manager
Implement priority queues and timeout mechanisms to reduce wait times
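The "partition the Lock Manager by resource groups" idea can be sketched as lock striping: each resource hashes to one of N independent shards, so unrelated resources never contend on the same internal lock. The class and method names here are illustrative assumptions:

```python
import threading
from zlib import crc32

class StripedLockManager:
    """Lock-striping sketch: resource ids hash to one of N shard locks,
    distributing contention across shards. Illustrative only."""

    def __init__(self, shards=16):
        self._shards = [threading.Lock() for _ in range(shards)]

    def locked(self, resource_id):
        # Returns the shard lock for this resource; usable as a context manager.
        return self._shards[crc32(resource_id.encode()) % len(self._shards)]

lm = StripedLockManager()
with lm.locked("order:1001"):
    pass  # critical section for whatever shares this shard
```

The trade-off worth stating in an interview: hash collisions cause occasional false contention between unrelated resources; more shards reduce that at the cost of memory and of making multi-resource operations (which must take several shard locks in a fixed order) slightly more complex.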
Interview Tips
Time: Spend 10 minutes understanding concurrency challenges and clarifying requirements, 20 minutes designing components and data flow, 10 minutes discussing scaling and trade-offs, 5 minutes summarizing.
Explain why concurrency control is needed to prevent data corruption
Discuss pros and cons of pessimistic vs optimistic locking
Describe how deadlocks occur and how to detect and resolve them
Show understanding of scalability challenges and partitioning
Mention importance of balancing performance and correctness