DynamoDB capacity modes (on-demand, provisioned) in AWS - Time & Space Complexity
When you use DynamoDB, the capacity mode you choose determines how throughput is allocated to your table and what happens when traffic grows.
We want to understand how the number of requests affects the work DynamoDB does in each mode.
Analyze the time complexity of handling multiple read requests in different capacity modes.
# Provisioned mode example: set fixed read/write capacity first, then write an item
aws dynamodb update-table --table-name MyTable --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
aws dynamodb put-item --table-name MyTable --item '{"ID": {"S": "1"}}'
# On-demand mode example: switch the table to pay-per-request billing, then write an item
aws dynamodb update-table --table-name MyTable --billing-mode PAY_PER_REQUEST
aws dynamodb put-item --table-name MyTable --item '{"ID": {"S": "2"}}'
# No capacity units are set; DynamoDB scales automatically
These commands write items into tables configured with provisioned and on-demand capacity modes.
Look at what happens when many requests come in.
- Primary operation: Handling each read or write request.
- How many times: Once per request, so n requests mean n operations.
- Dominant factor: The number of requests determines how many operations DynamoDB must process.
As you send more requests, DynamoDB processes more operations.
| Input Size (n) | Approx. API Calls / Operations |
|---|---|
| 10 | 10 requests handled |
| 100 | 100 requests handled |
| 1000 | 1000 requests handled |
Each request adds one more operation, so the work grows steadily with the number of requests.
Time Complexity: O(n)
This means the time to handle requests grows directly with how many requests you send.
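To make the linear growth concrete, here is a small shell sketch. It makes no AWS calls; the loop simply stands in for individual requests, counting one unit of work per request:

```shell
#!/bin/sh
# Simulate handling n requests, one operation each (O(n) total work).
simulate() {
  n=$1
  ops=0
  i=1
  while [ "$i" -le "$n" ]; do
    ops=$((ops + 1))   # one unit of work per request
    i=$((i + 1))
  done
  echo "$n requests -> $ops operations"
}

simulate 10     # 10 requests -> 10 operations
simulate 100    # 100 requests -> 100 operations
simulate 1000   # 1000 requests -> 1000 operations
```

Doubling the number of requests doubles the count, which is exactly what O(n) means in practice.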
[X] Wrong: "On-demand mode handles any number of requests instantly without delay."
[OK] Correct: Even on-demand mode processes each request one by one, so more requests still mean more work and time.
Understanding how request volume affects DynamoDB helps you design systems that stay fast and reliable as they grow.
"What if we batch multiple requests together? How would the time complexity change?"
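One way to reason about this question: DynamoDB's BatchWriteItem accepts up to 25 items per call and BatchGetItem up to 100, so batching divides the number of API calls by a constant factor. The total work stays O(n), because every item must still be processed. A quick sketch of the arithmetic:

```shell
#!/bin/sh
# API calls needed to process n items at batch_size items per call
# (BatchWriteItem allows up to 25 items; BatchGetItem up to 100).
batch_calls() {
  n=$1
  batch_size=$2
  echo $(( (n + batch_size - 1) / batch_size ))  # ceiling division
}

batch_calls 1000 25    # prints 40: fewer calls, but still O(n) item work
batch_calls 1000 100   # prints 10
```

Constant-factor savings like this matter for latency and request overhead, but they do not change the asymptotic complexity.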