# Why DynamoDB Exists: A Performance Analysis
We want to understand why DynamoDB was created by looking at how it handles growing amounts of data and requests.
What question are we trying to answer? How does DynamoDB keep things fast even when data grows large?
Analyze the time complexity of a simple DynamoDB query operation.
```javascript
// Assumes the AWS SDK for JavaScript v2 and a configured low-level client:
// const AWS = require("aws-sdk");
// const dynamodb = new AWS.DynamoDB();

const params = {
  TableName: "Users",
  KeyConditionExpression: "UserId = :id",
  ExpressionAttributeValues: {
    ":id": { S: "123" }  // low-level attribute format: S = string
  }
};

const result = await dynamodb.query(params).promise();
```
This code queries the Users table for every item whose partition key UserId equals "123".
Identify the operations that repeat: loops, recursion, or array traversals.
- Primary operation: DynamoDB looks up items by key using an index.
- How many times: It accesses only the matching items, not the whole table.
When you ask for data by key, DynamoDB finds it quickly no matter how big the table is.
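The lookup pattern can be sketched with a plain JavaScript Map standing in for DynamoDB's partition-key index. This is a toy model of the access pattern, not DynamoDB's actual storage engine, and the item shapes are illustrative:

```javascript
// Toy model of a key-indexed table: a Map from partition key to its items.
function buildIndex(items) {
  const index = new Map();
  for (const item of items) {
    if (!index.has(item.UserId)) index.set(item.UserId, []);
    index.get(item.UserId).push(item);
  }
  return index;
}

// Key-based query: one hash lookup, then touch only the k matching items.
function queryByUserId(index, userId) {
  return index.get(userId) ?? [];
}

// Whether the table holds 10 rows or 10 million, the query cost tracks
// the number of matching items (k), not the table size (n).
const items = [
  { UserId: "123", OrderId: "a" },
  { UserId: "123", OrderId: "b" },
  { UserId: "456", OrderId: "c" },
];
const index = buildIndex(items);
console.log(queryByUserId(index, "123").length); // 2 matching items
```

Adding thousands more items under other UserId values would not change the work done by `queryByUserId("123")`, which is the property the table below illustrates.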
| Total items in table (n) | Approx. operations for a query matching k = 10 items |
|---|---|
| 10 | ~10 lookups (one per matching item) |
| 100 | ~10 lookups (still fast) |
| 1000 | ~10 lookups (still fast) |
Pattern observation: The number of operations depends mostly on how many items you ask for, not the total table size.
Time Complexity: O(k), where k is the number of items the query returns.
This means DynamoDB can find your data in time proportional to the number of matching items, not the total database size.
[X] Wrong: "DynamoDB slows down as the table gets bigger because it scans all data every time."
[OK] Correct: DynamoDB uses indexes to jump directly to the data you want, so it does not scan the whole table.
Understanding why DynamoDB is designed for fast lookups helps you explain how databases handle big data efficiently in real projects.
"What if we changed the query to scan the whole table instead of using a key? How would the time complexity change?"