Index capacity and cost in DynamoDB - Time & Space Complexity
When you query through an index in DynamoDB, the cost and capacity consumed scale with your data. This section analyzes how adding more data affects the work DynamoDB does when serving an index query.
Analyze the time complexity of the following DynamoDB query using a Global Secondary Index (GSI).
```javascript
// AWS SDK for JavaScript v2, low-level DynamoDB client
const params = {
  TableName: "Orders",
  IndexName: "CustomerIndex",                   // GSI partition-keyed on CustomerId
  KeyConditionExpression: "CustomerId = :cid",
  ExpressionAttributeValues: {
    ":cid": { S: "12345" }                      // low-level attribute-value format
  }
};

const result = await dynamodb.query(params).promise();
```
This code queries the "Orders" table using a GSI on CustomerId to find all orders for a customer.
Look for repeated work done by DynamoDB when processing the query.
- Primary operation: Reading index entries matching CustomerId.
- How many times: Once per matching item in the index.
As the number of orders for a customer grows, DynamoDB must read more index entries.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Reads 10 index entries |
| 100 | Reads 100 index entries |
| 1000 | Reads 1000 index entries |
Pattern observation: The work grows directly with the number of matching items.
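The linear pattern in the table above can be sketched with a small in-memory mock. This is a hypothetical model of the read count only, not the real DynamoDB engine; the `queryMockIndex` and `makeOrders` names are illustrative inventions.

```javascript
// Hypothetical in-memory mock -- not the real DynamoDB engine -- that only
// models the read count for a GSI partition holding one customer's orders.
function queryMockIndex(indexEntries, customerId) {
  let readOperations = 0;
  const results = [];
  for (const entry of indexEntries) {
    if (entry.CustomerId === customerId) {
      readOperations += 1; // one index-entry read per matching item
      results.push(entry);
    }
  }
  return { results, readOperations };
}

// Hypothetical helper: n orders, all for the same customer.
const makeOrders = (n) =>
  Array.from({ length: n }, (_, i) => ({ CustomerId: "12345", OrderId: i }));

console.log(queryMockIndex(makeOrders(10), "12345").readOperations);  // 10
console.log(queryMockIndex(makeOrders(100), "12345").readOperations); // 100
```

Ten times the matching orders means ten times the reads, which is exactly the linear growth the table shows.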
Time Complexity: O(n), where n is the number of items matching the key condition.
The query time (and the read capacity consumed) grows linearly with the number of matching items in the index.
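In practice, a single Query call returns at most 1 MB of data per page, so fetching all n matches means following `LastEvaluatedKey` in a loop. The sketch below assumes a pluggable `queryPage` function (a stand-in for `dynamodb.query(...).promise()`) and simplifies `ExclusiveStartKey` to a number; in real DynamoDB it is a key map.

```javascript
// Sketch of draining every page of a Query. The loop as a whole still does
// O(n) work across all pages -- pagination changes latency shape, not complexity.
async function queryAllPages(queryPage, params) {
  const items = [];
  let lastKey;
  do {
    const page = await queryPage({ ...params, ExclusiveStartKey: lastKey });
    items.push(...page.Items);
    lastKey = page.LastEvaluatedKey; // undefined when no pages remain
  } while (lastKey !== undefined);
  return items;
}

// Hypothetical mock pager: serves `pageSize` items per call, mimicking the
// paginated responses a real index query would produce.
function makeMockPager(totalItems, pageSize = 25) {
  return async ({ ExclusiveStartKey = 0 }) => {
    const end = Math.min(ExclusiveStartKey + pageSize, totalItems);
    return {
      Items: Array.from({ length: end - ExclusiveStartKey }, (_, i) => ({
        OrderId: ExclusiveStartKey + i,
      })),
      LastEvaluatedKey: end < totalItems ? end : undefined,
    };
  };
}
```

With 60 matching items and 25-item pages, the loop makes three calls and returns all 60 items, so total work tracks n, not the page count alone.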
[X] Wrong: "Using an index makes queries always super fast regardless of data size."
[OK] Correct: Even with indexes, DynamoDB reads each matching item, so more matches mean more work.
Understanding how index queries scale helps you design efficient data access patterns and explain your choices clearly.
"What if we added a filter expression after the query? How would that affect the time complexity?"