Scan pagination in DynamoDB - Time & Space Complexity
When we use scan pagination in DynamoDB, we want to understand how the total work grows as the table gets larger.
We ask: how does the work increase when we read more pages of data?
Analyze the time complexity of the following code snippet.
```javascript
let lastEvaluatedKey = null;
do {
  const params = { TableName: "MyTable", Limit: 100 };
  if (lastEvaluatedKey) {
    params.ExclusiveStartKey = lastEvaluatedKey;
  }
  const data = await dynamodb.scan(params).promise();
  lastEvaluatedKey = data.LastEvaluatedKey;
  // process data.Items
} while (lastEvaluatedKey);
```
This code reads a DynamoDB table in pages of 100 items until all items are read.
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Scanning a page of items from the table.
- How many times: Once per page, until all items are read.
Each page reads a fixed number of items (100). As the total items grow, the number of pages grows roughly proportionally.
| Input Size (n) | Approx. Operations (pages) |
|---|---|
| 10 | 1 (less than one full page) |
| 100 | 1 page |
| 1000 | 10 pages |
Pattern observation: The number of scan calls grows linearly with the total items.
Time Complexity: O(n)
This means the time to scan all items grows directly with how many items are in the table. Space, by contrast, stays bounded: each iteration holds at most one page (100 items) in memory, so if each page is processed and discarded, space is O(1) with respect to n.
[X] Wrong: "Scan pagination time stays the same no matter how many items are in the table."
[OK] Correct: Each page reads a fixed number of items, so more items mean more pages and more scan calls, increasing total time.
Understanding how scan pagination scales helps you explain how to handle large datasets efficiently and shows you know how to reason about costs in real systems.
"What if we changed the page size from 100 to 1000? How would the time complexity change?"