Unprocessed items handling in DynamoDB - Time & Space Complexity
When working with DynamoDB batch operations, some items may not process immediately and need retrying.
We want to understand how the time to finish depends on how many items are unprocessed.
Analyze the time complexity of the following code snippet.
```javascript
// Assumes an AWS SDK v2 DocumentClient named `dynamodb`; in v2,
// batchWrite returns a Request, so .promise() is needed before awaiting.
let unprocessedItems = initialBatch;
while (unprocessedItems && Object.keys(unprocessedItems).length > 0) {
  const response = await dynamodb
    .batchWrite({ RequestItems: unprocessedItems })
    .promise();
  // Retry only the items DynamoDB reported back as unprocessed.
  unprocessedItems = response.UnprocessedItems || {};
}
```
This code sends a batch write request and, on each iteration, retries only the items DynamoDB reported as unprocessed, until none remain.
- Primary operation: the `batchWrite` call inside the while loop.
- How many times: it repeats until `UnprocessedItems` is empty, which depends on how many items fail in each round.
Each call can write up to 25 items at once (the BatchWriteItem limit), but throttling or exceeded capacity can leave some of them unprocessed.
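To make the retry behavior concrete, here is a small self-contained simulation (no AWS SDK involved; the function name and the failure rate are made-up illustration values). Each round, a fixed fraction of the remaining items comes back "unprocessed" and must be retried, and we count how many rounds it takes to drain the batch:

```javascript
// Hypothetical simulation: each round, `failRate` of the remaining
// items come back "unprocessed" and must be retried.
function countRetryRounds(itemCount, failRate) {
  let remaining = itemCount;
  let rounds = 0;
  while (remaining > 0) {
    rounds += 1;
    // Items that fail this round; floor so the loop terminates.
    remaining = Math.floor(remaining * failRate);
  }
  return rounds;
}

console.log(countRetryRounds(25, 0.2)); // → 3 rounds: 25 → 5 → 1 → 0
```

With a constant failure rate the remaining count shrinks geometrically, so the round count grows slowly; in the worst case (everything throttled every time), rounds keep accumulating, which is why k is treated as its own variable.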
| Items (n) | Approx. retry rounds |
|---|---|
| 10 | 1 call, occasionally a few retries |
| 100 | 1 to several retries |
| 1000 | Many retries, depending on how many items fail each round |
Pattern observation: the number of rounds grows with how often items come back unprocessed, while the work per round shrinks as items succeed; more throttling means more rounds.
Time Complexity: O(k), where k is the number of retry rounds needed before `UnprocessedItems` is empty.
This means the total time grows linearly with the number of retry rounds, and k depends on throttling and available capacity, not directly on the item count alone.
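In practice, each retry round is usually paced with exponential backoff so the loop does not hammer a throttled table. A minimal sketch of the delay arithmetic (the helper name, base, and cap are arbitrary illustration choices, not an AWS recommendation):

```javascript
// Hypothetical backoff helper: the delay doubles each attempt,
// capped so a long retry streak never waits unboundedly.
function backoffDelayMs(attempt, baseMs = 100, capMs = 5000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// attempt 0 → 100 ms, attempt 3 → 800 ms, attempt 10 → capped at 5000 ms
```

Backoff changes the wall-clock time per round but not the O(k) round count itself, which is why the two are analyzed separately.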
[X] Wrong: "All items are processed in one batch call, so time is always constant."
[OK] Correct: Some items may fail and need retries, so the total time depends on how many unprocessed items appear and how many retries happen.
Understanding how retries affect time helps you explain real-world batch processing behavior clearly and confidently.
"What if the batch size limit changes? How would the time complexity change?"
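One way to reason about that question: BatchWriteItem accepts at most 25 put/delete requests per call, so n items require at least ceil(n / 25) calls before any retries are counted. A small sketch of that arithmetic (the helper name is made up for illustration):

```javascript
// Minimum number of batch calls for `itemCount` items, given a
// per-call limit (25 for DynamoDB's BatchWriteItem today). A larger
// limit lowers the call count, but the O(k) retry-round analysis
// is unchanged.
function batchCount(itemCount, batchLimit = 25) {
  return Math.ceil(itemCount / batchLimit);
}

console.log(batchCount(1000));      // → 40 calls at the current limit
console.log(batchCount(1000, 100)); // → 10 calls if the limit were 100
```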