Transactions vs. Batch Writes in DynamoDB: A Performance Comparison
When working with DynamoDB, it is useful to know how the time to run transactions or batch operations grows as you add more items. The goal here is to analyze the time complexity of transactional writes versus batch writes and understand how the cost scales with item count.
```javascript
// Transaction example
TransactWriteItems({
  TransactItems: [
    { Put: { TableName: 'MyTable', Item: {...} } },
    { Update: { TableName: 'MyTable', Key: {...}, UpdateExpression: '...' } },
    // more operations
  ]
})
```
```javascript
// Batch write example
BatchWriteItem({
  RequestItems: {
    'MyTable': [
      { PutRequest: { Item: {...} } },
      { DeleteRequest: { Key: {...} } },
      // more items
    ]
  }
})
```
The transaction runs up to 100 operations as a single all-or-nothing unit: either every write succeeds or none does. The batch writes up to 25 items in one call with no atomicity: items can fail individually, and failed items are returned as unprocessed for the caller to retry.
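The failure semantics are the key behavioral difference. A toy simulation in plain JavaScript (not the AWS SDK; `poison` is a made-up flag marking an item that would fail) sketches the contrast:

```javascript
// Toy model of transaction semantics: all-or-nothing.
// If any item would fail, nothing is written at all.
function transactWrite(store, items) {
  if (items.some((item) => item.poison)) {
    throw new Error('TransactionCanceledException (simulated)');
  }
  for (const item of items) store.set(item.key, item.value);
}

// Toy model of batch semantics: best-effort.
// Failed items are returned as "unprocessed" for the caller to retry.
function batchWrite(store, items) {
  const unprocessed = [];
  for (const item of items) {
    if (item.poison) unprocessed.push(item);
    else store.set(item.key, item.value);
  }
  return unprocessed;
}
```

With one failing item among three, the simulated transaction writes nothing, while the simulated batch writes two items and reports one unprocessed. Either way, the work done is proportional to the number of items.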
Look at what repeats as we add more items.
- Primary operation: Each item write or update inside the transaction or batch.
- How many times: Number of items in the transaction or batch.
As you add more items, the total work grows linearly: each additional item adds roughly the same amount of work.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 item operations |
| 100 | About 100 item operations |
| 1000 | About 1000 item operations |
Pattern observation: The time grows directly with the number of items processed.
Time Complexity: O(n)
This means the time to complete the operation grows in direct proportion to how many items you include.
[X] Wrong: "Transactions are always slower than batch writes because they do more work."
[OK] Correct: Both scale linearly with item count; transactions add a constant-factor overhead for atomicity (coordination and extra capacity consumption), not a different growth pattern.
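One concrete source of that overhead: DynamoDB bills transactional writes at twice the write capacity of standard writes, because each item is written twice under the hood (a prepare phase and a commit phase). A quick cost model, assuming items of at most 1 KB each:

```javascript
// Write-capacity-unit (WCU) cost for n items of <= 1 KB each.

// Standard writes (BatchWriteItem): 1 WCU per item.
function batchWcu(n) {
  return n;
}

// Transactional writes (TransactWriteItems): 2 WCUs per item,
// covering the prepare and commit phases.
function transactionWcu(n) {
  return 2 * n;
}
```

Both functions are linear in n; the transaction differs only by a constant factor of 2, which is exactly why the growth pattern, O(n), is the same.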
Understanding how these operations scale helps you explain trade-offs clearly and shows you know how to reason about performance in real systems.
"What if we split a large batch into multiple smaller batches? How would that affect the time complexity?"
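One way to reason about that question: BatchWriteItem accepts at most 25 items per call, so a large batch has to be chunked anyway. A hypothetical helper (not part of the SDK) that splits a request list into batch-sized chunks:

```javascript
// Split an array of write requests into chunks that fit the
// BatchWriteItem limit of 25 items per call.
function chunk(items, size = 25) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```

Splitting changes the number of API calls to ceil(n / 25) and adds a small per-call overhead, but every item is still processed exactly once, so the total work remains O(n).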