Consistent vs eventually consistent reads in DynamoDB - Performance Comparison
When reading data from DynamoDB, the consistency model you choose affects how long the read takes and what it costs. The goal here is to understand how the time to fetch an item changes (or doesn't) with the amount of data stored, and to analyze the time complexity of both read types.
```javascript
// Assumes the AWS SDK for JavaScript v2 with a configured DocumentClient
const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB.DocumentClient();

// Eventually consistent read (the default): may return slightly stale data
const params = {
  TableName: "MyTable",
  Key: { id: "123" },
  ConsistentRead: false
};
const data = await dynamodb.get(params).promise();

// Strongly consistent read: reflects all writes that succeeded before the read
const paramsStrong = {
  TableName: "MyTable",
  Key: { id: "123" },
  ConsistentRead: true
};
const dataStrong = await dynamodb.get(paramsStrong).promise();
```
This code fetches an item by key, once with eventual consistency and once with strong consistency.
Consider what happens behind the scenes during each read:
- Primary operation: Single key lookup in the database.
- How many times: Exactly once per read request.
Since each read fetches a single item by its key, the latency is dominated by network round-trips and request routing (a strongly consistent read is served by the partition's leader node, while an eventually consistent read can be served by any replica), not by how much data the table holds.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 1 single lookup |
| 100 | 1 single lookup |
| 1000 | 1 single lookup |
Pattern observation: The number of operations is constant (1 lookup per read), regardless of input size, confirming O(1).
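The pattern can be illustrated outside DynamoDB with a plain in-memory map. This is a sketch of the access pattern only, not DynamoDB's actual storage engine: however many items the table holds, fetching by key touches exactly one entry.

```javascript
// Illustrative sketch: a key lookup touches one entry regardless of table size.
// This models the access pattern only, not DynamoDB's storage internals.
function buildTable(n) {
  const table = new Map();
  for (let i = 0; i < n; i++) {
    table.set(String(i), { id: String(i), value: i });
  }
  return table;
}

function getItem(table, key) {
  // One hash lookup: O(1) on average, independent of table.size
  return table.get(key);
}

for (const n of [10, 100, 1000]) {
  console.log(n, getItem(buildTable(n), "5")); // same single lookup each time
}
```

Whether the map holds 10 or 1000 items, `getItem` performs one lookup, matching the table above.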
Time Complexity: O(1)
This means each read operation takes about the same time no matter the data size.
[X] Wrong: "Strongly consistent reads always take much longer because they scan the whole table."
[OK] Correct: Both read types fetch by key and never scan the table. A strongly consistent read does add overhead: it is served by the partition's leader node and consumes twice the read capacity of an eventually consistent read, so it may be slightly slower and costs more, but it is still a single O(1) lookup.
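The cost difference is concrete: DynamoDB bills reads in read capacity units (RCUs), where a strongly consistent read of an item up to 4 KB costs 1 RCU and an eventually consistent read costs 0.5 RCU. A small helper to compute this (the function name is mine; the pricing rule is DynamoDB's documented one):

```javascript
// RCUs for a single GetItem: item size rounded up to 4 KB units,
// halved for eventually consistent reads (DynamoDB's documented rule).
function readCapacityUnits(itemSizeBytes, consistentRead) {
  const units = Math.ceil(itemSizeBytes / 4096); // number of 4 KB blocks
  return consistentRead ? units : units / 2;
}

console.log(readCapacityUnits(3000, true));  // strongly consistent → 1
console.log(readCapacityUnits(3000, false)); // eventually consistent → 0.5
console.log(readCapacityUnits(9000, true));  // 9 KB rounds up to 3 blocks → 3
```

The complexity of the lookup is identical either way; only the billed capacity differs.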
Understanding how read consistency affects response time helps you explain the trade-offs clearly and demonstrates that you know how databases balance data freshness against speed.
"What if we changed from single item reads to scanning the whole table? How would the time complexity change?"
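To reason about that question: a Scan reads every item in the table, so the work grows linearly with item count, making it O(n) instead of O(1). A self-contained in-memory sketch of the difference (illustrative only, not the DynamoDB engine):

```javascript
// Illustrative sketch: a key lookup inspects one entry (O(1)),
// while a scan with a filter inspects every entry (O(n)).
function scanTable(table, predicate) {
  let examined = 0;
  const matches = [];
  for (const item of table.values()) {
    examined++; // every item is read, matching or not
    if (predicate(item)) matches.push(item);
  }
  return { examined, matches };
}

const table = new Map();
for (let i = 0; i < 1000; i++) {
  table.set(String(i), { id: String(i), value: i });
}

const result = scanTable(table, (item) => item.value === 5);
console.log(result.examined);      // all 1000 items examined → 1000
console.log(result.matches.length); // only one match found → 1
console.log(table.get("5"));        // the key lookup touches one entry
```

This mirrors DynamoDB's real behavior: Scan consumes capacity for every item it reads, even items the filter discards, which is why key-based access is preferred whenever possible.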