Consider the following DynamoDB operation using the AWS SDK:
const params = {
  TableName: 'Users',
  Key: { 'UserId': '123' },
  UpdateExpression: 'SET Age = :newAge',
  ConditionExpression: 'Age = :oldAge',
  ExpressionAttributeValues: {
    ':newAge': 30,
    ':oldAge': 25
  }
};
try {
  await dynamodb.update(params).promise();
  console.log('Update succeeded');
} catch (err) {
  console.log('Update failed:', err.code);
}

If the current Age of user 123 is 28, what will be printed?
Think about what happens when the condition in ConditionExpression is not met.
The update fails because the condition 'Age = :oldAge' is false (28 != 25). DynamoDB throws a ConditionalCheckFailedException, so the catch block runs and the program prints: Update failed: ConditionalCheckFailedException.
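The behavior can be reproduced without a live table by stubbing the client. The mock below is an illustrative assumption, not the real AWS SDK; it rejects the same way DynamoDB does when a ConditionExpression is not satisfied:

```javascript
// Mock client (illustrative, not the real AWS SDK): it rejects the same way
// DynamoDB does when a ConditionExpression is not satisfied.
const mockDynamodb = {
  update: () => ({
    promise: () =>
      Promise.reject(Object.assign(new Error('The conditional request failed'), {
        code: 'ConditionalCheckFailedException',
      })),
  }),
};

async function demo() {
  try {
    await mockDynamodb.update().promise();
    return 'Update succeeded';
  } catch (err) {
    return `Update failed: ${err.code}`;
  }
}

demo().then(console.log); // prints "Update failed: ConditionalCheckFailedException"
```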
When a DynamoDB query exceeds the provisioned throughput limits, what error code is returned by the service?
Look for the specific error related to throughput limits in DynamoDB.
DynamoDB returns ProvisionedThroughputExceededException when the request rate is too high for the provisioned capacity.
Does the following retry logic correctly implement exponential backoff with jitter for retrying a DynamoDB operation?
async function retryOperation(operation, maxRetries) {
  let retries = 0;
  while (retries < maxRetries) {
    try {
      return await operation();
    } catch (err) {
      if (err.code === 'ProvisionedThroughputExceededException') {
        const delay = Math.pow(2, retries) * 100 + Math.random() * 100;
        await new Promise(res => setTimeout(res, delay));
        retries++;
      } else {
        throw err;
      }
    }
  }
  throw new Error('Max retries reached');
}
Check if the delay increases exponentially and includes randomness.
Yes. The code waits 2^retries * 100 ms plus a random 0-100 ms of jitter, which is exponential backoff with jitter. It retries only on ProvisionedThroughputExceededException, rethrows any other error immediately, and gives up with its own error after maxRetries attempts.
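The retry path can be exercised end to end against a mock operation. The function below is the one from the question; the flaky operation and its failure count are illustrative assumptions for the demo:

```javascript
// The retryOperation from the question, unchanged.
async function retryOperation(operation, maxRetries) {
  let retries = 0;
  while (retries < maxRetries) {
    try {
      return await operation();
    } catch (err) {
      if (err.code === 'ProvisionedThroughputExceededException') {
        const delay = Math.pow(2, retries) * 100 + Math.random() * 100;
        await new Promise((res) => setTimeout(res, delay));
        retries++;
      } else {
        throw err;
      }
    }
  }
  throw new Error('Max retries reached');
}

// Mock operation (an assumption for the demo): throttled twice, then succeeds.
let attempts = 0;
const flakyOperation = async () => {
  attempts++;
  if (attempts <= 2) {
    throw Object.assign(new Error('Throttled'), {
      code: 'ProvisionedThroughputExceededException',
    });
  }
  return 'ok';
};

const resultPromise = retryOperation(flakyOperation, 5);
resultPromise.then((result) => console.log(result, 'after', attempts, 'attempts'));
// prints "ok after 3 attempts"
```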
Consider this retry loop for a DynamoDB operation:
let retries = 0;
while (retries < 5) {
  try {
    await dynamodb.putItem(params).promise();
    break;
  } catch (err) {
    if (err.code === 'ProvisionedThroughputExceededException') {
      retries++;
    }
  }
}

Why might this loop never exit even after 5 retries?
Think about which errors actually increment retries, and what happens when retries fire with no delay.
Two problems. First, any error other than ProvisionedThroughputExceededException is silently swallowed: retries is never incremented and the error is never rethrown, so the loop can spin forever. Second, even for throttling errors there is no delay between attempts, so immediate retries keep hitting the same exhausted capacity without giving DynamoDB time to recover. The loop also exits silently after 5 failed attempts instead of surfacing an error.
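One way to fix the loop is sketched below, under the assumption of a client object exposing a putItem(params).promise() shape; the mock client stands in for DynamoDB and is purely illustrative:

```javascript
// Sketch of a fixed loop: back off between attempts, rethrow errors the loop
// does not handle, and fail loudly when retries are exhausted.
async function putWithRetry(client, params, maxRetries = 5) {
  for (let retries = 0; retries < maxRetries; retries++) {
    try {
      await client.putItem(params).promise();
      return; // success
    } catch (err) {
      if (err.code !== 'ProvisionedThroughputExceededException') {
        throw err; // never swallow unknown errors, or the loop can spin forever
      }
      // Exponential backoff with jitter gives DynamoDB time to recover.
      const delay = Math.pow(2, retries) * 100 + Math.random() * 100;
      await new Promise((res) => setTimeout(res, delay));
    }
  }
  throw new Error('Max retries reached'); // surface failure instead of exiting silently
}

// Mock client (illustrative): throttled twice, then succeeds.
let calls = 0;
const mockClient = {
  putItem: () => ({
    promise: () => {
      calls++;
      return calls < 3
        ? Promise.reject(Object.assign(new Error('Throttled'), {
            code: 'ProvisionedThroughputExceededException',
          }))
        : Promise.resolve({});
    },
  }),
};

const done = putWithRetry(mockClient, {});
done.then(() => console.log('succeeded after', calls, 'calls')); // "succeeded after 3 calls"
```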
When retrying write operations in DynamoDB due to transient errors, what is the recommended approach to ensure idempotency and avoid duplicate writes?
Think about how to uniquely identify each write attempt.
Using a unique idempotency token in a condition expression ensures that retries do not create duplicate items or overwrite unintended data.