Lambda function with DynamoDB - Time & Space Complexity
When a Lambda function interacts with DynamoDB, it's important to understand how its running time changes as the data grows. In other words, we want to know how the number of operations grows as the database or input size increases.
Analyze the time complexity of the following code snippet.
```javascript
// AWS SDK for JavaScript v2
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  const params = {
    TableName: 'Users',
    Key: { userId: event.userId } // primary-key lookup
  };
  // A single GetItem request fetches exactly one item by key
  const data = await dynamodb.get(params).promise();
  return data.Item;
};
```
This Lambda function fetches a single user record from DynamoDB by its userId key.
Identify the repeated work: loops, recursion, or array traversals.
- Primary operation: A single DynamoDB GetItem request to fetch one record.
- How many times: Exactly once per Lambda invocation.
Because the function fetches one item by its key, the running time does not grow with the total number of items in the table.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 1 Get request |
| 100 | 1 Get request |
| 1000 | 1 Get request |
Pattern observation: The number of operations stays the same no matter how many records are in the table.
Time Complexity: O(1)
This means the time to get one item stays constant, no matter how big the database grows.
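This constant-time behavior can be illustrated with a plain JavaScript `Map` standing in for the keyed table. This is a simplified analogy, not the AWS SDK itself; `buildTable` and `getUser` are illustrative names. The point is that a single keyed lookup is one operation whether the "table" holds 10 or 1,000 records.

```javascript
// Analogy only: a Map models a table indexed by primary key.
function buildTable(size) {
  const table = new Map();
  for (let i = 0; i < size; i++) {
    table.set(`user-${i}`, { userId: `user-${i}`, name: `User ${i}` });
  }
  return table;
}

function getUser(table, userId) {
  // One keyed lookup, regardless of table.size -- O(1)
  return table.get(userId);
}

for (const size of [10, 100, 1000]) {
  const user = getUser(buildTable(size), 'user-5');
  console.log(`table size ${size}: 1 lookup, found ${user.userId}`);
}
```

No matter how large the table built by `buildTable` is, `getUser` performs exactly one lookup, mirroring the GetItem pattern in the table above.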
[X] Wrong: "Getting an item from DynamoDB takes longer as the table gets bigger."
[OK] Correct: DynamoDB uses the partition key to locate an item directly, so fetching one item by key stays fast and does not slow down as the table grows.
Understanding how database calls scale helps you write efficient serverless functions and shows you know how to handle data growth gracefully.
"What if the Lambda function scanned the entire table instead of getting by key? How would the time complexity change?"
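As a hint for reasoning about this question: a Scan must read every item, paginating with `LastEvaluatedKey` until the table is exhausted, so the number of requests grows with the item count. Here is a minimal sketch of that paging behavior against a simulated table (`countScanPages` and its parameters are illustrative, not SDK calls):

```javascript
// Simulated Scan: each "page" returns up to pageSize items.
// countScanPages models how many requests a full-table Scan needs -- O(n).
function countScanPages(itemCount, pageSize) {
  let pages = 0;
  let remaining = itemCount;
  do {
    pages++;                          // one Scan request per page
    remaining -= Math.min(pageSize, remaining);
  } while (remaining > 0);            // real Scans continue while LastEvaluatedKey is set

  return pages;
}

console.log(countScanPages(10, 100));   // 1 request
console.log(countScanPages(1000, 100)); // 10 requests
console.log(countScanPages(5000, 100)); // 50 requests
```

Unlike the GetItem case, the request count here scales linearly with the number of items, which is why a full-table Scan is O(n).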