HTTP trigger functions in Firebase - Time & Space Complexity
When using HTTP trigger functions in Firebase, it's important to understand how the time to handle a request grows as your app stores more data or serves more users.
In other words, we want to know how the function's execution time changes as it processes different amounts of data or requests.
Analyze the time complexity of the following Firebase HTTP trigger function.
```javascript
exports.getUsers = functions.https.onRequest(async (req, res) => {
  // Fetch every document in the 'users' collection.
  const snapshot = await admin.firestore().collection('users').get();
  const users = [];
  // One iteration per document in the collection.
  snapshot.forEach(doc => {
    users.push(doc.data());
  });
  res.send(users);
});
```
This function fetches all documents from the 'users' collection and sends them back in the response.
Look at what repeats as the input grows.
- Primary operation: Looping through each user document in the collection.
- How many times: Once for every user document stored in the database.
As the number of users increases, the function spends more time reading and processing each user.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 document reads and pushes |
| 100 | 100 document reads and pushes |
| 1000 | 1000 document reads and pushes |
Pattern observation: The work grows directly with the number of users; doubling users doubles the work.
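The counts in the table can be sketched in plain JavaScript, with no Firebase needed: a mock snapshot of n documents stands in for the Firestore result, and the loop body runs exactly once per document, so the operation count equals n.

```javascript
// Simulate the per-document work of getUsers with a mock snapshot.
// Each element stands in for one Firestore document.
function processSnapshot(docs) {
  const users = [];
  let operations = 0;
  docs.forEach(doc => {
    users.push(doc);  // same per-document work as getUsers
    operations += 1;  // count one operation per document
  });
  return { users, operations };
}

// Doubling the input doubles the work: linear growth, O(n).
for (const n of [10, 100, 1000]) {
  const docs = Array.from({ length: n }, (_, i) => ({ id: i }));
  const { operations } = processSnapshot(docs);
  console.log(`n=${n} -> ${operations} operations`);
}
```

Running this prints 10, 100, and 1000 operations for the three input sizes, matching the table above.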
Time Complexity: O(n)
This means the function's execution time grows linearly with the number of user documents.
[X] Wrong: "The function runs in constant time because it just sends a response once."
[OK] Correct: Even though the response is sent once, the function must first read and process each user document, so the time depends on how many users there are.
Understanding how a function's execution time grows with data size helps you write efficient backend code that scales as your app grows.
What if we changed the function to only fetch users with a specific property instead of all users? How would the time complexity change?
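One way to reason about it: with a Firestore query such as `.where('active', '==', true)` ('active' is a hypothetical user field, used here only for illustration), Firestore returns just the m matching documents, so the loop runs m times instead of n. That is O(m), which is still O(n) in the worst case where every user matches. A plain-JavaScript sketch of the idea:

```javascript
// Sketch: a server-side filter returns only matching documents,
// so the handler's loop runs m (matches) times, not n (total) times.
// 'active' is a hypothetical user property used for illustration.
function fetchFilteredUsers(allDocs, predicate) {
  // This filtering is done by Firestore's index, not inside our function.
  const matches = allDocs.filter(predicate);
  const users = [];
  matches.forEach(doc => users.push(doc)); // our loop: m iterations
  return users;
}

// 1000 users, of which every 10th is active.
const allUsers = Array.from({ length: 1000 }, (_, i) => ({ id: i, active: i % 10 === 0 }));
const activeUsers = fetchFilteredUsers(allUsers, u => u.active);
console.log(activeUsers.length); // 100 of 1000 documents processed
```

So filtering reduces the constant workload in practice, but the complexity class stays linear in the number of documents the query returns.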