Why Serverless Functions Matter in GCP: Performance Analysis
We want to understand how the time it takes to run serverless functions changes as the workload grows. Specifically, how does the number of function invocations grow as the input size increases?
Analyze the time complexity of the following operation sequence.
```javascript
// Deploy a serverless function (illustrative pseudocode, not the literal
// Google Cloud client library API)
const myFunction = gcp.functions.create({
  name: 'processData',
  runtime: 'nodejs18',
  trigger: 'http',
  entryPoint: 'handleRequest'
});

// Invoke the function once per input item
for (let i = 0; i < n; i++) {
  myFunction.invoke({ data: input[i] });
}
```
This sequence deploys a serverless function once, then invokes it once for each item in the input list.
Identify the API calls, resource provisioning steps, and data transfers that repeat.
- Primary operation: Invoking the serverless function for each input item.
- How many times: Once per input item, so n times.
Each new input item adds exactly one more function call, so the total number of calls grows directly with input size.
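The counting argument above can be sketched directly. This is a minimal mock (a hypothetical `countInvocations` helper, not the GCP SDK) that replaces the real `invoke` with a counter to show the loop issues exactly one call per item:

```javascript
// Count how many invocations the loop pattern issues for a given input.
// The invoke stand-in only increments a counter; no network calls are made.
function countInvocations(input) {
  let calls = 0;
  const invoke = (payload) => { calls += 1; }; // mock of myFunction.invoke
  for (let i = 0; i < input.length; i++) {
    invoke({ data: input[i] });
  }
  return calls;
}

console.log(countInvocations(new Array(10).fill(0))); // → 10 (one call per item)
```

Running it with arrays of length 10, 100, and 1000 reproduces the table below: the call count always equals the input size.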
| Input Size (n) | Approx. Function Invocations |
|---|---|
| 10 | 10 function calls |
| 100 | 100 function calls |
| 1000 | 1000 function calls |
Pattern observation: The number of calls grows linearly with input size.
Time Complexity: O(n)
This means the total time to complete all function calls grows in direct proportion to the number of inputs.
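One way to see what O(n) means here is a simple cost model: a fixed one-time deployment cost plus a per-invocation cost. The numbers below (`deployMs`, `perCallMs`) are illustrative assumptions, not measured GCP latencies:

```javascript
// Hedged sketch of the O(n) time model: total time = fixed deploy cost
// + n * per-call cost. Constants are made up for illustration.
function estimatedTotalMs(n, deployMs = 2000, perCallMs = 50) {
  return deployMs + n * perCallMs;
}

console.log(estimatedTotalMs(10));  // → 2500
console.log(estimatedTotalMs(100)); // → 7000
```

As n grows, the fixed deployment cost becomes negligible and the n-proportional term dominates, which is exactly the O(n) behavior described above.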
[X] Wrong: "Calling the function once can handle all inputs instantly, so the time stays the same no matter how many inputs there are."
[OK] Correct: Each input requires its own function call, so total time grows with the number of inputs rather than staying fixed.
Understanding how serverless function calls scale helps you explain real cloud workloads clearly and confidently.
"What if the function could process multiple inputs in one call? How would the time complexity change?"
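To explore that closing question: if a single call could process a batch of b items, the number of invocations would drop to ceil(n / b). The helper below is hypothetical (no such batching API exists in the snippet above); it only models the call count:

```javascript
// If each invocation processes batchSize items, the number of calls is
// ceil(n / batchSize) rather than n.
function batchedInvocations(n, batchSize) {
  return Math.ceil(n / batchSize);
}

console.log(batchedInvocations(1000, 100)); // → 10 calls instead of 1000
```

With a fixed batch size b, the call count becomes O(n / b), which is still linear in n but with a smaller constant; each call now does O(b) work, so the total processing work remains O(n). Only if one call could take all n inputs would the call count become O(1), and even then the work inside that call would still scale with n.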