User-defined functions in SQL - Time & Space Complexity
When we use user-defined functions in SQL, it is important to know how their execution time changes as the data grows: how does the total running time scale with the size of the input table?
Analyze the time complexity of the following SQL user-defined function and its usage.
CREATE FUNCTION GetDiscount(price DECIMAL) RETURNS DECIMAL LANGUAGE plpgsql AS $$
BEGIN
    IF price > 100 THEN
        RETURN price * 0.1;
    ELSE
        RETURN price * 0.05;
    END IF;
END;
$$;
SELECT product_id, GetDiscount(price) FROM products;
This function calculates a discount based on price and is called for each product in the products table.
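To reason about the cost, it helps to see the query as a loop. The following is a minimal Python sketch (not the actual PostgreSQL execution path) that models `GetDiscount` and the per-row invocation performed by the `SELECT`; the `products` data is made up for illustration:

```python
from decimal import Decimal

def get_discount(price: Decimal) -> Decimal:
    """Python model of the GetDiscount UDF: one comparison and
    one multiplication, i.e. a constant amount of work per call."""
    if price > Decimal("100"):
        return price * Decimal("0.1")   # 10% discount above 100
    return price * Decimal("0.05")      # 5% discount otherwise

# The SELECT applies the function once per row, like a simple loop:
products = [(1, Decimal("150")), (2, Decimal("50")), (3, Decimal("100"))]
discounts = [(pid, get_discount(price)) for pid, price in products]
```

Each call does the same fixed work regardless of the table size; what grows is the number of calls.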
Identify the operations that repeat: there are no loops or recursion inside the function itself; the repetition comes from the query invoking the function once per row.
- Primary operation: The function GetDiscount is called once for each row in the products table.
- How many times: The number of times equals the number of rows in products, say n.
As the number of products grows, the total work grows proportionally because the function runs once per product.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 function calls |
| 100 | 100 function calls |
| 1000 | 1000 function calls |
Pattern observation: The total execution time grows linearly with the number of rows.
Time Complexity: O(n)
This means the total time grows directly in proportion to the number of rows processed.
Space Complexity: O(1) per call. The function keeps no per-row state; it uses a constant amount of memory regardless of how many rows the table contains.
[X] Wrong: "The function runs just once, so its time does not depend on the number of rows."
[OK] Correct: The function is called once per row, so the total time grows in proportion to the number of rows.
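The call counts in the table above can be reproduced with a small simulation. This is a sketch, assuming a hypothetical `run_query` helper that stands in for `SELECT GetDiscount(price) FROM products` over n rows:

```python
from decimal import Decimal

calls = 0  # counts how many times the modeled UDF is invoked

def get_discount(price: Decimal) -> Decimal:
    """Model of the UDF; increments the call counter on every invocation."""
    global calls
    calls += 1
    return price * (Decimal("0.1") if price > 100 else Decimal("0.05"))

def run_query(n: int) -> int:
    """Simulate the per-row SELECT over n rows and return the call count."""
    global calls
    calls = 0
    rows = [Decimal(i % 200) for i in range(n)]  # made-up prices
    _ = [get_discount(p) for p in rows]
    return calls

# One call per row, matching the table: 10 -> 10, 100 -> 100, 1000 -> 1000.
counts = {n: run_query(n) for n in (10, 100, 1000)}
```

The count scales exactly with n, which is the operational meaning of O(n) here.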
Understanding how user-defined functions affect query time helps you write efficient database code and explain performance clearly.
"What if the function called another function inside it that also runs in constant time? How would the overall time complexity change?"