Trigger execution order in PostgreSQL - Time & Space Complexity
When multiple triggers fire on the same table event, the order in which they execute can affect performance. PostgreSQL fires triggers of the same kind on the same event in alphabetical order by trigger name.
We want to understand how the number of triggers impacts execution time.
Let's analyze the time complexity of this trigger setup.
```sql
CREATE TABLE orders (id SERIAL PRIMARY KEY, amount INT);

CREATE FUNCTION trg_before_insert() RETURNS trigger AS $$
BEGIN
    -- some logic
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER before_insert_1 BEFORE INSERT ON orders
FOR EACH ROW EXECUTE FUNCTION trg_before_insert();

CREATE TRIGGER before_insert_2 BEFORE INSERT ON orders
FOR EACH ROW EXECUTE FUNCTION trg_before_insert();

-- Assume multiple triggers like this are defined
```
This code creates multiple BEFORE INSERT triggers on the same table, each running some logic for every new row.
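To observe the firing order directly, here is a minimal sketch (assuming PostgreSQL 11+ for the EXECUTE FUNCTION syntax; trg_log_name and the trigger names a_log/b_log are hypothetical): attach the same logging function under two trigger names and watch the NOTICE output when you insert a row.

```sql
-- Hypothetical logging function: reports which trigger is firing.
CREATE FUNCTION trg_log_name() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'firing trigger: %', TG_NAME;  -- TG_NAME holds the trigger's own name
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER a_log BEFORE INSERT ON orders
FOR EACH ROW EXECUTE FUNCTION trg_log_name();

CREATE TRIGGER b_log BEFORE INSERT ON orders
FOR EACH ROW EXECUTE FUNCTION trg_log_name();

-- INSERT INTO orders (amount) VALUES (42);
-- NOTICE:  firing trigger: a_log
-- NOTICE:  firing trigger: b_log    (alphabetical order by trigger name)
```

Renaming a trigger is therefore the standard way to change its position in the firing sequence.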
Identify the operations that repeat. Here there are no explicit loops or array traversals; the repeated unit of work is the per-row trigger invocation.
- Primary operation: each trigger function runs once per inserted row.
- How many times: number of triggers (t) × number of inserted rows (n).
As you add more triggers or insert more rows, the total work grows.
| Input (n rows, t triggers) | Approx. Trigger Executions |
|---|---|
| 10 rows, 2 triggers | 20 |
| 100 rows, 2 triggers | 200 |
| 1000 rows, 2 triggers | 2000 |
Pattern observation: Total executions grow proportionally with rows and triggers.
Time Complexity: O(t × n)
The time grows linearly with both the number of triggers (t) and the number of rows inserted (n): doubling either one doubles the total trigger work.
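You can see this cost in practice, as a sketch against a live server with the two BEFORE INSERT triggers above defined: EXPLAIN ANALYZE on an INSERT reports one "Trigger <name>: time=... calls=..." line per trigger, and the calls count equals the number of inserted rows.

```sql
-- Insert 1000 rows in one statement and measure trigger overhead.
-- The plan output ends with per-trigger lines such as:
--   Trigger before_insert_1: time=... calls=1000
--   Trigger before_insert_2: time=... calls=1000
EXPLAIN ANALYZE
INSERT INTO orders (amount)
SELECT g FROM generate_series(1, 1000) AS g;
```

This is a quick way to confirm which triggers dominate insert time before deciding what to optimize.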
[X] Wrong: "Adding more triggers won't affect performance much because they run quickly."
[OK] Correct: Each trigger runs for every row, so more triggers multiply the work and slow down inserts noticeably.
Understanding how trigger execution scales helps you design efficient database logic and avoid slowdowns in real projects.
What if triggers were defined as FOR EACH STATEMENT instead of FOR EACH ROW? How would the time complexity change?
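As a sketch of one way to reason about this question (the function and trigger names here are hypothetical): a FOR EACH STATEMENT trigger fires once per INSERT statement regardless of how many rows that statement inserts, so the trigger-dispatch cost drops from O(t × n) to O(t) per statement.

```sql
-- Statement-level variant: fires once per INSERT statement, not per row.
CREATE FUNCTION trg_after_stmt() RETURNS trigger AS $$
BEGIN
    -- runs once, even for a multi-row INSERT
    RETURN NULL;  -- the return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER after_insert_stmt AFTER INSERT ON orders
FOR EACH STATEMENT EXECUTE FUNCTION trg_after_stmt();

-- A single 1000-row INSERT now invokes each statement-level trigger once:
-- t invocations total, independent of n.
```

Note the caveat: if the statement-level trigger itself does per-row work, for example scanning a transition table of the inserted rows, the row count n re-enters the cost.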