Trigger for audit logging in PostgreSQL - Time & Space Complexity
We want to understand how the time to log changes grows as more data is changed in a table.
Specifically, how does using a trigger for audit logging affect performance as input size grows?
Analyze the time complexity of the following PostgreSQL trigger function and trigger.
```sql
CREATE OR REPLACE FUNCTION audit_log() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_table(table_name, operation, changed_data, changed_at)
    VALUES (
        TG_TABLE_NAME,
        TG_OP,
        -- OLD is NULL for INSERT and NEW is NULL for DELETE, so pick
        -- the row image that actually exists for this operation.
        CASE WHEN TG_OP = 'DELETE' THEN row_to_json(OLD)
             ELSE row_to_json(NEW)
        END,
        now()
    );
    RETURN NULL;  -- the return value of an AFTER ROW trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_trigger
AFTER INSERT OR UPDATE OR DELETE ON main_table
FOR EACH ROW EXECUTE FUNCTION audit_log();
```
This code logs every change made to main_table into audit_table using a trigger.
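The trigger function assumes that `audit_table` already exists with columns matching the `INSERT`. A minimal sketch of a compatible schema (the exact column types are assumptions, not part of the original example) might look like:

```sql
-- One row per audited change; written by the audit_log() trigger function.
CREATE TABLE audit_table (
    table_name   text        NOT NULL,           -- source table (TG_TABLE_NAME)
    operation    text        NOT NULL,           -- 'INSERT', 'UPDATE', or 'DELETE' (TG_OP)
    changed_data json,                           -- row image as JSON
    changed_at   timestamptz NOT NULL DEFAULT now()
);
```

Because every modified row in `main_table` produces exactly one row here, the size of `audit_table` itself also grows linearly with the total number of changes.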
Identify the repeated work: any loops, recursion, or operations that run once per row.
- Primary operation: The trigger fires once for each row changed, inserting a log record.
- How many times: Once per row affected by the INSERT, UPDATE, or DELETE operation.
Each row change causes one insert into the audit table, so the work grows directly with the number of rows changed.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 audit inserts |
| 100 | 100 audit inserts |
| 1000 | 1000 audit inserts |
Pattern observation: The number of audit inserts grows linearly with the number of rows changed.
Time Complexity: O(n)
This means the time to log changes grows directly in proportion to the number of rows changed.
[X] Wrong: "The trigger runs once per statement, so audit logging time is constant regardless of rows changed."
[OK] Correct: The trigger is defined FOR EACH ROW, so it runs once for every row changed, making the time grow with the number of rows.
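You can observe this per-row cost directly, because `EXPLAIN ANALYZE` reports time spent in triggers separately from the main query plan. A hedged sketch (the `id` and `data` columns of `main_table` are hypothetical, chosen only to vary the number of affected rows):

```sql
-- Update 10, then 100, then 1000 rows and compare the reported
-- "Trigger audit_trigger" time: it should grow roughly linearly.
EXPLAIN ANALYZE UPDATE main_table SET data = data WHERE id <= 10;
EXPLAIN ANALYZE UPDATE main_table SET data = data WHERE id <= 100;
EXPLAIN ANALYZE UPDATE main_table SET data = data WHERE id <= 1000;
```

Measuring like this is the practical counterpart of the operations table above: the trigger time scales with the row count, confirming O(n).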
Understanding how triggers affect performance helps you design efficient database auditing and maintain smooth application behavior.
What if we changed the trigger to FOR EACH STATEMENT instead of FOR EACH ROW? How would the time complexity change?
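As a sketch of that variant, assuming PostgreSQL 10+ transition tables (the function name `audit_log_stmt` and the UPDATE-only scope are illustrative, not from the original example):

```sql
CREATE OR REPLACE FUNCTION audit_log_stmt() RETURNS trigger AS $$
BEGIN
    -- The trigger now fires once per statement, but this INSERT ... SELECT
    -- still writes one audit row per changed row: total work remains O(n),
    -- with only O(1) trigger-invocation overhead per statement.
    INSERT INTO audit_table(table_name, operation, changed_data, changed_at)
    SELECT TG_TABLE_NAME, TG_OP, row_to_json(n), now()
    FROM new_rows n;
    RETURN NULL;  -- statement-level trigger return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_stmt_trigger
AFTER UPDATE ON main_table
REFERENCING NEW TABLE AS new_rows
FOR EACH STATEMENT EXECUTE FUNCTION audit_log_stmt();
```

Only if the statement-level trigger logged a single summary row (discarding per-row data) would the logging cost drop to O(1) per statement; as long as every changed row is recorded, the asymptotic cost stays O(n), though the constant factor is usually lower than with a row-level trigger.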