Cursor declaration and usage in PostgreSQL - Time & Space Complexity
When using cursors in PostgreSQL, it's important to understand how processing time grows as the amount of data increases.
Specifically: how does the number of steps change when we fetch rows one by one through a cursor?
Analyze the time complexity of the following cursor usage in PostgreSQL.
```sql
DO $$
DECLARE
    emp_id    employees.id%TYPE;
    emp_name  employees.name%TYPE;
    my_cursor CURSOR FOR
        SELECT id, name FROM employees;
BEGIN
    OPEN my_cursor;
    LOOP
        FETCH my_cursor INTO emp_id, emp_name;
        EXIT WHEN NOT FOUND;
        -- process each employee row here
    END LOOP;
    CLOSE my_cursor;
END $$;
```
This code declares a cursor to select all employees, then fetches and processes each row one at a time.
Look for repeated actions in the code.
- Primary operation: Fetching one row from the cursor inside the loop.
- How many times: Once for each row in the result set.
As the number of rows grows, the number of fetch operations grows at the same rate.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 fetches |
| 100 | 100 fetches |
| 1000 | 1000 fetches |
Pattern observation: The work grows directly with the number of rows; doubling rows doubles the fetches.
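You can verify this pattern empirically by counting the fetches. A minimal sketch, assuming the same `employees` table as above (the `fetches` counter is added here for illustration):

```sql
DO $$
DECLARE
    emp_id    employees.id%TYPE;
    emp_name  employees.name%TYPE;
    fetches   bigint := 0;
    my_cursor CURSOR FOR SELECT id, name FROM employees;
BEGIN
    OPEN my_cursor;
    LOOP
        FETCH my_cursor INTO emp_id, emp_name;
        EXIT WHEN NOT FOUND;
        fetches := fetches + 1;  -- one fetch per row
    END LOOP;
    CLOSE my_cursor;
    -- The reported count should match SELECT count(*) FROM employees.
    RAISE NOTICE 'fetched % rows', fetches;
END $$;
```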
Time Complexity: O(n)
This means the time to process grows linearly with the number of rows fetched by the cursor. On the space side, the client-side cost stays roughly constant: only one row is held in variables at a time, which is precisely why cursors are useful for large result sets.
[X] Wrong: "Using a cursor makes the query run faster because it processes rows one by one."
[OK] Correct: The cursor still processes every row; it just controls how rows are fetched. The total work depends on the number of rows, not on using a cursor.
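To see this distinction, a cursor can also be used at the SQL level (outside PL/pgSQL), where the `FETCH` count controls how many rows come back per command; the total rows retrieved is still bounded by the result set. A minimal sketch, again assuming the `employees` table (a SQL-level cursor must live inside a transaction unless declared `WITH HOLD`):

```sql
BEGIN;
DECLARE my_cursor CURSOR FOR SELECT id, name FROM employees;
FETCH 10 FROM my_cursor;   -- returns only the next 10 rows
FETCH 10 FROM my_cursor;   -- returns the following 10
CLOSE my_cursor;
COMMIT;
```

Fetching fewer rows per command changes *when* the work happens, not *how much* work there is in total.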
Understanding how cursors work and their time cost helps you explain data processing choices clearly and confidently in real projects.
"What if we fetched multiple rows at once instead of one by one? How would the time complexity change?"
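One way to reason about this, sketched with a SQL-level cursor and an assumed batch size of 100: fetching k rows per `FETCH` still touches all n rows, so the time complexity remains O(n), but the number of `FETCH` commands (and client-server round trips) drops to roughly n/k.

```sql
BEGIN;
DECLARE my_cursor CURSOR FOR SELECT id, name FROM employees;
-- Repeat until FETCH returns no rows: each command does k = 100 rows
-- of work, so ~n/100 commands cover all n rows. Total work: still O(n).
FETCH 100 FROM my_cursor;
CLOSE my_cursor;
COMMIT;
```

Batching mainly reduces per-fetch overhead; it does not change the asymptotic complexity.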