Performing operations on cursors in PostgreSQL - Time & Space Complexity
When working with cursors in PostgreSQL, it's important to understand how the time to process data grows as the amount of data increases. Specifically, we want to know how the number of operations scales when we fetch rows through a cursor.
Analyze the time complexity of the following cursor operations.
```sql
BEGIN;  -- SQL-level cursors must be declared inside a transaction

DECLARE my_cursor CURSOR FOR
    SELECT * FROM large_table;  -- DECLARE also opens the cursor

FETCH NEXT FROM my_cursor;
-- repeat FETCH until it returns no more rows

CLOSE my_cursor;

COMMIT;
```
This code declares a cursor over all rows of a large table, fetches them one at a time, and then closes the cursor. Note that at the SQL level, `DECLARE` both creates and opens the cursor (the separate `OPEN` statement exists only inside PL/pgSQL functions), and the cursor must live inside a transaction.
- Primary operation: Fetching each row from the cursor one at a time.
- How many times: Once for every row in the result set (n times, where n is the number of rows).
Each fetch operation processes one row, so the total work grows directly with the number of rows.
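To make the counting concrete, here is a small Python sketch that mimics the cursor pattern above. The `ToyCursor` and `drain` names are illustrative inventions for this example, not a real database driver API. Note that draining `n` rows actually takes `n + 1` fetch calls, because one final fetch is needed to discover that no rows remain; the extra constant does not change the linear growth.

```python
# A toy simulation of one-row-at-a-time cursor fetching.
# ToyCursor / drain are illustrative names, not a real driver API.

class ToyCursor:
    def __init__(self, rows):
        self._rows = iter(rows)   # cursor is "opened" over the result set
        self.fetch_count = 0      # how many FETCH operations we performed

    def fetch_next(self):
        """Return the next row, or None once the result set is exhausted."""
        self.fetch_count += 1
        return next(self._rows, None)

def drain(cursor):
    """Fetch rows one at a time until no more rows, like the SQL loop."""
    rows = []
    while (row := cursor.fetch_next()) is not None:
        rows.append(row)
    return rows

for n in (10, 100, 1000):
    cur = ToyCursor(range(n))
    drain(cur)
    # n fetches return a row, plus one final fetch that signals "no more rows"
    print(n, cur.fetch_count)
```

Running this shows the fetch count tracking the row count one-for-one (plus the single terminating fetch), which is exactly the linear pattern in the table below.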
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 fetches |
| 100 | 100 fetches |
| 1000 | 1000 fetches |
Pattern observation: The number of operations increases in a straight line as the number of rows increases.
Time Complexity: O(n)
This means the time to fetch all rows grows in direct proportion to the number of rows you have.
Space Complexity: O(1) on the client side, because a cursor holds only the current row (or batch) in memory at a time rather than the whole result set. This is the main reason to use a cursor for very large tables.
[X] Wrong: "Fetching rows with a cursor is always fast and constant time regardless of data size."
[OK] Correct: Each fetch reads one row, so more rows mean more fetches and more time. The time grows with the number of rows.
Understanding how cursor operations scale helps you explain how database queries behave with large data, a useful skill in real projects and interviews.
"What if we fetched multiple rows at once instead of one by one? How would the time complexity change?"
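As a sketch of the answer: PostgreSQL lets you fetch several rows per command (for example `FETCH 100 FROM my_cursor`). Batching divides the number of FETCH commands by the batch size, but every row still has to be read once, so the total work remains O(n). The constant-factor savings in round trips can still matter a lot in practice. The helper below is an illustrative calculation, not driver code.

```python
# Sketch: batched fetching (like "FETCH 100 FROM my_cursor") changes the
# number of FETCH commands issued, not the total number of rows processed.

def batched_fetches(n_rows, batch_size):
    """FETCH commands needed to drain n_rows, batch_size rows at a time."""
    return -(-n_rows // batch_size)  # ceiling division

n = 1000
print(batched_fetches(n, 1))    # 1000 FETCH commands, one row each
print(batched_fetches(n, 100))  # 10 FETCH commands, 100 rows each
```

Either way, 1000 rows get read, so the time complexity is still O(n); only the per-command overhead shrinks.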