CURSOR concept and usage in SQL - Time & Space Complexity
A cursor in SQL processes rows one at a time. To understand its cost, we ask a simple question: how does the running time change as the number of rows grows?
Analyze the time complexity of this cursor usage example.
```sql
-- Declare variables to hold each fetched row
DECLARE @id INT, @name NVARCHAR(100);

DECLARE my_cursor CURSOR FOR
    SELECT id, name FROM employees;

OPEN my_cursor;
FETCH NEXT FROM my_cursor INTO @id, @name;   -- fetch the first row

WHILE @@FETCH_STATUS = 0                     -- 0 means the fetch succeeded
BEGIN
    -- Process each row here
    FETCH NEXT FROM my_cursor INTO @id, @name;
END

CLOSE my_cursor;
DEALLOCATE my_cursor;
```
This code fetches and processes each employee row one at a time using a cursor.
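The same fetch-one-row-at-a-time pattern can be sketched in runnable form with Python's built-in `sqlite3` module (the `employees` table and its sample rows are assumed here for illustration):

```python
import sqlite3

# Build a small in-memory employees table (assumed sample data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [(1, "Ada"), (2, "Grace"), (3, "Edsger")])

cur = conn.cursor()                          # roughly DECLARE + OPEN
cur.execute("SELECT id, name FROM employees ORDER BY id")

processed = []
row = cur.fetchone()                         # first FETCH NEXT
while row is not None:                       # loop while a fetch succeeds
    emp_id, name = row                       # "process" each row
    processed.append((emp_id, name))
    row = cur.fetchone()                     # next FETCH NEXT

cur.close()                                  # CLOSE + DEALLOCATE
conn.close()
```

Each `fetchone()` call corresponds to one `FETCH NEXT`, so the loop body runs exactly once per row.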
Look at what repeats as the cursor runs.
- Primary operation: Fetching and processing each row one by one inside the WHILE loop.
- How many times: Once for every row in the employees table.
As the number of rows grows, the cursor does more fetches and processing steps.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 fetch and process steps |
| 100 | About 100 fetch and process steps |
| 1000 | About 1000 fetch and process steps |
Pattern observation: The work grows directly with the number of rows; double the rows, double the work.
Time Complexity: O(n)
This means the time to run the cursor grows in a straight line with the number of rows processed.
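The table above can be checked empirically: a small sketch (again using `sqlite3` as a stand-in engine) counts how many fetches a cursor loop performs for different row counts:

```python
import sqlite3

def fetch_count(n):
    """Load n rows, then count the fetches a cursor loop performs."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO employees VALUES (?, ?)",
                     ((i, f"emp{i}") for i in range(n)))
    cur = conn.execute("SELECT id, name FROM employees")
    fetches = 0
    while cur.fetchone() is not None:   # one fetch per row
        fetches += 1
    conn.close()
    return fetches

for n in (10, 100, 1000):
    print(n, fetch_count(n))            # fetch count equals row count
```

The fetch count tracks the row count exactly: linear, O(n), growth.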
[X] Wrong: "Using a cursor is always fast because it handles one row at a time."
[OK] Correct: Processing rows one by one means the total time grows with the number of rows, which can be slow for large data.
Knowing how cursors work and their time cost helps you explain choices in data processing clearly and confidently.
"What if we replaced the cursor with a set-based operation? How would the time complexity change?"