Cursor-based pagination in REST APIs - Time & Space Complexity
When using cursor-based pagination in an API, it's important to understand how the cost of fetching a page changes as the dataset grows. Specifically: how does the number of operations scale when retrieving pages of data with a cursor?
Analyze the time complexity of the following code snippet.
```
GET /items?cursor=abc123&limit=10

// Server-side:
// 1. Find the position of the cursor in the dataset
// 2. Retrieve the next 10 items after the cursor
// 3. Return these items and a new cursor for the next page
```
This request fetches a page of items starting immediately after the given cursor, returning a fixed number of items each time.
Identify the loops, recursion, or array traversals that repeat.
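The server-side steps above can be sketched in a few lines. This is a minimal illustration, not a real framework API: the in-memory `ITEMS` list, the `POSITION` index, and the `paginate` function are all hypothetical names, and a production server would use a database index instead of a Python dict.

```python
# Hypothetical in-memory dataset; each item's id doubles as its cursor.
ITEMS = [{"id": f"item{i}", "value": i} for i in range(1000)]

# Index mapping an item's id (the cursor) to its position: O(1) lookup,
# standing in for a database index on the cursor column.
POSITION = {item["id"]: i for i, item in enumerate(ITEMS)}

def paginate(cursor=None, limit=10):
    """Return up to `limit` items after `cursor`, plus the next cursor."""
    start = POSITION[cursor] + 1 if cursor is not None else 0
    page = ITEMS[start:start + limit]          # fetch a fixed-size slice
    next_cursor = page[-1]["id"] if page else None
    return page, next_cursor
```

A client would call `paginate()` for the first page, then pass the returned cursor back in to get the next one, e.g. `paginate(cursor="item9")` for items 10 through 19.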
- Primary operation: retrieving a fixed number of items (`limit`) after the cursor position.
- How many times: once per page request; only `limit` items are fetched, regardless of how large the dataset is.
Since each request fetches a fixed number of items, the work done stays about the same no matter how large the dataset is.
| Dataset Size (n) | Approx. Operations |
|---|---|
| 10 | Fetch 10 items |
| 100 | Fetch 10 items |
| 1000 | Fetch 10 items |
Pattern observation: The number of operations stays constant because the page size is fixed.
Time Complexity: O(1) per page request
This means fetching each page takes about the same time regardless of how big the whole dataset is, assuming the server can locate the cursor's position efficiently (for example, via a database index) rather than scanning for it.
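The pattern in the table can be checked directly: the work per page is bounded by `limit`, not by the dataset size. The `fetch_page` helper below is illustrative, not part of any real API.

```python
def fetch_page(items, start, limit=10):
    # The slice touches at most `limit` elements, no matter how
    # long `items` is: the per-page cost is independent of n.
    return items[start:start + limit]

# Mirrors the table: n = 10, 100, 1000 all yield a 10-item page.
for n in (10, 100, 1000):
    page = fetch_page(list(range(n)), start=0)
    assert len(page) == 10
```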
[X] Wrong: "Fetching later pages takes longer because the dataset is bigger."
[OK] Correct: Cursor-based pagination fetches a fixed number of items each time, so the time stays steady no matter the dataset size.
Understanding cursor-based pagination time complexity shows you can reason about efficient data fetching, a useful skill in many real-world API designs.
"What if the server had to scan from the start of the dataset to find the cursor each time? How would the time complexity change?"