# Resolver Organization in GraphQL: Time & Space Complexity
When working with GraphQL, resolvers fetch the data for each requested field. Understanding how data-fetching time grows as requests get larger helps us write faster APIs.
We want to know: how does the work done by the resolvers change as the number of requested items increases?
Analyze the time complexity of the following resolver setup.
```javascript
const resolvers = {
  Query: {
    users: () => fetchAllUsers(),
  },
  User: {
    posts: (user) => fetchPostsByUserId(user.id),
  },
};
```
This code fetches all users in one call, then fetches each user's posts separately: the classic N+1 query pattern.
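A minimal sketch of how this executes, using hypothetical in-memory stand-ins for `fetchAllUsers` and `fetchPostsByUserId` (the tables and counter below are illustrative, not part of any real data layer), instrumented to count how many fetches the resolvers trigger:

```javascript
// Hypothetical in-memory data, standing in for a real database.
let fetchCalls = 0;

const usersTable = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
  { id: 3, name: "Edsger" },
];
const postsTable = [
  { userId: 1, title: "On engines" },
  { userId: 3, title: "On structure" },
];

function fetchAllUsers() {
  fetchCalls += 1; // one call for the whole user list
  return usersTable;
}

function fetchPostsByUserId(userId) {
  fetchCalls += 1; // one call per user
  return postsTable.filter((p) => p.userId === userId);
}

// Simulate how GraphQL executes { users { posts } }:
// the users resolver runs once, then the posts resolver runs once per user.
const users = fetchAllUsers();
const result = users.map((user) => ({
  ...user,
  posts: fetchPostsByUserId(user.id),
}));

console.log(fetchCalls); // 1 + n = 4 calls for n = 3 users
```

Running this with 3 users produces 4 fetches; with 1000 users it would produce 1001, which is the growth pattern analyzed below.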
Look for repeated work in the resolvers.
- Primary operation: Fetching posts for each user.
- How many times: once per user; with n users, it runs n times.
As the number of users grows, the number of post-fetching calls grows too.
| Input Size (n users) | Approx. Operations (post fetches) |
|---|---|
| 10 | 10 calls to fetch posts |
| 100 | 100 calls to fetch posts |
| 1000 | 1000 calls to fetch posts |
Pattern observation: The number of post fetches grows directly with the number of users.
Time Complexity: O(n)
This means the time to fetch all posts grows linearly with the number of users. Counting the initial users fetch, the total is n + 1 calls, which is still O(n).
[X] Wrong: "Fetching posts inside each user resolver is just one operation regardless of user count."
[OK] Correct: Actually, the posts fetch runs once for every user, so the total work grows with the number of users.
Understanding how resolver calls multiply with data size helps you design efficient APIs and shows you can think about performance clearly.
"What if we batch fetch posts for all users in one call instead of one call per user? How would the time complexity change?"
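One hedged sketch of that batched approach. The `fetchPostsByUserIds` function below is hypothetical, standing in for a single `WHERE user_id IN (...)` query or a DataLoader batch; the in-memory tables are illustrative only:

```javascript
// Hypothetical in-memory data and a fetch counter, standing in for a database.
let fetchCalls = 0;
const usersTable = [{ id: 1 }, { id: 2 }, { id: 3 }];
const postsTable = [
  { userId: 1, title: "First" },
  { userId: 1, title: "Second" },
  { userId: 3, title: "Third" },
];

function fetchAllUsers() {
  fetchCalls += 1;
  return usersTable;
}

// One batched call covers every user at once,
// e.g. SELECT * FROM posts WHERE user_id IN (...) in a real data layer.
function fetchPostsByUserIds(userIds) {
  fetchCalls += 1;
  const wanted = new Set(userIds);
  return postsTable.filter((p) => wanted.has(p.userId));
}

// Resolve the same query with one users fetch plus one batched posts fetch.
const users = fetchAllUsers();
const postsByUser = new Map();
for (const post of fetchPostsByUserIds(users.map((u) => u.id))) {
  if (!postsByUser.has(post.userId)) postsByUser.set(post.userId, []);
  postsByUser.get(post.userId).push(post);
}
const result = users.map((user) => ({
  ...user,
  posts: postsByUser.get(user.id) ?? [],
}));

console.log(fetchCalls); // 2 fetches total, no matter how many users
```

The number of fetch calls drops from n + 1 to a constant 2, i.e. O(1) round trips, though grouping the returned rows in memory is still O(n) work. This is the idea behind batching utilities such as DataLoader.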