Caching strategies in Supabase - Time & Space Complexity
When using caching with Supabase, we want to know how the number of data fetches changes as requests grow.
We ask: How does caching affect the number of times we call the database?
Analyze the time complexity of fetching user profiles with caching.
```javascript
// Fetch a user profile, checking the cache before calling Supabase.
async function getProfile(userId) {
  // Check if the user profile is already in the cache
  const cachedProfile = cache.get(userId);
  if (cachedProfile) {
    return cachedProfile; // Cache hit: no database call
  }

  // Cache miss: fetch from Supabase
  const { data, error } = await supabase
    .from('profiles')
    .select('*')
    .eq('id', userId)
    .single();

  if (error) throw error; // Don't cache failed lookups

  cache.set(userId, data); // Store in cache for next time
  return data;
}
```
This code tries to get a user profile from cache first, then fetches from Supabase if missing, and stores it in cache.
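The `cache` object itself isn't defined in the snippet; a minimal sketch, assuming a plain in-memory `Map` is acceptable (production code would add eviction and expiry):

```javascript
// Minimal in-memory cache sketch (assumption: a plain Map suffices).
const cache = {
  store: new Map(),
  get(key) {
    return this.store.get(key); // undefined on a miss
  },
  set(key, value) {
    this.store.set(key, value);
  },
};
```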
Look at what happens when many user profiles are requested.
- Primary operation: Database fetch calls to Supabase when cache misses occur.
- How many times: Once per unique user profile not found in cache.
As more unique user profiles are requested, the number of database calls grows with the number of misses.
| Input Size (n) | Approx. Database Calls |
|---|---|
| 10 | Up to 10 database calls if none cached |
| 100 | Up to 100 database calls if none cached |
| 1000 | Up to 1000 database calls if none cached |
Pattern observation: Without cache hits, calls grow linearly with the number of unique requests; with cache hits, repeated requests skip the database entirely.
Time Complexity: O(n)
This means the number of database calls grows directly with the number of unique user profiles requested when cache misses happen.
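You can see the linear growth with a small simulation. Here `fakeFetch` is a hypothetical stand-in for the Supabase call, used only to count how often the database would be hit:

```javascript
// Simulate profile requests against an empty cache, counting how many
// times the (hypothetical) database fetch runs.
function simulate(userIds) {
  const cache = new Map();
  let dbCalls = 0;
  const fakeFetch = (id) => {
    dbCalls++; // each miss costs one database call
    return { id, name: `user-${id}` };
  };
  for (const id of userIds) {
    if (!cache.has(id)) {
      cache.set(id, fakeFetch(id)); // miss: fetch and store
    } // hit: no database call
  }
  return dbCalls;
}
```

With n unique IDs and a cold cache, `dbCalls` equals n; repeating the same IDs adds nothing.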
[X] Wrong: "Caching makes the number of database calls constant no matter how many users we request."
[OK] Correct: Cache only helps if data is already stored; new unique requests still cause database calls.
Understanding how caching affects database calls shows you can think about efficiency and user experience in real apps.
"What if we changed the cache to expire every minute? How would the time complexity change when many requests come in quickly?"