Seed data management in Supabase - Time & Space Complexity
When seeding a database with initial data, it is important to know how the time to complete the task grows as the amount of data grows. Specifically, we want to understand how the number of operations changes as we increase the seed data size.
Analyze the time complexity of the following operation sequence.
```javascript
// Insert multiple seed records into a table in one bulk call
import { createClient } from '@supabase/supabase-js';

// Replace the placeholders with your project's URL and anon key
const supabase = createClient('https://your-project.supabase.co', 'your-anon-key');

const seedData = [
  { name: 'Alice', age: 30 },
  { name: 'Bob', age: 25 },
  // ... more records
];

const { data, error } = await supabase
  .from('users')
  .insert(seedData);
```
This code inserts a list of user records into the database all at once.
Identify the operations that repeat: API calls, resource provisioning, and data transfers.
- Primary operation: One API call to insert all seed records in bulk.
- How many times: Exactly one call regardless of the number of records.
As the number of seed records grows, the single insert call handles more data, but the number of calls stays the same.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 1 call with 10 records |
| 100 | 1 call with 100 records |
| 1000 | 1 call with 1000 records |
Pattern observation: The number of API calls does not increase with more data; it stays constant.
Time Complexity: O(1)
This means the number of client API calls stays constant no matter how many records you insert at once. Note that the database server still does work proportional to the number of rows written; the O(1) here refers to client round-trips, which usually dominate latency for seed scripts.
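To make the pattern concrete, here is a minimal sketch using a fake client (not the real Supabase client) that counts API calls. `makeFakeClient` and `bulkSeed` are illustrative names, not part of any library; the point is that one bulk insert is one call regardless of record count.

```javascript
// A fake client that counts API calls, standing in for the real network layer.
function makeFakeClient() {
  let apiCalls = 0;
  return {
    from() {
      return {
        insert(rows) {
          apiCalls += 1; // one round-trip, however many rows it carries
          return { data: rows, error: null };
        },
      };
    },
    get apiCalls() {
      return apiCalls;
    },
  };
}

// Bulk-insert n generated records and report how many API calls were made.
function bulkSeed(n) {
  const client = makeFakeClient();
  const seedData = Array.from({ length: n }, (_, i) => ({
    name: `user${i}`,
    age: 20 + (i % 50),
  }));
  client.from('users').insert(seedData);
  return client.apiCalls;
}

console.log(bulkSeed(10));   // 1
console.log(bulkSeed(1000)); // 1
```

Whether you seed 10 or 1000 records, the call counter ends at 1, matching the table above.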
[X] Wrong: "Inserting more seed records means more API calls and longer time linearly."
[OK] Correct: Because the insert is done in bulk with one call, the number of calls does not grow with data size.
Understanding how bulk operations affect time complexity helps you design efficient data loading processes and shows you can think about scaling in real projects.
"What if we inserted each seed record with a separate API call? How would the time complexity change?"