Backup and restore in GCP - Time & Space Complexity
When backing up and restoring data in the cloud, it's important to understand how the time required changes as the data size grows. Specifically, we want to know how the number of operations or API calls increases as we handle more data.
Analyze the time complexity of the following operation sequence.
```js
// Backup and restore sequence in GCP
const backup = await gcp.storage.createBackup(bucketName, backupName);
const files = await gcp.storage.listFiles(bucketName);
for (const file of files) {
  await gcp.storage.copyFile(file, backupName);
}
await gcp.storage.restoreBackup(backupName, bucketName);
```
This sequence creates a backup, copies each file to the backup, then restores the backup to the original bucket.
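To see where the cost comes from, we can run the same sequence against a minimal in-memory stand-in for the `gcp.storage` calls (the API above is illustrative pseudocode, not a real client library) and log every API call it makes:

```javascript
// Hypothetical in-memory mock of the gcp.storage API from the snippet above.
// Each method just records that it was called, so we can count operations.
const apiLog = [];

const gcp = {
  storage: {
    async createBackup(bucket, backup) { apiLog.push("createBackup"); },
    async listFiles(bucket) {
      apiLog.push("listFiles");
      return ["a.txt", "b.txt", "c.txt"]; // pretend the bucket holds 3 files
    },
    async copyFile(file, backup) { apiLog.push("copyFile"); },
    async restoreBackup(backup, bucket) { apiLog.push("restoreBackup"); },
  },
};

async function backupAndRestore(bucketName, backupName) {
  await gcp.storage.createBackup(bucketName, backupName);
  const files = await gcp.storage.listFiles(bucketName);
  for (const file of files) {
    await gcp.storage.copyFile(file, backupName); // repeats once per file
  }
  await gcp.storage.restoreBackup(backupName, bucketName);
}

backupAndRestore("my-bucket", "my-backup").then(() => {
  const copies = apiLog.filter((op) => op === "copyFile").length;
  console.log(`copyFile calls: ${copies}`); // one per file
});
```

Only `copyFile` scales with the bucket's contents; the other three calls happen exactly once regardless of how many files there are.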
Identify the API calls, resource provisioning, and data transfers that repeat.
- Primary operation: Copying each file to the backup storage.
- How many times: Once for every file in the bucket.
As the number of files grows, the number of copy operations grows at the same rate.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | About 10 copy operations |
| 100 | About 100 copy operations |
| 1000 | About 1000 copy operations |
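The table can be reproduced with a short sketch that counts one simulated copy operation per file for each input size:

```javascript
// Count simulated copy operations for a bucket of fileCount files.
// One copyFile call per file, so the count equals the input size.
function copyOperations(fileCount) {
  let ops = 0;
  for (let i = 0; i < fileCount; i++) {
    ops += 1; // one copy per file
  }
  return ops;
}

for (const n of [10, 100, 1000]) {
  console.log(`${n} files -> ${copyOperations(n)} copy operations`);
}
```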
Pattern observation: The operations increase directly with the number of files.
Time Complexity: O(n)
This means the time grows in direct proportion to the number of files being backed up or restored.
[X] Wrong: "Backing up many files takes the same time as backing up one file because it's just one backup operation."
[OK] Correct: Each file must be copied individually, so more files mean more copy operations and more time.
Understanding how backup and restore operations scale helps you design efficient cloud solutions and shows you can think about real-world system behavior.
"What if we changed the backup to copy files in parallel instead of one by one? How would the time complexity change?"
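One way to explore that question: launching all copies at once (for example with `Promise.all`) still performs O(n) total API calls, so the total work is unchanged, but the wall-clock time can shrink toward O(n/k) with k copies running concurrently. A hedged sketch, using a stand-in `copyFile` function rather than a real client:

```javascript
// Hypothetical parallel variant: start every copy at once and wait for all.
// Total work is still O(n) copy calls; only elapsed time changes.
async function backupParallel(files, backupName, copyFile) {
  await Promise.all(files.map((file) => copyFile(file, backupName)));
}

// Usage with a fake copyFile that just records which files were copied.
const calls = [];
const fakeCopy = async (file, backup) => { calls.push(file); };

backupParallel(["a.txt", "b.txt", "c.txt"], "backup-1", fakeCopy).then(() => {
  console.log(`launched ${calls.length} copies concurrently`);
});
```

In a real system, concurrency is capped by API rate limits and bandwidth, so the practical speedup is bounded even though the asymptotic operation count stays linear.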