Backup and recovery strategies in Raspberry Pi - Time & Space Complexity
When working with backup and recovery on a Raspberry Pi, it is important to understand how running time grows with the amount of data: how long will a backup or recovery take as the data size changes?
Analyze the time complexity of the following backup code snippet.
```python
files = get_all_files('/home/pi/data')
backup = []
for file in files:
    data = read_file(file)
    backup.append(data)
save_backup(backup)
```
This code collects all files from a folder, reads each file, and saves the backup data.
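The same loop can be sketched as a small self-contained program. This is an illustration, not the original helpers: `backup_directory` and the use of `pathlib` are assumptions standing in for `get_all_files`, `read_file`, and `save_backup`.

```python
import tempfile
from pathlib import Path

def backup_directory(src):
    """Read every file under src once and return the collected contents."""
    files = [p for p in Path(src).rglob('*') if p.is_file()]
    backup = []
    for file in files:                    # runs once per file: O(n)
        backup.append(file.read_bytes())  # one read per file
    return backup                         # caller saves the result in one step

# Demo with a temporary folder standing in for /home/pi/data
with tempfile.TemporaryDirectory() as d:
    for i in range(5):
        (Path(d) / f"file{i}.txt").write_text(f"data {i}")
    result = backup_directory(d)
    print(len(result))  # one entry per file
```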
Look for loops or repeated steps in the code.
- Primary operation: Reading each file one by one.
- How many times: Once for every file in the folder.
As the number of files grows, the time to read and backup grows too.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 files | 10 reads and 1 save |
| 100 files | 100 reads and 1 save |
| 1000 files | 1000 reads and 1 save |
Pattern observation: The time grows directly with the number of files; doubling files doubles the work.
Time Complexity: O(n)
This means the backup time grows linearly with the number of files to process.
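The linear growth can be written as a tiny cost model (a sketch; treating each read and the final save as one unit of work each is an assumption, not a measurement):

```python
def backup_operations(n):
    """Cost model for the snippet above: n file reads plus one save."""
    return n + 1

# Doubling the number of files roughly doubles the work:
print(backup_operations(10))    # 11 operations
print(backup_operations(1000))  # 1001 operations
```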
[X] Wrong: "Backing up a few extra files won't affect the time much."
[OK] Correct: Each file adds more work, so even a few extra files increase the total time noticeably.
Understanding how backup time grows helps you design better systems and explain your choices clearly in real projects.
"What if we only back up files that changed since the last backup? How would the time complexity change?"
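One possible answer, sketched with file modification times. The `incremental_backup` function and its mtime comparison are assumptions for illustration, not part of the original snippet: listing still visits all n files, but only the k changed files are read.

```python
import tempfile
import time
from pathlib import Path

def incremental_backup(src, last_backup_time):
    """Back up only files modified after the last backup.
    Scanning still touches all n entries, but only the k changed
    files are actually read, so the expensive work drops to O(k)."""
    changed = []
    for file in Path(src).rglob('*'):
        if file.is_file() and file.stat().st_mtime > last_backup_time:
            changed.append(file.read_bytes())
    return changed

# Demo: three files exist, but only one changes after the "last backup"
with tempfile.TemporaryDirectory() as d:
    for i in range(3):
        (Path(d) / f"f{i}.txt").write_text("old")
    last_backup = time.time()
    time.sleep(0.05)                 # ensure the next write gets a newer mtime
    (Path(d) / "f1.txt").write_text("new")
    changed = incremental_backup(d, last_backup)
    print(len(changed))  # only the one changed file is read
```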