Why Backup Strategy Prevents Data Loss in MySQL - Performance Analysis
We want to understand how the time it takes to back up data grows as the amount of data grows: how does the backup process scale as the database gets bigger?
Analyze the time complexity of this backup command.
```shell
# Back up the entire database to a file
mysqldump mydb > '/backup/mydb.bak'
```
This command copies all data from the database into a backup file.
To analyze its complexity, look at what repeats during the backup.
- Primary operation: Reading each row of every table in the database.
- How many times: Once for each row in the entire database.
As the number of rows grows, the backup takes longer because it reads more data.
| Input Size (rows) | Approx. Operations |
|---|---|
| 10,000 | 10,000 reads |
| 100,000 | 100,000 reads |
| 1,000,000 | 1,000,000 reads |
Pattern observation: The time grows directly with the number of rows; double the rows, double the work.
Time Complexity: O(n)
This means backup time grows linearly with the amount of data.
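The row-by-row pattern in the table above can be sketched as a toy model. This is not how mysqldump is implemented internally; `backup_rows` is an invented function that just counts one read per row to make the linear growth concrete:

```python
def backup_rows(rows):
    """Simulate a full backup: touch every row exactly once."""
    operations = 0
    for _ in rows:
        operations += 1  # one read per row
    return operations

small = range(10_000)
large = range(20_000)
print(backup_rows(small))  # 10000 reads
print(backup_rows(large))  # 20000 reads: double the rows, double the work
```

Whatever the input size, the operation count equals the row count, which is exactly what O(n) says.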
[X] Wrong: "Backing up only a few tables is always faster regardless of their size."
[OK] Correct: Even a few tables with many rows take time because the backup reads every row; size matters more than table count.
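The misconception can be shown with a small sketch. The table sizes here are made up for illustration; the point is that total work depends on the sum of rows, not on how many tables they are spread across:

```python
def backup_operations(tables):
    """Total work is the sum of row counts across all tables."""
    return sum(tables)

few_big_tables = [500_000, 500_000]    # 2 tables, 1,000,000 rows total
many_small_tables = [10_000] * 100     # 100 tables, 1,000,000 rows total

print(backup_operations(few_big_tables))    # 1000000
print(backup_operations(many_small_tables))  # 1000000
```

Both layouts require the same number of reads, so "only a few tables" is no guarantee of a fast backup.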
Understanding how backup time grows helps you explain system behavior clearly and shows you think about real-world data handling.
"What if we only back up changed data instead of the whole database? How would the time complexity change?"
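One way to reason about that question is a sketch of an incremental backup. The row structure and the `modified_at` field are invented for illustration; the idea is that copying work drops from O(n) total rows to O(k) changed rows (though finding the changed rows still takes O(n) without an index on the modification time):

```python
from datetime import datetime, timedelta

def incremental_backup(rows, last_backup_time):
    """Back up only rows modified since the last backup.

    Copy work is O(k), where k is the number of changed rows.
    """
    return [row for row in rows if row["modified_at"] > last_backup_time]

last_backup = datetime(2024, 1, 1)
rows = [
    {"id": 1, "modified_at": datetime(2024, 1, 2)},  # changed since last backup
    {"id": 2, "modified_at": datetime(2023, 12, 30)},  # unchanged
]

changed = incremental_backup(rows, last_backup)
print(len(changed))  # 1 row to copy instead of 2
```

If only a small fraction of rows change between backups, k is much smaller than n, which is why incremental strategies are popular for large databases.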