Git mental model (snapshots, not diffs) - Time & Space Complexity
We want to understand how Git handles data internally when you save changes: specifically, how the number of files in a project affects the work Git does when it stores a snapshot.
Analyze the time complexity of this Git command sequence.
```shell
git add .
git commit -m "Save snapshot"
```
`git add .` stages every file in the current directory (and its subdirectories), and `git commit` records the staged content as a snapshot.
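The snapshot view of this sequence can be modeled with a short sketch (a simplification: real Git maintains an index and packs objects, but the per-file shape of the work is the same; `commit_snapshot` is a hypothetical helper, not a Git API):

```python
import hashlib

def commit_snapshot(files):
    """Toy snapshot commit: read and hash every file's content.

    `files` maps path -> content. Real Git reads from disk and stores
    each blob in .git/objects, but the work is still one read per file.
    """
    snapshot = {}
    reads = 0
    for path, content in files.items():
        reads += 1  # one read (and hash) per file
        blob_id = hashlib.sha1(content.encode()).hexdigest()
        snapshot[path] = blob_id
    return snapshot, reads

files = {"a.txt": "hello", "b.txt": "world", "c.txt": "hello"}
snapshot, reads = commit_snapshot(files)
print(reads)  # 3 reads for 3 files: one per file
```

Note that `a.txt` and `c.txt` have identical content and therefore get the same blob id, which previews why snapshots are cheaper to store than they first appear.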
Look for repeated work Git does when creating a snapshot.
- Primary operation: Git reads each file's content and hashes it to store the snapshot.
- How many times: once per file at commit time (in the worst case, such as a first commit; even for unchanged files, Git must still examine each one).
As the number of files grows, Git reads more files to build the snapshot.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 files | Reads 10 files |
| 100 files | Reads 100 files |
| 1000 files | Reads 1000 files |
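The table's pattern can be confirmed with a tiny counter (a sketch; `reads_for` is a hypothetical helper modeling one read per file, not a Git command):

```python
def reads_for(n_files):
    """Count the per-file reads a snapshot commit performs in this model."""
    reads = 0
    for _ in range(n_files):
        reads += 1  # one read per file in the project
    return reads

for n in (10, 100, 1000):
    print(n, reads_for(n))  # operations grow in lockstep with n
```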
Pattern observation: The work grows directly with the number of files.
Time Complexity: O(n)
This means Git's work to save a snapshot grows linearly with the number of files.
[X] Wrong: "Git stores only the changes, so commit time is always the same no matter how many files."
[OK] Correct: Git records a snapshot of the whole project at every commit, so commit-time work grows with the number of files. (Unchanged content is deduplicated by hash, but Git must still examine each file.)
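One nuance behind the correct statement: a snapshot is not a wasteful full copy. Git addresses content by hash, so a file whose content has not changed maps to the same blob and is stored only once. A minimal sketch of such a content-addressed store (simplified; real Git also compresses and packs objects):

```python
import hashlib

class ObjectStore:
    """Toy content-addressed store: identical content is kept only once."""
    def __init__(self):
        self.blobs = {}

    def put(self, content):
        blob_id = hashlib.sha1(content.encode()).hexdigest()
        self.blobs[blob_id] = content  # same content -> same key -> one copy
        return blob_id

store = ObjectStore()
a = store.put("same bytes")  # stored in commit 1
b = store.put("same bytes")  # "stored" again in commit 2
print(a == b, len(store.blobs))  # True 1 -- two snapshots, one stored blob
```

So snapshots cost O(n) time to *create* (each file is examined), while repeated unchanged content costs little extra *space*.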
Understanding how Git handles snapshots helps you explain version control efficiency clearly and confidently.
"What if Git stored only diffs instead of snapshots? How would the time complexity change when committing many files?"