Why knowing how to undo matters in Git - Performance Analysis
Undoing changes is a routine part of working with Git. Understanding what an undo command actually does lets us ask a precise question: how does the effort to undo grow as the number of changes grows?
Here we analyze the time complexity of undoing changes with `git reset`.

```shell
# Undo last commit but keep changes staged
git reset --soft HEAD~1

# Undo last commit and unstage changes (a mixed reset, the default)
git reset HEAD~1

# Undo last commit and discard changes
git reset --hard HEAD~1
```
The snippet above shows three ways to undo the last commit; they differ in how much state Git must touch.
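To make the modes concrete, here is a minimal scratch-repo walk-through of a soft and a hard reset. The `/tmp/reset-demo` path, file names, and commit messages are arbitrary choices for illustration; it assumes `git` is on your `PATH`.

```shell
set -e
# Build a throwaway repo with two commits changing one file.
rm -rf /tmp/reset-demo && mkdir /tmp/reset-demo && cd /tmp/reset-demo
git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > file.txt && git add . && git commit -qm "first"
echo v2 > file.txt && git add . && git commit -qm "second"

# --soft: the commit is undone, but the change stays staged.
git reset --soft HEAD~1
git status --short          # shows "M  file.txt" (staged modification)

# --hard: index and working tree are forced back to HEAD.
git reset --hard -q HEAD
cat file.txt                # prints "v1"
```

Note that the soft reset only moved the branch pointer; the hard reset is what actually rewrote `file.txt` on disk.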
Look for operations that repeat or scale with input size.
- Primary operation: Git traverses commit history and updates file states.
- How many times: Depends on number of files and changes in the commit being undone.
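One way to see that count directly is to ask Git which files differ between the commit being undone and its parent; that list is exactly the set of files a reset must restore. A small sketch (the `/tmp/count-demo` path and file contents are arbitrary):

```shell
set -e
# Throwaway repo: one commit that changes two files.
rm -rf /tmp/count-demo && mkdir /tmp/count-demo && cd /tmp/count-demo
git init -q
git config user.email demo@example.com
git config user.name demo
echo a > a.txt && echo b > b.txt && git add . && git commit -qm base
echo a2 > a.txt && echo b2 > b.txt && git add . && git commit -qm edit

# n = number of files an undo of the last commit must touch
git diff --name-only HEAD~1 HEAD | wc -l   # counts 2 changed files here
```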
Undoing a commit means resetting files to a previous state: the more files changed, the more work Git does.
| Input Size (changed files) | Approx. Operations |
|---|---|
| 10 | About 10 file resets |
| 100 | About 100 file resets |
| 1000 | About 1000 file resets |
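The table above can be probed with a rough, illustrative benchmark: build repos where a commit touches 10, 100, and 1000 files, then time a hard reset of that commit. Timings depend heavily on disk, filesystem cache, and Git version, and the nanosecond `date +%s%N` format assumes GNU coreutils, so treat the numbers as a trend, not a measurement.

```shell
set -e
for N in 10 100 1000; do
  dir=/tmp/reset-scale-$N
  rm -rf "$dir" && mkdir "$dir" && cd "$dir"
  git init -q
  git config user.email demo@example.com
  git config user.name demo
  # Commit N files, then a second commit that edits all of them.
  for i in $(seq "$N"); do echo base > "f$i.txt"; done
  git add . && git commit -qm base
  for i in $(seq "$N"); do echo edit > "f$i.txt"; done
  git add . && git commit -qm edit
  # Time the undo: every one of the N files must be restored.
  start=$(date +%s%N)
  git reset -q --hard HEAD~1
  end=$(date +%s%N)
  echo "$N files: $(( (end - start) / 1000000 )) ms"
done
```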
Pattern observation: The work grows roughly in direct proportion to the number of changed files.
Time Complexity: O(n)
This means undoing a commit takes time proportional to how many files were changed in that commit.
[X] Wrong: "Undoing a commit is always instant, no matter how many files changed."
[OK] Correct: For a mixed or hard reset, Git must restore each changed file to its previous state, so more changed files mean more work and a longer undo. (A `--soft` reset only moves the branch pointer, which is why that mode in particular feels instant.)
Knowing how undo operations scale helps you explain Git behavior clearly and shows you understand the practical costs of your tools, a useful skill on real projects.
What if we changed `git reset` to `git revert`? How would the time complexity change?
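As a starting point for that question: `git revert` does not rewind the branch; it creates a new commit containing the inverse diff, so it must still touch each changed file (roughly linear again), but history grows instead of shrinking. A minimal sketch in a throwaway repo (the `/tmp/revert-demo` path is arbitrary):

```shell
set -e
rm -rf /tmp/revert-demo && mkdir /tmp/revert-demo && cd /tmp/revert-demo
git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > file.txt && git add . && git commit -qm "first"
echo v2 > file.txt && git add . && git commit -qm "second"

# revert: a third commit that re-applies the old content
git revert --no-edit HEAD
cat file.txt                  # prints "v1"
git rev-list --count HEAD     # prints 3: history kept, unlike reset
```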