# post-merge hook in Git - Time & Space Complexity
We want to understand how the time taken by a Git post-merge hook changes as the project grows.
Specifically, how does the hook's work increase when a merge touches more files?
Analyze the time complexity of this simple post-merge hook script.
```sh
#!/bin/sh
# post-merge hook example
# Read the list of changed files line by line so that file names
# containing spaces are handled correctly (a plain for loop over an
# unquoted variable would split such names apart).
git diff-tree -r --name-only --no-commit-id ORIG_HEAD HEAD |
while IFS= read -r file; do
    echo "Processing $file"
    # simulate some work per file
    ./process-file.sh "$file"
done
```
This script runs after a merge and processes each changed file one by one.
Identify the operations that repeat: loops, recursion, or traversals over collections.
- Primary operation: Loop over each changed file to run a processing command.
- How many times: Once for every changed file in the merge.
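The iteration count above can be sketched with a tiny simulation. The file names below are made up for illustration, and the loop body stands in for the `./process-file.sh` call:

```shell
# Simulate a merge that changed 3 files and count one unit of
# work per file. Each iteration represents one processing run.
changed_files="src/main.c src/util.c README.md"   # hypothetical names
count=0
for file in $changed_files; do
    count=$((count + 1))   # one "processing run" per changed file
done
echo "processed $count files"   # prints: processed 3 files
```

The total number of processing runs equals the number of changed files, which is exactly the quantity the analysis below calls n.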
The time grows with the number of changed files because the script processes each file separately.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 changed files | 10 processing runs |
| 100 changed files | 100 processing runs |
| 1000 changed files | 1000 processing runs |
Pattern observation: The work increases directly with the number of changed files.
Time Complexity: O(n), where n is the number of changed files.
This means the hook's running time grows linearly: doubling the number of changed files roughly doubles the time to finish.
[X] Wrong: "The hook runs in constant time no matter how many files changed."
[OK] Correct: The hook processes each changed file one by one, so more files mean more work and more time.
Understanding how hooks scale helps you write efficient scripts that keep your project fast and smooth as it grows.
"What if the hook processed files in parallel instead of one by one? How would the time complexity change?"
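One way to sketch an answer, assuming `xargs` with its widely supported `-P` flag for running jobs in parallel (file names below are hypothetical, and `echo` stands in for `./process-file.sh`): the total work is still O(n), but with p independent workers the wall-clock time drops toward O(n/p).

```shell
# Run up to 4 processing jobs at once instead of one by one.
printf '%s\n' src/a.c src/b.c docs/c.md tests/d.sh |
    xargs -P 4 -I {} echo "Processing {}"
```

Note that the output order may vary between runs, which is one practical cost of parallelism; and if one file takes far longer than the rest, that slowest job still bounds the wall-clock time.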