Creating custom hook scripts in Git - Performance & Efficiency
When we create custom hook scripts in Git, we want to know how their running time changes as the project grows.
We ask: How does a hook script scale as the repository size or the number of staged files increases?
Analyze the time complexity of this simple pre-commit hook script.
```sh
#!/bin/sh
# pre-commit hook
# Check all staged files for TODO comments.
# --diff-filter=ACM limits the list to added/copied/modified files,
# so grep never runs on a path that was staged for deletion.
files=$(git diff --cached --name-only --diff-filter=ACM)
for file in $files; do
    grep -q "TODO" "$file" && echo "TODO found in $file" && exit 1
done
exit 0
```
This script checks each staged file for the word "TODO" and stops the commit if any are found.
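To see the hook in action, it can be installed and exercised in a throwaway repository. The sketch below is self-contained (the temp directory, `config.sh` file name, and demo user identity are illustrative choices, not from the original):

```sh
# Self-contained demo: create a throwaway repo, install the hook, and
# confirm that a staged file containing TODO blocks the commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Install the pre-commit hook from the article.
cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
files=$(git diff --cached --name-only --diff-filter=ACM)
for file in $files; do
    grep -q "TODO" "$file" && echo "TODO found in $file" && exit 1
done
exit 0
HOOK
chmod +x .git/hooks/pre-commit

# Stage a file that contains a TODO and attempt a commit.
echo "# TODO: remove debug flag" > config.sh
git add config.sh
if git commit -q -m "test" 2>/dev/null; then
    echo "commit was allowed"
else
    echo "commit was blocked"   # the hook exits 1, so we land here
fi
```

Because the hook exits with a non-zero status, Git aborts the commit; removing the TODO (or the hook) lets the commit through.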
Identify the repeated work: loops, recursion, and array traversals.
- Primary operation: Looping over each staged file and running a search inside it.
- How many times: once per staged file, so the iteration count equals the number of staged files.
As the number of staged files grows, the script checks more files one by one.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 files | 10 file checks |
| 100 files | 100 file checks |
| 1000 files | 1000 file checks |
Pattern observation: The work grows directly with the number of files checked.
Time Complexity: O(n), where n is the number of staged files.
This means the time to run the hook grows linearly with the number of staged files, treating each per-file grep as one constant-time operation (i.e., assuming file sizes stay roughly constant).
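The linear pattern from the table can be observed directly by counting how many files the hook's loop would visit. This sketch stages batches of 10, 100, and 1000 files in a scratch repository (the `batch_*` directory names and demo identity are arbitrary choices for illustration):

```sh
# Count how many files the hook's loop visits as n grows.
# Each staged file triggers exactly one grep, so checks == n.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init   # give HEAD a commit to diff against

for n in 10 100 1000; do
    mkdir -p "batch_$n"
    i=0
    while [ "$i" -lt "$n" ]; do
        echo "clean file" > "batch_$n/f$i.txt"
        i=$((i + 1))
    done
    git add "batch_$n"
    # Count only this batch among the staged paths.
    checks=$(git diff --cached --name-only | grep -c "^batch_$n/")
    echo "n=$n -> $checks file checks"
done
```

Each batch reports exactly n file checks, matching the table above: double the staged files, double the work.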
[X] Wrong: "The hook runs instantly no matter how many files are staged."
[OK] Correct: Each file adds work because the script checks them one by one, so more files mean more time.
Understanding how hook scripts scale helps you write efficient checks that keep your workflow smooth as projects grow.
"What if the hook script also searched inside each file line by line? How would that affect the time complexity?"
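As a starting point for that question: grep already reads every line of each file, so making the inner scan explicit shows two nested loops, n files times m lines, for O(n × m) total work. A minimal sketch with the line scan written out in shell (the file names and contents are invented for the demo):

```sh
# Sketch of the "what if" version: scan each staged file line by line in
# the shell itself. Outer loop runs n times (one per staged file); inner
# loop runs m times (one per line), so total work is O(n * m).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init   # give HEAD a commit to diff against

printf 'line one\nTODO later\n' > a.txt
printf 'nothing here\n' > b.txt
git add a.txt b.txt

found=0
for file in $(git diff --cached --name-only --diff-filter=ACM); do
    while IFS= read -r line; do           # inner loop: one pass per line
        case $line in
            *TODO*) echo "TODO found in $file"; found=1; break ;;
        esac
    done < "$file"
done
echo "found=$found"
```

So if both the number of files and the size of each file grow with the project, the hook's cost grows with the total number of staged lines, not just the file count.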