Looping over files and directories in Bash Scripting - Time & Space Complexity
When we loop over files and directories in a script, we want to know how the running time grows as the number of files grows. In other words: how does the script's work increase when the directory contains more items?
Analyze the time complexity of the following code snippet.
```bash
for file in /path/to/directory/*; do
    if [ -f "$file" ]; then
        echo "Processing $file"
    fi
done
```
This script loops over each item in a directory and prints a message for each file found.
- Primary operation: Looping over each file and checking if it is a regular file.
- How many times: Once for every item in the directory.
As the number of files increases, the script does more checks and prints more messages.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 checks and prints |
| 100 | About 100 checks and prints |
| 1000 | About 1000 checks and prints |
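To see the linear pattern from the table concretely, here is a small self-contained sketch. The temporary directory and the count of 10 files are arbitrary demo choices, not part of the original script:

```shell
# Sketch: empirically confirm that the loop body runs once per item.
tmpdir=$(mktemp -d)
for i in 1 2 3 4 5 6 7 8 9 10; do
    touch "$tmpdir/file$i.txt"
done

count=0
for file in "$tmpdir"/*; do
    if [ -f "$file" ]; then
        count=$((count + 1))   # one check (and one print) per file
    fi
done

echo "Items processed: $count"   # with 10 files, the body runs 10 times
rm -rf "$tmpdir"
```

Doubling the number of files in `$tmpdir` doubles the final count, which is exactly the O(n) behavior described above.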
Pattern observation: The work grows directly with the number of files.
Time Complexity: O(n)
This means the running time grows linearly: doubling the number of files roughly doubles the time to finish.
[X] Wrong: "The script runs in the same time no matter how many files there are."
[OK] Correct: Each file adds more work because the loop runs once per file, so more files mean more time.
Understanding how loops over files scale helps you write scripts that handle large directories efficiently and demonstrates that you can reason clearly about script performance.
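For directories with many files, a common alternative pattern uses `find` instead of a glob. This sketch assumes GNU/BSD `find` with `-print0`; the complexity is still O(n), but filenames containing spaces or newlines are handled safely:

```shell
# Sketch: a find-based variant of the loop above. Still O(n), one
# iteration per regular file, but -print0 plus read -d '' makes it
# robust against unusual filenames.
dir="${1:-.}"   # directory to scan; defaults to the current directory

find "$dir" -maxdepth 1 -type f -print0 |
while IFS= read -r -d '' file; do
    echo "Processing $file"
done
```

The `-maxdepth 1` option keeps the behavior close to the original glob, which does not descend into subdirectories.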
"What if we added a nested loop inside to process each file's lines? How would the time complexity change?"