Why Scripts Often Process Text: Bash Scripting Performance Analysis
Scripts often handle text data like logs or configuration files. Understanding how time grows when processing text helps us write faster scripts.
We want to know how the script's work changes as the text size grows.
Analyze the time complexity of the following code snippet.
```shell
#!/bin/bash
filename="data.txt"

while IFS= read -r line; do
    # Spawns echo and grep for every line; grep -q suppresses output
    # and only sets the exit status.
    echo "$line" | grep -q "error"
    if [ $? -eq 0 ]; then
        echo "Found error: $line"
    fi
done < "$filename"
```
This script reads a text file line by line and checks if each line contains the word "error".
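Note that the snippet also starts two extra processes (echo and grep) for every line, which adds constant overhead per iteration. A sketch of the same per-line check using Bash's built-in pattern matching instead; the three-line input file is hypothetical, added only so the sketch runs standalone:

```shell
#!/bin/bash
# Hypothetical sample input so the sketch is self-contained.
printf 'all good\nan error here\nstill fine\n' > data.txt

filename="data.txt"
while IFS= read -r line; do
    # Bash's built-in glob match avoids forking grep for every line.
    if [[ "$line" == *error* ]]; then
        echo "Found error: $line"
    fi
done < "$filename"
# → Found error: an error here
```

The time complexity is unchanged, O(n), but each of the n iterations is cheaper.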
Identify the repeated work: loops, recursion, or array traversals.
- Primary operation: Reading each line and searching for a word using grep.
- How many times: Once for every line in the file.
As the number of lines grows, the script checks each line once.
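To make "once for every line" concrete, here is a small sketch that counts the checks instead of performing them; the input file and its three lines are hypothetical:

```shell
#!/bin/bash
# Hypothetical input: three lines, so we expect exactly three checks.
printf 'a\nb\nerror c\n' > sample.txt

checks=0
while IFS= read -r line; do
    checks=$((checks + 1))   # one check per line, regardless of content
done < sample.txt

echo "Lines checked: $checks"
# → Lines checked: 3
```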
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 line checks |
| 100 | 100 line checks |
| 1000 | 1000 line checks |
Pattern observation: The work grows directly with the number of lines.
Time Complexity: O(n)
This means the script's work grows linearly: doubling the number of lines roughly doubles the running time.
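For comparison, a single grep invocation scans the whole file in one O(n) pass and is usually much faster in practice, because it avoids starting new processes per line. A sketch, again with a hypothetical data.txt so it runs standalone:

```shell
#!/bin/bash
# Hypothetical input file, mirroring the snippet above.
printf 'all good\nan error here\nstill fine\n' > data.txt

# One grep process scans every line: still O(n), but with far less
# overhead than forking echo+grep once per line.
grep "error" data.txt | while IFS= read -r line; do
    echo "Found error: $line"
done
# → Found error: an error here
```

Same complexity class, very different constant factor: Big-O describes growth, not absolute speed.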
[X] Wrong: "The script only runs once, so size doesn't matter."
[OK] Correct: The script processes every line, so more lines mean more work and longer time.
Knowing how text processing scales helps you explain script performance clearly and shows you understand real-world scripting challenges.
"What if the script searched for multiple words per line? How would the time complexity change?"
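One way to explore that question: with k search words, each of the n lines is checked against each pattern, giving roughly O(n · k) work, which is still linear in n when the word list is fixed. A sketch using grep's extended alternation, with two hypothetical search words and a hypothetical input file:

```shell
#!/bin/bash
# Hypothetical input covering both search words.
printf 'all good\nan error here\na warning too\n' > data.txt

# -E enables alternation; each of the n lines is matched against both
# patterns, so total work grows like n * k for k words.
grep -E "error|warning" data.txt
```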