# Lock Files for a Single Instance in Bash Scripting - Time & Space Complexity
When using a lock file in a bash script to enforce a single running instance, we want to know how the script's running time changes as input or usage grows.
We ask: how does the locking step affect the script's speed when many instances attempt to start at once?
Analyze the time complexity of the following code snippet.
```bash
lockfile=/tmp/myscript.lock

if ( set -o noclobber; echo "$$" > "$lockfile" ) 2>/dev/null; then
  trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
  # Critical section: script work here
  sleep 5
  rm -f "$lockfile"
  trap - INT TERM EXIT
else
  echo "Another instance is running. Exiting."
  exit 1
fi
```
This script tries to create a lock file to ensure only one instance runs at a time. With `set -o noclobber`, the `>` redirection fails if the file already exists, so the creation attempt doubles as an atomic test-and-set. If the lock exists, the script exits immediately.
Identify any loops, recursion, or array traversals that repeat work.
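To see the noclobber behavior in isolation, here is a minimal sketch (the `mktemp -u` path is just for this demo) that attempts the same redirection twice: the first attempt creates the file, and the second fails because the file already exists.

```shell
# Demo of noclobber atomicity: path generated with mktemp -u (not yet created).
lockfile=$(mktemp -u "${TMPDIR:-/tmp}/demo.lock.XXXXXX")

# First attempt succeeds: the file does not exist, so > may create it.
if ( set -o noclobber; echo "$$" > "$lockfile" ) 2>/dev/null; then
  echo "first attempt: acquired"
fi

# Second attempt fails: the file now exists, so > refuses to overwrite it.
if ( set -o noclobber; echo "$$" > "$lockfile" ) 2>/dev/null; then
  echo "second attempt: acquired"
else
  echo "second attempt: blocked"
fi

rm -f "$lockfile"
```

Whichever process wins the race gets the lock; every other process sees the redirection fail in a single step, with no window for two instances to both believe they hold the lock.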
- Primary operation: Attempt to create a lock file once per script run.
- How many times: Exactly once per script start; no loops or retries inside this snippet.
Since the script tries to create the lock file only once, the time spent on locking does not grow with input size.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 1 lock attempt |
| 100 | 1 lock attempt |
| 1000 | 1 lock attempt |
Pattern observation: The locking step happens once per run, so it stays constant regardless of input size.
Time Complexity: O(1)
This means the locking operation takes the same amount of time no matter how many times the script runs or how big the input is.
[X] Wrong: "Locking takes longer as more scripts try to run at once because it loops to wait."
[OK] Correct: This script does not retry or loop; it tries once and exits if locked, so locking time stays constant.
Understanding how locking affects script timing helps you write safe scripts that avoid conflicts without slowing down as usage grows.
What if the script retried creating the lock file multiple times with a delay? How would the time complexity change?
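One possible answer, as a sketch: with up to `max_retries` attempts separated by a fixed `delay` (both hypothetical parameters, not part of the original snippet), the locking step performs at most k attempts, so its cost grows to O(k) in the retry limit rather than staying at a single attempt. The waiting time adds roughly k x delay seconds in the worst case.

```shell
# Hypothetical retry variant: up to max_retries attempts with a fixed delay.
# Locking cost is now O(k) in the number of attempts k, not O(1).
lockfile="${TMPDIR:-/tmp}/myscript.lock.$$"   # unique path for this demo
max_retries=5
delay=1
attempt=0
acquired=0

while [ "$attempt" -lt "$max_retries" ]; do
  if ( set -o noclobber; echo "$$" > "$lockfile" ) 2>/dev/null; then
    acquired=1
    break
  fi
  attempt=$((attempt + 1))
  sleep "$delay"
done

if [ "$acquired" -eq 1 ]; then
  trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
  # Critical section: script work here
  rm -f "$lockfile"
  trap - INT TERM EXIT
else
  echo "Could not acquire lock after $max_retries attempts." >&2
  exit 1
fi
```

Note that k is a fixed constant chosen by the script author, not a function of input size, so some would still call this O(1); the practical difference is that a contended run now waits up to k x delay seconds instead of failing instantly.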