Failing fast principle in Jenkins - Time & Space Complexity
When using the failing fast principle in Jenkins pipelines, we want to know how quickly the system stops when an error occurs.
We ask: how does the time spent change as the number of steps grows?
Analyze the time complexity of the following Jenkins pipeline snippet.
```groovy
// stepsList is assumed to be defined elsewhere, e.g. a list of step objects
// that each expose a boolean run() method.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    for (int i = 0; i < stepsList.size(); i++) {
                        if (!stepsList[i].run()) {
                            error('Step failed, stopping pipeline')
                        }
                    }
                }
            }
        }
    }
}
```
This code runs a list of steps one by one and stops immediately if any step fails.
Identify the repeated operations: loops, recursion, and array traversals.
- Primary operation: Loop over stepsList to run each step.
- How many times: Up to the number of steps in stepsList, but may stop early on failure.
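To make the early exit visible, the same loop can be sketched with a counter. This is an illustrative fragment, not the original snippet; `stepsList` is still assumed to be defined elsewhere:

```groovy
script {
    int executed = 0  // how many steps actually ran
    for (int i = 0; i < stepsList.size(); i++) {
        executed++
        if (!stepsList[i].run()) {
            echo "Stopped after ${executed} of ${stepsList.size()} steps"
            error('Step failed, stopping pipeline')  // fail fast: later steps never run
        }
    }
    echo "All ${executed} steps succeeded"  // worst case: executed == n
}
```

The counter shows directly that `executed` ranges from 1 (the first step fails) up to n (no failures, or only the last step fails).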
The pipeline runs steps one by one until a step fails or all steps complete.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Up to 10 steps run, but may stop sooner if a failure occurs. |
| 100 | Up to 100 steps run, stopping early on failure. |
| 1000 | Up to 1000 steps run, stopping early on failure. |
Pattern observation: The time grows linearly with the number of steps but can be less if a failure happens early.
Time Complexity: O(n) in the worst case.
This means the running time grows roughly in direct proportion to the number of steps n. The best case is O(1): if the very first step fails, the pipeline stops after a single step.
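As a side calculation (assuming, purely for illustration, that each step fails independently with probability p), the expected number of steps executed before the loop stops is:

```latex
\mathbb{E}[\text{steps run}]
  = \sum_{k=1}^{n-1} k\,p\,(1-p)^{k-1} + n\,(1-p)^{n-1}
  = \frac{1 - (1-p)^{n}}{p}
```

In the limit p → 0 this approaches n (every step runs); as p grows it drops toward 1, which is the quantitative version of "may stop sooner if a failure occurs."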
[X] Wrong: "The pipeline always runs all steps no matter what."
[OK] Correct: Because of failing fast, the pipeline stops as soon as a step fails, so it may run fewer steps than the total.
Understanding how failing fast affects time helps you explain efficient pipeline design and error handling in real projects.
"What if we changed the pipeline to run all steps even if some fail? How would the time complexity change?"
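One way to sketch that variant (a hedged illustration, again assuming the hypothetical `stepsList`) is Jenkins' built-in `catchError` step, which records a failure without aborting the loop:

```groovy
script {
    int failures = 0
    for (int i = 0; i < stepsList.size(); i++) {
        // catchError marks the build as FAILURE but lets the loop continue
        catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
            if (!stepsList[i].run()) {
                failures++
                error("Step ${i} failed")
            }
        }
    }
    echo "${failures} of ${stepsList.size()} steps failed"
}
```

Every step now runs exactly once, so best and worst case coincide: the time is always proportional to n. The complexity class is still O(n), but the early-exit savings of failing fast are gone.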