Why Jobs Are Jenkins' Core Unit - A Performance Analysis
We want to understand how Jenkins handles work as the number of jobs grows.
How does the time Jenkins takes change when more jobs run?
Analyze the time complexity of the following Jenkins pipeline job execution snippet.
```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
            }
        }
    }
}
```
This pipeline defines a job with two stages that run sequentially: Build and Test.
Look at what repeats when Jenkins runs jobs.
- Primary operation: Executing each stage in the job sequentially.
- How many times: Once per stage, for each job run.
As the number of jobs increases, Jenkins runs more stages one after another.
| Input Size (number of jobs) | Approx. Operations (stages run) |
|---|---|
| 10 | 20 (2 stages x 10 jobs) |
| 100 | 200 (2 stages x 100 jobs) |
| 1000 | 2000 (2 stages x 1000 jobs) |
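The counts in the table follow directly from multiplying stages per job by the number of jobs. A minimal sketch of that model (Python here purely for illustration; Jenkins itself is not involved, and `total_stage_runs` is a hypothetical helper name):

```python
# The example pipeline runs a fixed number of stages per job: Build and Test.
STAGES_PER_JOB = 2

def total_stage_runs(num_jobs: int) -> int:
    """Total stage executions when num_jobs pipeline runs execute sequentially."""
    return STAGES_PER_JOB * num_jobs

# Reproduces the table: work grows in direct proportion to the number of jobs.
for n in (10, 100, 1000):
    print(n, total_stage_runs(n))
```

Because the per-job cost is a constant (two stages), the total is a constant multiple of n, which is the definition of linear, O(n), growth.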
Pattern observation: The total work grows directly with the number of jobs.
Time Complexity: O(n)
This means the time Jenkins takes grows linearly as the number of jobs increases.
[X] Wrong: "Jenkins runs all jobs at the same time, so time stays the same no matter how many jobs there are."
[OK] Correct: Jenkins runs jobs one after another or with limited parallelism, so more jobs mean more total work and longer time.
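The "limited parallelism" case can be modeled roughly: with k executors, jobs run in waves of at most k at a time, so wall-clock time scales with ceil(n/k), which is still linear in n for any fixed k. A simplified sketch, assuming every job takes the same amount of time (`wall_clock_batches` is a hypothetical helper name, not a Jenkins API):

```python
import math

def wall_clock_batches(num_jobs: int, executors: int) -> int:
    """Number of sequential 'waves' needed when only `executors` jobs can
    run at once. Assumes uniform job duration; a simplification."""
    return math.ceil(num_jobs / executors)

# Doubling the jobs doubles the waves, so elapsed time stays O(n) for fixed k.
print(wall_clock_batches(100, 4))  # 25 waves
print(wall_clock_batches(200, 4))  # 50 waves
```

Adding executors shrinks the constant factor (each wave clears k jobs instead of one), but it does not change the linear growth rate.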
Understanding how Jenkins handles jobs helps you explain how build pipelines scale and how to manage workload efficiently.
What if Jenkins could run all jobs fully in parallel? How would the time complexity change?
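One way to reason about that question: with unlimited executors and fully independent jobs, the wall-clock time would be roughly one job's duration regardless of n, i.e. O(1), while the total work across all executors would still be O(n). A toy model under those assumptions (the function names are illustrative, not real Jenkins APIs):

```python
def sequential_time(num_jobs: int, job_time: float = 1.0) -> float:
    """Wall-clock time when jobs run one after another: grows with n -> O(n)."""
    return num_jobs * job_time

def fully_parallel_time(num_jobs: int, job_time: float = 1.0) -> float:
    """Wall-clock time with unlimited executors: every job starts at once,
    so elapsed time is one job's duration -> O(1) in n.
    Total work (summed across executors) is still num_jobs * job_time -> O(n)."""
    return job_time if num_jobs > 0 else 0.0

print(sequential_time(1000))      # 1000.0 time units
print(fully_parallel_time(1000))  # 1.0 time unit
```

In practice, shared resources (the controller, the queue, artifact storage) keep real Jenkins deployments somewhere between these two extremes.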