Test result trends in Jenkins - Time & Space Complexity
We want to understand how the time to analyze test result trends changes as the number of test results grows: how much more work is needed when there are more results to process?
Analyze the time complexity of the following Jenkins pipeline snippet.
```groovy
pipeline {
    agent any
    stages {
        stage('Analyze Test Results') {
            steps {
                script {
                    // Publish JUnit results from the build workspace.
                    def results = junit '**/test-results/*.xml'
                    def trendData = []
                    // Illustrative loop: one status lookup per test result.
                    for (result in results) {
                        trendData.add(result.getStatus())
                    }
                }
            }
        }
    }
}
```
This code collects the test results and loops through each one to gather status data for trend analysis.
Look for repeated actions in the code.
- Primary operation: Looping through each test result to get its status.
- How many times: Once for every test result found.
As the number of test results increases, the loop runs more times.
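The loop's cost can be sketched outside Jenkins. This is a hypothetical Python stand-in for the Groovy loop (the `results` dictionaries are invented for illustration, not the Jenkins API): each result contributes exactly one status lookup, so the operation count matches the input size.

```python
def collect_trend_data(results):
    """Mimic the pipeline loop: one status lookup per result -> O(n)."""
    trend_data = []
    for result in results:
        trend_data.append(result["status"])  # one operation per result
    return trend_data

# The list grows one-for-one with the number of results processed.
for n in (10, 100, 1000):
    results = [{"status": "PASSED"} for _ in range(n)]
    assert len(collect_trend_data(results)) == n
```

Running the sketch with n = 10, 100, and 1000 reproduces the counts in the table below.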
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 status checks |
| 100 | 100 status checks |
| 1000 | 1000 status checks |
Pattern observation: The work grows directly with the number of test results.
Time Complexity: O(n)
This means analysis time grows linearly: doubling the number of test results roughly doubles the work.
[X] Wrong: "The time to analyze test results stays the same no matter how many results there are."
[OK] Correct: Each test result needs to be checked, so more results mean more work and more time.
Understanding how processing time grows with data size helps you explain efficiency in real projects.
"What if we stored test results in batches and processed only new batches each time? How would the time complexity change?"