Artifact fingerprinting in Jenkins - Time & Space Complexity
We want to understand how the time needed to fingerprint artifacts grows as we handle more files.
How does the process scale when the number of artifacts increases?
Analyze the time complexity of the following code snippet.
```groovy
pipeline {
    agent any
    stages {
        stage('Fingerprint Artifacts') {
            steps {
                fingerprint '**/*.jar'
            }
        }
    }
}
```
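Jenkins' fingerprint step records an MD5 checksum for each matching file so the artifact can be tracked across builds. As a rough sketch of the per-file work (plain Python, not the Jenkins implementation — `fingerprint_workspace` and the 8 KiB chunk size are illustrative choices, not Jenkins internals):

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """MD5-digest one file, reading it in chunks to bound memory use."""
    md5 = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    return md5.hexdigest()

def fingerprint_workspace(workspace: Path, pattern: str = "**/*.jar") -> dict:
    """Hash every file matching the glob -- one pass, one hash per file."""
    return {str(p): fingerprint(p)
            for p in sorted(workspace.glob(pattern)) if p.is_file()}
```

A workspace with n matching jar files triggers exactly n hash computations, which is the source of the linear cost analyzed below.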
This pipeline fingerprints all jar files found in the workspace to track their usage.
Identify the repeated operations: loops, recursion, or traversals over collections of files.
- Primary operation: Scanning and fingerprinting each matching artifact file.
- How many times: Once for each jar file found in the workspace.
As the number of jar files increases, the fingerprinting work grows proportionally.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 fingerprint operations |
| 100 | 100 fingerprint operations |
| 1000 | 1000 fingerprint operations |
Pattern observation: The time grows directly with the number of files; doubling files doubles work.
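The pattern in the table can be sketched as a simple counting model — `simulate_fingerprinting` below is a hypothetical stand-in for the real hashing work, counting one operation per file:

```python
def simulate_fingerprinting(jar_files: list) -> int:
    """Visit each file exactly once; work grows linearly with the file count."""
    operations = 0
    for _ in jar_files:
        operations += 1  # one hash computation per file
    return operations

# Doubling the input doubles the operation count:
for n in (10, 100, 1000):
    jars = [f"artifact-{i}.jar" for i in range(n)]
    print(n, simulate_fingerprinting(jars))
```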
Time Complexity: O(n)
This means the time to fingerprint grows linearly with the number of artifacts: doubling the artifacts doubles the time.
[X] Wrong: "Fingerprinting all files takes the same time no matter how many files there are."
[OK] Correct: Each file must be processed separately, so more files mean more time.
Understanding how fingerprinting scales helps you explain build pipeline performance clearly and confidently.
"What if we fingerprinted only changed files instead of all files? How would the time complexity change?"
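One possible answer to explore: keep a cache from the previous run and re-hash only files whose metadata changed. A minimal Python sketch, assuming an mtime-plus-size check is enough to detect changes (`fingerprint_changed` and its cache format are hypothetical, not a Jenkins feature):

```python
import hashlib
from pathlib import Path

def fingerprint_changed(workspace: Path, cache: dict, pattern: str = "**/*.jar") -> dict:
    """Re-hash only files whose mtime or size changed since the last run.

    `cache` maps path -> (stamp, digest), where stamp = (mtime_ns, size).
    Hashing cost drops from O(n) files to O(k) changed files, though the
    directory scan itself is still O(n).
    """
    updated = {}
    for p in sorted(workspace.glob(pattern)):
        if not p.is_file():
            continue
        st = p.stat()
        stamp = (st.st_mtime_ns, st.st_size)
        key = str(p)
        cached = cache.get(key)
        if cached is not None and cached[0] == stamp:
            updated[key] = cached                   # unchanged: reuse old digest
        else:
            digest = hashlib.md5(p.read_bytes()).hexdigest()
            updated[key] = (stamp, digest)          # new or changed: re-hash
    return updated
```

Under these assumptions the expensive hashing work shrinks to the k changed files, while the glob scan still touches all n entries — so the overall complexity becomes O(n) scan + O(k) hashing rather than O(n) hashing.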