Pipeline utility functions in Jenkins - Time & Space Complexity
We want to understand how the time needed to run pipeline utility functions changes as the input size grows.
Specifically, how does the number of operations increase when processing lists or files in Jenkins pipelines?
Analyze the time complexity of the following Jenkins pipeline code snippet.
```groovy
def files = findFiles(glob: '**/*.txt')
files.each { file ->
    def content = readFile(file.path)
    echo "Processing ${file.name}"
}
```
This code finds all text files in the workspace and reads each file's content one by one.
Look for loops or repeated actions in the code.
- Primary operation: looping over each file found by `findFiles`.
- How many times: once for each text file in the workspace.
As the number of text files increases, the number of times we read and process files grows.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 file reads and echoes |
| 100 | About 100 file reads and echoes |
| 1000 | About 1000 file reads and echoes |
Pattern observation: The work grows directly with the number of files; doubling files doubles work.
Time Complexity: O(n)
This means the total running time grows in direct proportion to the number of files processed.
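The linear pattern can be checked outside Jenkins with plain Groovy: the hypothetical `countOperations` helper below simply tallies one "read and echo" per item, mirroring the `each` loop in the pipeline.

```groovy
// Plain Groovy sketch (not a Jenkins step): count one operation per file.
def countOperations(int n) {
    def ops = 0
    (1..n).each { ops++ }  // one readFile + echo per file in the pipeline loop
    return ops
}

assert countOperations(10)   == 10
assert countOperations(100)  == 100
assert countOperations(1000) == 1000
```

Doubling `n` doubles the count, which is exactly the O(n) behavior shown in the table above.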
[X] Wrong: "Reading all files happens instantly regardless of how many files there are."
[OK] Correct: Each file read takes time, so more files mean more total time.
Understanding how loops over files or data affect time helps you explain pipeline efficiency clearly in real projects.
"What if we used parallel steps to read files instead of a simple loop? How would the time complexity change?"