GitLab CI with Docker - Time & Space Complexity
We want to understand how the total execution time of a GitLab CI pipeline using Docker grows as we add more jobs or steps.
Analyze the time complexity of the following GitLab CI snippet using Docker.
```yaml
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - test

build_job:
  stage: build
  script:
    - docker build -t myapp .

test_job:
  stage: test
  script:
    - docker run myapp pytest
```
This pipeline builds a Docker image and then runs tests inside a container from that image.
Look for repeated steps or loops in the pipeline.
- Primary operation: Each job runs its script commands sequentially.
- How many times: Each job runs once per pipeline execution; there are no loops in the snippet.
As we add more jobs or steps, the total time grows roughly by adding each job's time.
| Input Size (number of jobs) | Approx. Operations (job runs) |
|---|---|
| 2 | 2 jobs run sequentially |
| 10 | 10 jobs run sequentially |
| 100 | 100 jobs run sequentially |
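For instance, extending the pipeline with a hypothetical third stage (the job name and script below are illustrative, not part of the original snippet) simply appends that job's runtime to the total:

```yaml
stages:
  - build
  - test
  - deploy   # hypothetical extra stage: its runtime adds to the total

deploy_job:
  stage: deploy
  script:
    # runs only after the build and test stages finish,
    # so pipeline time ≈ build time + test time + deploy time
    - docker run myapp ./deploy.sh
```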
Pattern observation: The total time grows linearly as more jobs are added.
Time Complexity: O(n)
This means the total pipeline time grows in direct proportion to the number of jobs.
[X] Wrong: "Adding more jobs will not increase total time because they run in parallel."
[OK] Correct: Jobs in the *same* stage can run in parallel, but this pipeline puts each job in its own stage. Stages execute sequentially, so the job times add up rather than overlap.
Understanding how pipeline steps add up helps you design efficient CI/CD workflows and explain your choices clearly.
What if we changed the pipeline to run all jobs in parallel? How would the time complexity change?
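One way to sketch the parallel version (assuming enough concurrent runners are available; the split into unit and integration jobs is hypothetical):

```yaml
stages:
  - build
  - test

build_job:
  stage: build
  script:
    - docker build -t myapp .

# Both test jobs share the "test" stage, so GitLab can run them
# at the same time once the build stage finishes.
test_unit:
  stage: test
  script:
    - docker run myapp pytest tests/unit

test_integration:
  stage: test
  script:
    - docker run myapp pytest tests/integration
```

With enough runners, the test stage takes roughly as long as its slowest job (a max, not a sum), so adding more parallel test jobs no longer increases total time linearly; the pipeline's wall-clock time approaches the build time plus the longest test job.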