Building Docker Images in a CI Pipeline - Time & Space Complexity
When building Docker images in a CI pipeline, it is important to understand how build time grows as the project gains files, dependencies, or build steps.
Analyze the time complexity of the following Docker build commands in a CI pipeline.
```dockerfile
FROM python:3.12-slim
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY . /app
RUN python setup.py install
```
This Dockerfile installs dependencies and copies project files to build the image.
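The scaling behavior can be sketched with a toy cost model. All constants below (per-file copy cost, per-dependency install cost, fixed overhead) are illustrative assumptions, not Docker internals:

```python
# Hypothetical cost model for the Dockerfile above: each layer costs a fixed
# setup time plus a per-item time proportional to the data it touches.
# All numbers are illustrative assumptions, not measurements.

def estimate_build_time(n_files, n_deps,
                        copy_cost_per_file=0.01,   # seconds per file copied (assumed)
                        install_cost_per_dep=1.0,  # seconds per dependency (assumed)
                        fixed_overhead=5.0):       # base image pull, layer setup, etc.
    """Estimated build time in seconds: O(n) in files plus dependencies."""
    copy_requirements = copy_cost_per_file          # COPY requirements.txt (one file)
    pip_install = install_cost_per_dep * n_deps     # RUN pip install -r ...
    copy_project = copy_cost_per_file * n_files     # COPY . /app
    setup_install = copy_cost_per_file * n_files    # RUN python setup.py install
    return fixed_overhead + copy_requirements + pip_install + copy_project + setup_install

print(estimate_build_time(100, 20))    # small project
print(estimate_build_time(1000, 20))   # 10x the files: file-dependent steps take ~10x
```

Because every term is either constant or proportional to `n_files` or `n_deps`, the total is linear in the input size.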
Look for steps that repeat or scale with input size.
- Primary operation: Copying project files and installing dependencies.
- How many times: Each RUN and COPY command runs once, but the amount of data copied and installed grows with project size.
As the number of files or dependencies grows, the time to copy and install increases roughly in proportion.
| Input Size (n) | Approx. Build Time |
|---|---|
| 10 files/deps | Baseline copy and install time |
| 100 files/deps | About 10 times the baseline |
| 1000 files/deps | About 100 times the baseline |
Pattern observation: Time grows roughly linearly with the amount of data handled.
Time Complexity: O(n)
This means the build time grows in direct proportion to the size of the files and dependencies being processed.
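One way to check this pattern empirically is to plot build time against input size on log-log axes: linear growth gives a slope near 1. The sketch below uses assumed (not measured) timings that include a fixed overhead:

```python
# Sketch: verifying the linear pattern from (assumed, not measured) build
# times. With a fixed overhead of ~5 s, the log-log slope between the two
# largest inputs should be close to 1.0 for O(n) growth.
import math

measurements = {10: 8.0, 100: 35.0, 1000: 305.0}  # n -> seconds (illustrative)

slope = (math.log(measurements[1000]) - math.log(measurements[100])) \
        / (math.log(1000) - math.log(100))
print(f"log-log slope (100 -> 1000): {slope:.2f}")  # near 1.0 => linear growth
```

At small n the fixed overhead (base image pull, layer setup) dominates, which is why the slope is measured between the larger inputs.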
[X] Wrong: "Adding more files won't affect build time much because Docker caches layers."
[OK] Correct: Caching helps, but any changed or added file invalidates the affected layer and every layer after it, so those steps re-run and build time still grows with input size.
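The cache-invalidation behavior can be modeled in a few lines. This is a simplified sketch, not Docker's actual implementation: each layer's cache key is derived from its instruction, its input content, and its parent layer, so one miss forces every later layer to rebuild:

```python
# Simplified model of Docker layer caching (illustrative, not Docker's code).
# A layer's key depends on the instruction, the content it uses, and the
# parent layer's key; once one layer misses, all later layers must rebuild.
import hashlib

def layer_key(instruction, content, parent_key):
    h = hashlib.sha256()
    h.update(parent_key.encode())
    h.update(instruction.encode())
    h.update(content.encode())
    return h.hexdigest()

def plan_build(layers, cache):
    """Return (instruction, 'cached' | 'rebuild') for each layer."""
    parent, plan = "base", []
    for instruction, content in layers:
        key = layer_key(instruction, content, parent)
        hit = key in cache
        if not hit:
            cache.add(key)
        plan.append((instruction, "cached" if hit else "rebuild"))
        parent = key
    return plan

cache = set()
v1 = [("COPY requirements.txt", "flask==3.0"),
      ("RUN pip install", "flask==3.0"),
      ("COPY . /app", "src hash aaa"),
      ("RUN python setup.py install", "src hash aaa")]
print(plan_build(v1, cache))  # first build: every layer rebuilds

v2 = [("COPY requirements.txt", "flask==3.0"),
      ("RUN pip install", "flask==3.0"),
      ("COPY . /app", "src hash bbb"),  # one source file changed
      ("RUN python setup.py install", "src hash bbb")]
print(plan_build(v2, cache))  # dependency layers cached; project layers rebuild
```

Note that the dependency layers stay cached only because `requirements.txt` is copied and installed before `COPY . /app`; ordering layers from least- to most-frequently-changed is what keeps the re-run work proportional to what actually changed.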
Understanding how build time scales helps you design efficient CI pipelines and explain trade-offs clearly in real projects.
What if we split the Dockerfile into multiple smaller images? How would the time complexity change?