Reducing final image size by 80 percent in Docker - Time & Space Complexity
When we shrink a Docker image, we want to know how the build effort grows as the image's content grows.
We ask: how does the time to build and optimize the image change as the number of layers or files increases?
Analyze the time complexity of the following Dockerfile snippet.
```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN rm -rf tests docs
CMD ["python", "app.py"]
```
This Dockerfile installs dependencies, copies the application files, and then deletes unneeded folders. Note that because `rm -rf` runs in its own layer, the deleted files still exist in the earlier `COPY` layer of the final image; truly shrinking the image requires excluding those files up front (for example with a `.dockerignore`) or using a multi-stage build.
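One way to avoid copying the unwanted folders at all is a `.dockerignore` file, which excludes paths from the build context before `COPY . .` runs. A minimal sketch, assuming the folder names from the snippet above:

```
# .dockerignore — paths excluded from the build context
tests/
docs/
```

With this in place, the `RUN rm -rf tests docs` step becomes unnecessary, and the copy step processes fewer files to begin with.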
Identify the operations that repeat, the build-time analogue of loops or array traversals in code.
- Primary operation: Copying files and installing dependencies.
- How many times: Each file and dependency is processed once during build.
As the number of files and dependencies grows, the build time grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 files/deps | About 10 operations |
| 100 files/deps | About 100 operations |
| 1000 files/deps | About 1000 operations |
Pattern observation: Doubling files or dependencies roughly doubles the work.
Time Complexity: O(n)
This means the build time grows linearly with the number of files and dependencies processed.
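The linear pattern in the table can be sketched as a toy cost model. This is a hypothetical simplification, assuming each copied file and each installed dependency costs one unit of work:

```python
def build_operations(num_files: int, num_deps: int) -> int:
    """Model build work as one operation per file copied and per dependency installed."""
    ops = 0
    for _ in range(num_files):  # COPY touches each file once
        ops += 1
    for _ in range(num_deps):   # pip install processes each dependency once
        ops += 1
    return ops

# Doubling the input roughly doubles the work: the O(n) pattern.
print(build_operations(8, 2))    # 10 items -> 10 operations
print(build_operations(80, 20))  # 100 items -> 100 operations
```

Real builds add per-layer overhead and network time, but the dominant cost still scales with the number of items processed.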
[X] Wrong: "Removing files after copying them doesn't affect build time much."
[OK] Correct: The removed files were still copied into the image first, so they contribute to build time, and the extra `rm` layer adds work proportional to the number of files deleted.
Understanding how build time scales with image content helps you explain trade-offs in Docker optimization clearly and confidently.
"What if we used multi-stage builds to copy only needed files? How would the time complexity change?"