Why multi-stage builds reduce image size in Docker - Performance Analysis
We want to understand how the work done by Docker changes when using multi-stage builds.
Specifically, how does the build process scale with the number of steps and files involved?
Analyze the time complexity of the following Dockerfile using multi-stage builds.
```dockerfile
# Stage 1: build the app inside a full Go toolchain image
FROM golang:1.20 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp

# Stage 2: copy only the compiled binary into a minimal runtime image
FROM alpine:latest
COPY --from=builder /app/myapp /myapp
CMD ["/myapp"]
```
This Dockerfile first builds the app in a full Go environment, then copies only the final app binary into a small Alpine image.
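How much of this work Docker redoes on each rebuild depends on layer caching. A common refinement (a sketch, not part of the original file; the `go.mod`/`go.sum` names assume a standard Go module layout) copies the dependency manifests before the rest of the source, so the dependency download layer stays cached when only source code changes:

```dockerfile
FROM golang:1.20 AS builder
WORKDIR /app
# Copy only the module files first, so the download layer below is
# reused from cache unless the dependencies themselves change.
COPY go.mod go.sum ./
RUN go mod download
# Copying the n source files invalidates only the layers from here on.
COPY . .
RUN go build -o myapp

FROM alpine:latest
COPY --from=builder /app/myapp /myapp
CMD ["/myapp"]
```

This doesn't change the worst-case complexity, but it means a typical incremental rebuild skips the dependency step entirely.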
Identify the repeated operations: which steps loop over the input (here, the source files)?
- Primary operation: Copying and building source files in the builder stage.
- How many times: Each source file is processed once during the build.
As the number of source files n grows, the `COPY . .` and `go build` steps take longer, because each file must be transferred into the build context and then compiled.
| Input Size (n files) | Approx. Operations |
|---|---|
| 10 | Build processes 10 files |
| 100 | Build processes 100 files |
| 1000 | Build processes 1000 files |
Pattern observation: The build time grows roughly in direct proportion to the number of files.
Time Complexity: O(n)
This means the build time grows linearly with the number of source files processed.
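The O(n) pattern can be illustrated with a toy model (a sketch of the counting argument, not a measurement of Docker itself): if each source file contributes a roughly constant amount of copy-and-compile work, total work grows linearly with the file count.

```python
def build_operations(n_files: int, ops_per_file: int = 1) -> int:
    """Toy model: each source file is copied and compiled once,
    contributing a constant number of operations."""
    total = 0
    for _ in range(n_files):  # one pass over the input -> O(n)
        total += ops_per_file
    return total

# Doubling the input roughly doubles the work, matching the
# linear pattern in the table above.
for n in (10, 100, 1000):
    print(n, build_operations(n))
```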
[X] Wrong: "Multi-stage builds always make the build faster."
[OK] Correct: Multi-stage builds reduce the final image size, but the build time still depends on how many files are compiled and copied.
Understanding how build steps scale helps you explain trade-offs between build speed and image size in real projects.
What if we copied all source files into the final image instead of just the built binary? How would the time complexity and image size be affected?
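One way to explore that question is a hypothetical single-stage version that ships the whole source tree (a sketch for comparison, not the recommended setup):

```dockerfile
# Hypothetical single-stage build: everything stays in the final image.
FROM golang:1.20
WORKDIR /app
COPY . .
RUN go build -o myapp
# The image now contains the Go toolchain plus all n source files in
# addition to the binary, so its size grows with n.
CMD ["/app/myapp"]
```

The time complexity stays O(n), since each file is still processed once, but the image size is no longer roughly constant: it grows with the source tree and the toolchain. The multi-stage version keeps the image close to the size of the binary plus the Alpine base.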