What is the main reason to use multi-stage builds when aiming to reduce Docker image size by 80%?
Think about how build tools and runtime environments differ in what files they need.
Multi-stage builds let you compile or build your app in one stage that contains the full toolchain, then copy only the final artifacts into a smaller base image. The compilers, sources, and intermediate build files never enter the final image, which is what cuts its size so dramatically.
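As a minimal sketch of the pattern (the Go toolchain and the app name myapp are illustrative assumptions, not required by multi-stage builds):

```dockerfile
# Stage 1: build with the full toolchain (hypothetical Go app "myapp")
FROM golang:1.20 AS builder
WORKDIR /app
COPY . .
# CGO_ENABLED=0 yields a static binary that runs on Alpine without glibc
RUN CGO_ENABLED=0 go build -o myapp

# Stage 2: start from a tiny base and copy in only the finished binary
FROM alpine:latest
COPY --from=builder /app/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Only the second stage becomes the shipped image; the builder stage with the ~800 MB Go toolchain is discarded after the build.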
Given a Dockerfile that uses alpine as the base image and copies only the compiled binary, what is the expected approximate size of the final image?
Alpine Linux base images are known for being very small.
Alpine base images are minimal, around 5 MB. The final image is roughly the base image plus the binary, so copying in only a small compiled binary keeps the total close to 5 MB.
Which Dockerfile snippet correctly uses multi-stage build to reduce final image size by copying only the compiled binary?
FROM golang:1.20 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp

FROM alpine:latest
COPY --from=builder /app/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
Look for the snippet that builds in one stage and copies only the binary to a smaller image.
The snippet above uses a multi-stage build: the first stage compiles the app with the Go toolchain; the second stage starts from Alpine and copies in only the binary. This significantly reduces the final image size.
You used a multi-stage Dockerfile to reduce image size, but the final image is still over 500 MB. What is the most likely cause?
Check what files are copied into the final image.
If the final stage copies the whole build directory instead of just the compiled binary, all the source files, dependencies, and build artifacts end up in the final image, keeping it large.
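The difference is one COPY line in the final stage (paths here are illustrative):

```dockerfile
FROM alpine:latest
# Too broad: drags sources, vendored dependencies, and build caches along
# COPY --from=builder /app/ /app/

# Correct: only the compiled binary crosses into the final image
COPY --from=builder /app/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Running `docker history` on a bloated image usually reveals exactly which COPY layer carries the unwanted weight.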
In a CI/CD pipeline, what is the best practice for consistently reducing Docker image size by 80%?
Think about combining multiple strategies for image size reduction.
Combining multi-stage builds, cleanup of build caches and temporary files, and minimal base images ensures consistently small, efficient images in automated pipelines.
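To keep the reduction consistent rather than a one-off, the pipeline itself can enforce a size budget. A sketch as a hypothetical GitHub Actions job (the image name, tag, and 100 MB limit are all assumptions for illustration):

```yaml
jobs:
  build-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the multi-stage image
        run: docker build -t myapp:ci .
      - name: Enforce image size budget (illustrative 100 MB limit)
        run: |
          size=$(docker image inspect myapp:ci --format '{{.Size}}')
          if [ "$size" -gt 104857600 ]; then
            echo "Image is $size bytes, over the 100 MB budget" >&2
            exit 1
          fi
```

Failing the build on a size regression catches mistakes like copying the whole build directory before they reach production.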