Why do we use multi-stage builds in Dockerfiles?
Think about how multi-stage builds help with image size and efficiency.
Multi-stage builds let you separate build-time tooling from the runtime image: you build in one stage and copy only the needed artifacts into the final stage, making the resulting image smaller and cleaner.
What will the final image contain after running the following build command, given that the Dockerfile has two stages named builder and final, and the final stage copies files from builder?
docker build -t myapp .
FROM node:18 AS builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine AS final
COPY --from=builder /app/build /usr/share/nginx/html
Consider how --from=builder works in multi-stage builds.
The final stage copies only the build output (/app/build) from the builder stage, so the final image contains just nginx and the built static files; the Node.js toolchain and node_modules from the builder stage are not included.
Which Dockerfile correctly uses multi-stage builds to compile a Go app and produce a minimal final image?
Remember the builder stage needs Go tools, and the final image should be minimal.
Option A uses a golang image to compile the binary in a builder stage, then copies only the resulting binary into a small alpine image for running; this is the correct multi-stage pattern.
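A minimal sketch of that pattern (the image tags, module layout, and output paths are illustrative, not from the question's actual options):

```dockerfile
# Builder stage: has the full Go toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY go.mod ./
RUN go mod download
COPY . .
# Build a static binary so it runs on a minimal base image
RUN CGO_ENABLED=0 go build -o /out/app .

# Final stage: minimal runtime image, no Go toolchain
FROM alpine:3.20
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

The final image carries only the compiled binary and the alpine base, not the multi-hundred-megabyte Go toolchain.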
You wrote a multi-stage Dockerfile where the final stage copies files from the builder stage, but after building, the final image is missing those files. What is the most likely cause?
Check the paths and build steps in the builder stage.
The most likely cause is that the files never existed at the path referenced by COPY --from= in the builder stage: the build step wrote its output to a different directory, failed without stopping the build, or the path in the COPY instruction has a typo. COPY --from= takes whatever is at that exact path in the builder stage's filesystem, so a mismatch leaves the final image without the files.
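The fragment below illustrates the kind of path mismatch described above (paths are hypothetical; the actual output directory depends on the project's build tooling):

```dockerfile
FROM node:18 AS builder
WORKDIR /app
COPY . .
# Suppose the build script writes its output to /app/dist, not /app/build
RUN npm run build

FROM nginx:alpine
# Bug: /app/build does not exist in the builder stage, so this COPY
# fails the build (or, with a glob pattern, silently copies nothing)
COPY --from=builder /app/build /usr/share/nginx/html
# Fix: COPY --from=builder /app/dist /usr/share/nginx/html
```

To debug, you can build only the builder stage and inspect its filesystem: docker build --target builder -t debug . and then docker run --rm -it debug sh.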
You want to optimize your CI pipeline to build Docker images faster using multi-stage builds. Which approach will best speed up builds while keeping images small?
Think about caching and minimizing layers in multi-stage builds.
Ordering the builder stage so the dependency-install layer is cached (copy the dependency manifest first, install, then copy the rest of the source) avoids reinstalling dependencies on every build, and copying only the build artifacts into the final stage keeps the image small while the builds stay fast.
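With BuildKit, you can additionally persist the package manager's download cache across builds using a cache mount; a sketch assuming an npm project (the cache path varies by package manager):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:18 AS builder
WORKDIR /app
# Copy only the manifests first so this layer stays cached
# until the dependencies themselves change
COPY package.json package-lock.json ./
# BuildKit cache mount: npm's download cache survives between builds
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
```

Editing application source then invalidates only the COPY . . layer and later ones; the npm ci layer is reused from cache.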