A well-crafted Dockerfile is the foundation of a reliable, secure, and efficient containerized application. While it's easy to get a service running in a container, a naive Dockerfile can lead to bloated images, slow build times, and security vulnerabilities. To optimize your Dockerfile for production, you need to go beyond the basics.
This guide walks you through 10 essential best practices for creating production-ready Dockerfiles. By applying these techniques, you will build smaller images, accelerate your CI/CD pipelines, and harden your application's security posture.
1. Use Multi-Stage Builds for Leaner Images
A multi-stage build is the single most effective technique for reducing your final image size. It lets you use one build stage with a full build environment (compilers, dev dependencies, SDKs) and then copy only the necessary application artifacts into a second, minimal runtime stage that becomes the final image.
This separates the build-time dependencies from the runtime dependencies, ensuring your final image contains only what's needed to run the application.
Here is an example for a Go application.
Before: Single-Stage Build
# Dockerfile
FROM golang:1.21
WORKDIR /app
# Copy all source code and dependencies
COPY . .
# Build the application
RUN go build -o /app/my-app
# Expose port and run the app
EXPOSE 8080
CMD ["/app/my-app"]
This image is large because it includes the entire Go SDK and all the source code.
After: Multi-Stage Build
# Dockerfile
# Stage 1: Build the application
FROM golang:1.21 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/my-app
# Stage 2: Create the final, minimal image
FROM gcr.io/distroless/static-debian11
WORKDIR /
# Copy only the compiled binary from the builder stage
COPY --from=builder /app/my-app /my-app
EXPOSE 8080
CMD ["/my-app"]
The final image is a fraction of the original size because it only contains the compiled Go binary and its runtime dependencies, not the Go toolchain.
2. Choose a Minimal Base Image
The base image you choose has a significant impact on size, security, and performance. Avoid using full OS images like ubuntu or centos when a smaller, purpose-built image will suffice.
- Alpine: A popular choice based on Alpine Linux. It's very small (around 5 MB) but uses musl libc instead of glibc, which can cause compatibility issues with some software.
- Slim: Many official images offer a slim variant (e.g., python:3.11-slim). These are based on a minimal Debian release and offer a good balance of size and compatibility.
- Distroless: Maintained by Google, distroless images contain only your application and its runtime dependencies. They do not include a shell, package manager, or other utilities, which dramatically reduces the attack surface.
Here's how the base image affects a simple Python app.
| Base Image | Size (approx.) | Security Footprint |
|---|---|---|
| python:3.11 | ~900 MB | High |
| python:3.11-slim | ~120 MB | Medium |
| python:3.11-alpine | ~50 MB | Low |
Always choose the most minimal base image that is compatible with your application.
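As a minimal sketch of the slim option, a Python service image might look like this (file names such as requirements.txt and app.py are illustrative, not from a specific project):

```dockerfile
# Debian-based slim image: small, but still glibc-compatible
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer caches well
COPY requirements.txt .
# --no-cache-dir keeps pip's download cache out of the image
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

If your dependencies need glibc (many compiled wheels do), slim is usually the safer choice over alpine.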
3. Leverage Build Cache with Correct Layer Ordering
Docker builds images in layers. Each instruction in a Dockerfile creates a new layer. Docker caches these layers and reuses them on subsequent builds if the instruction and its inputs have not changed.
To take advantage of the build cache, order your instructions from least to most frequently changed.
Before: Inefficient Layering
# Dockerfile
FROM node:20-slim
WORKDIR /app
# Copies all source code, invalidating the cache on any file change
COPY . .
# Installs dependencies
RUN npm install
EXPOSE 3000
CMD ["node", "src/index.js"]
In this example, changing a single line of code in a source file invalidates the COPY . . layer, forcing Docker to re-run npm install on every build, even if package.json hasn't changed.
After: Optimized Layering
# Dockerfile
FROM node:20-slim
WORKDIR /app
# Copy only the files needed for dependency installation first
COPY package.json package-lock.json ./
# This layer is only invalidated when package files change
RUN npm install
# Now copy the rest of the source code
COPY . .
EXPOSE 3000
CMD ["node", "src/index.js"]
By copying package.json and running npm install first, Docker can reuse the cached node_modules layer as long as your dependencies remain unchanged. This makes subsequent builds much faster.
4. Use a .dockerignore File
The .dockerignore file works just like .gitignore. It lets you exclude files and directories from the build context - the set of files sent to the Docker daemon during a build.
Sending an unnecessarily large build context slows down the build process and can bloat your image with files that aren't needed at runtime.
Create a .dockerignore file in the same directory as your Dockerfile.
# .dockerignore
# Git and CI/CD files
.git
.github
.vscode
# Local environment and logs
.env
*.log
npm-debug.log
# Dependencies that will be installed inside the container
node_modules
# OS-specific files
.DS_Store
Thumbs.db
This simple step prevents sensitive files, local configurations, and bulky directories from ever reaching your image.
5. Consolidate RUN Instructions and Clean Up
Each RUN instruction creates a new image layer. To reduce the number of layers and the final image size, consolidate related commands into a single RUN instruction using the && operator.
Crucially, you should also clean up any temporary files or package manager caches within the same RUN instruction. If you clean up in a separate RUN command, the files from the previous layer are still part of the image, even if they appear deleted.
Before: Multiple Layers and No Cleanup
# Dockerfile
FROM debian:bullseye-slim
RUN apt-get update
RUN apt-get install -y curl
# Cache files from apt-get are left behind
After: Consolidated and Cleaned
# Dockerfile
FROM debian:bullseye-slim
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*
# The layer is created after the cache is cleaned, reducing image size
This single RUN command updates the package list, installs curl, and cleans up the cache all within one layer, resulting in a smaller final image.
6. Prefer COPY Over ADD
Both COPY and ADD can be used to get files into your image, but they have key differences. COPY is more explicit and predictable. It simply copies files and directories from your build context into the image.
ADD has additional functionality: it can fetch remote URLs and automatically extract compressed files (like .tar.gz). While sometimes useful, this magic can lead to unexpected behavior and security risks. For example, a remote URL could change, or a compressed file could contain a path traversal vulnerability (../../).
Unless you specifically need the auto-extraction or URL-fetching features of ADD, always use COPY.
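The contrast can be sketched in a few lines. The archive name, URL, and checksum below are placeholders for illustration, not real artifacts:

```dockerfile
# COPY is explicit: it only copies files from the build context
COPY ./config/app.conf /etc/app/app.conf

# ADD would silently extract this archive into the image -- easy to misuse:
# ADD vendor.tar.gz /opt/vendor/

# If you need a remote file, fetch and verify it explicitly instead
# (URL and checksum are placeholders):
RUN curl -fsSL https://example.com/tool.tar.gz -o /tmp/tool.tar.gz && \
    echo "<expected-sha256>  /tmp/tool.tar.gz" | sha256sum -c - && \
    tar -xzf /tmp/tool.tar.gz -C /opt && \
    rm /tmp/tool.tar.gz
```

Fetching with an explicit RUN lets you verify a checksum, which ADD with a URL cannot do.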
7. Run as a Non-Root User
By default, containers run as the root user. This is a security risk. If an attacker gains control of your application, they will have root privileges inside the container, which could be used to escalate their attack.
Always create a dedicated, unprivileged user to run your application.
Here is how you do it in your Dockerfile.
# Dockerfile
FROM node:20-slim
# ... (install dependencies, copy code)
# Create a dedicated user and group
RUN addgroup --system --gid 1001 myapp && \
    adduser --system --uid 1001 --ingroup myapp myapp
# Switch to the new user
USER myapp
# Set ownership of app files
WORKDIR /app
COPY --chown=myapp:myapp . .
EXPOSE 3000
CMD ["node", "src/index.js"]
Running as a non-root user is a critical security best practice that follows the principle of least privilege.
8. Be Specific with Image Tags
Avoid using the latest tag for your base images in production Dockerfiles. The latest tag is mutable and can point to different versions of an image over time. This can cause unexpected breaking changes in your builds.
Be as specific as possible with your tags.
- Bad: node or node:latest
- Good: node:20 (pins to a major version)
- Better: node:20.10.0 (pins to a specific patch version)
- Best: node:20.10.0-slim (pins to a specific version and variant)
Using specific tags ensures your builds are reproducible and predictable.
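For builds that are reproducible down to the byte, you can go one step further and pin the base image by digest. The sha256 value below is a placeholder; resolve the real one with docker buildx imagetools inspect or docker images --digests:

```dockerfile
# A digest is immutable: it keeps working even if the tag is re-pushed
# (the sha256 value here is a placeholder for your registry's digest)
FROM node:20.10.0-slim@sha256:<digest-from-your-registry>
```

The trade-off is that digest pins never pick up patched rebuilds of the tag, so you need tooling (or a scheduled job) to refresh them.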
9. Minimize the Build Context
When you run docker build, the directory you specify is the build context. The Docker CLI sends this entire directory (minus anything in .dockerignore) to the Docker daemon.
If your Dockerfile is in the root of a large repository, the build context can be enormous, slowing down the docker build command significantly.
To minimize the build context, create a dedicated directory for your application that contains only the Dockerfile and the necessary source code. If that's not possible, be extra diligent with your .dockerignore file.
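As a rough illustration, the build context is close to what tar would package from the directory. The sketch below uses a hypothetical demo/ layout to show how much excluding a bulky dependency directory saves:

```shell
# Simulate a small project with a bulky dependency directory
mkdir -p demo/src demo/node_modules
head -c 1048576 /dev/zero > demo/node_modules/big.bin   # ~1 MB of junk
echo 'console.log("hi")' > demo/src/index.js

# The build context is roughly what tar would package from the directory
tar -cf - demo | wc -c                                  # includes the junk
tar -cf - --exclude='demo/node_modules' demo | wc -c    # what an ignore rule saves
```

On a real repository the difference is often hundreds of megabytes, which is time spent on every single build.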
10. Implement a HEALTHCHECK
A HEALTHCHECK instruction tells Docker how to test a container to check that it is still working. Orchestrators such as Docker Swarm use this status to restart unhealthy containers automatically. Note that Kubernetes ignores the Docker HEALTHCHECK and relies on its own liveness and readiness probes instead.
A good health check goes beyond "is the process running?" and confirms that the application is actually responsive.
Here is an example for a web server.
# Dockerfile
FROM nginx:alpine
# ... (copy your website files)
# Healthcheck instruction (alpine-based images ship busybox wget, not curl)
HEALTHCHECK --interval=30s --timeout=3s \
    CMD wget -q --spider http://localhost/ || exit 1
EXPOSE 80
This instruction tells Docker to run the wget check inside the container every 30 seconds. If the check fails or times out, the container is marked as unhealthy.
How Miget Simplifies Containerization
Optimizing Dockerfiles is a powerful skill, but it requires continuous maintenance. For teams that want production-ready containers without managing Dockerfiles, Miget offers a streamlined solution.
Our open-source migetpacks use Cloud Native Buildpacks to automatically transform your application source code into a secure, efficient container image. With support for 14 languages, migetpacks handle all the best practices for you:
- Automatic multi-stage builds create minimal, production-optimized images.
- Intelligent caching ensures fast, repeatable builds.
- Security hardening is built-in, with regular patching of base images.
- Zero Dockerfile configuration required. Just push your code.
On Miget, you can deploy your application from a Git repository, and we handle the containerization for you, letting you focus on your code instead of your build pipeline.
Next Steps
You now have 10 actionable strategies to optimize your Dockerfile for production. By focusing on multi-stage builds, minimal base images, and secure practices, you can build containers that are lean, fast, and robust.
- Read the official Dockerfile best practices guide.
- Explore Miget's documentation to learn about deploying without Dockerfiles.
- Join our Discord community to chat with other developers.
What to read next
- Dockerfile vs Buildpacks: Which Should You Choose? - Decide whether you need a Dockerfile at all
- One-Click Docker Hosting - Deploy your optimized Dockerfile to Miget in minutes
- Deploy Any Language to Docker Without Writing a Dockerfile - Skip the Dockerfile entirely with migetpacks