Docker Done Right: From Laptop To Production
We’ll keep it practical, predictable, and mildly entertaining.
Why We Still Care About Docker In 2026
We’ve all heard some variation of “containers solved it,” usually right before something breaks in a way only a container can. Still, Docker remains a workhorse because it gives us a repeatable way to package an app plus its runtime and ship that same artifact across dev laptops, CI, and production. The real win isn’t novelty; it’s fewer “works on my machine” moments and a smaller blast radius when we change things.
What we’ve learned managing fleets of services is that Docker is easiest when we treat images as immutable releases, not as tiny snowflake servers. If we’re SSHing into containers, installing packages by hand, and wondering why deployments drift, we’re basically recreating the worst parts of VMs—just faster.
Docker also forces (in a good way) conversations we used to postpone: What’s the app’s runtime contract? Which ports are actually needed? What files are required at runtime? How do we inject config safely? These questions show up early when you write a Dockerfile, and that’s a feature.
And yes, there are alternatives and higher-level tools. Kubernetes might run the show, and serverless may grab the headlines, but Docker remains the common packaging unit and the fastest path from repo to running process. If we get the fundamentals right—small images, good defaults, safe config—everything above it becomes less dramatic. And less drama means more sleep. We like sleep.
Writing Dockerfiles That Don’t Age Badly
A Dockerfile is a contract with our future selves. If we make it noisy, slow, or vague today, we’ll pay interest later—usually at 2 a.m. during an “urgent” rebuild. Our goal: predictable builds, small images, minimal surprises.
We start by picking a sensible base image. Use official images when possible, and pin versions. “latest” is a fun game until it isn’t. If we’re running a Node app, pick a Node LTS image; if it’s Python, pick a slim Python. For extra paranoia and reproducibility, pin to a digest, but a major/minor version pin is already a big improvement.
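The two pinning levels look like this in a Dockerfile (the digest is a placeholder for illustration, not a real one):

```dockerfile
# Minor-version pin: tracks patch releases, already much safer than :latest.
FROM node:20-alpine

# Digest pin: byte-for-byte reproducible; you update the digest deliberately.
# (Placeholder digest shown — substitute the real one from your registry.)
# FROM node:20-alpine@sha256:<digest>
```

The digest form trades convenience for certainty: nothing changes under you until you change the line yourself.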
Then we structure layers for caching: copy dependency files first, install, then copy the app. That way, changing one source file doesn’t re-install the world. We also avoid installing build tools in the final runtime image by using multi-stage builds. That’s the simplest way to keep images smaller and reduce the number of packages we’re responsible for patching.
A clean Dockerfile also does boring-but-important hygiene: set a working directory, run as a non-root user where practical, and add a healthcheck if the platform won’t do it for us. We keep entrypoints simple, prefer exec form, and avoid shell tricks that swallow signals—because graceful shutdown matters.
If we need deeper best practices, Docker’s own documentation is solid and straightforward: Dockerfile best practices. We’re not aiming for perfection—just “future-us won’t swear at past-us.”
A Practical Multi-Stage Dockerfile (With Notes)
Let’s make this concrete with a small Node example. The pattern applies elsewhere, but Node’s common enough that most teams have tripped over it at least once.
# syntax=docker/dockerfile:1
FROM node:20-alpine AS deps
WORKDIR /app
# Install deps using lockfile for repeatability
COPY package.json package-lock.json ./
RUN npm ci
FROM node:20-alpine AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:20-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production
# Create a non-root user
RUN addgroup -S app && adduser -S app -G app
# Only copy what we need at runtime
COPY --from=build /app/dist ./dist
COPY --from=build /app/package.json ./package.json
# If you need runtime deps (not bundled), copy node_modules too.
# Note: the deps stage ran a plain `npm ci`, which includes devDependencies;
# consider a separate `npm ci --omit=dev` stage to keep the runtime image lean.
COPY --from=deps /app/node_modules ./node_modules
USER app
EXPOSE 3000
CMD ["node", "dist/server.js"]
What’s happening here: we separate dependency install, build, and runtime. That keeps the runtime stage focused and makes rebuilds faster thanks to caching. We use npm ci to respect the lockfile and avoid “it updated a minor dependency and now everything’s on fire.” Alpine is fine for many apps; if we depend on native modules, we test carefully because musl vs glibc can bite.
If you’re building for multiple architectures (hello, M1 laptops and x86 servers), look into Buildx: Docker Buildx. Multi-arch builds are no longer exotic; they’re a Tuesday.
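A typical multi-arch build-and-push looks roughly like this (the image name is illustrative; assumes Docker with Buildx available):

```shell
# One-time: create and select a builder that can target multiple platforms.
docker buildx create --name multi --use

# Build for both architectures and push the multi-arch manifest in one step.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:1.8.2 \
  --push \
  .
```

The `--push` matters: a multi-arch manifest can’t be fully loaded into the local image store, so pushing straight to the registry is the usual workflow.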
The key mindset: every line in the Dockerfile should earn its place. If we can’t explain why it’s there, remove it. Minimalism isn’t aesthetic—it’s operational.
Compose For Local Dev Without The Ritual Summoning
Docker Compose is where we keep local dev sane. It’s not “production in a box,” and it shouldn’t try to be. It’s a convenient way to run an app plus its dependencies—databases, caches, queues—without asking every engineer to become a part-time installer.
The trick is to keep Compose files readable and explicit. We name services clearly, map ports sparingly, and persist only what must persist. We also avoid baking secrets into the file; local env files are fine, but anything sensitive should be handled with care.
Here’s a Compose example for a web app plus Postgres, with a healthcheck and a named volume:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 5s
      timeout: 3s
      retries: 10
volumes:
  pgdata:
A couple of practical notes: depends_on doesn’t magically make apps resilient—it just helps with startup ordering. Your app still needs retries. Also, keep the port mappings only for what developers need. If a service is internal, don’t expose it to the host by default.
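Since depends_on only orders startup, the app needs its own retry loop around the initial connection. A generic sketch (the `connect` function, attempt count, and delay are illustrative):

```javascript
// Retry an async connect function until it succeeds or attempts run out.
// `connect` is any function that throws while the dependency is unavailable.
async function withRetry(connect, { attempts = 10, delayMs = 500 } = {}) {
  let lastErr;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await connect();
    } catch (err) {
      lastErr = err;
      // Wait before the next attempt, except after the final failure.
      if (i < attempts) await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  throw lastErr;
}
```

In the web service above, you’d wrap the first database connection in something like this so a slow Postgres start doesn’t crash the app before the healthcheck even matters.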
Compose is documented well and worth a skim when you hit edge cases: Docker Compose overview. Our rule of thumb: if onboarding needs a wiki page longer than the Compose file, we’ve overcomplicated it.
Networking, Volumes, And The Usual Foot-Guns
Most Docker “mysteries” come down to networking, storage, or assumptions about what’s inside the container. Let’s save ourselves some debugging hours.
Networking first: containers in the same Compose project share a network and can reach each other by service name. That means db:5432, not localhost:5432. “Localhost” inside a container points back to the container itself, which is a lonely place to find a database. When we publish ports (3000:3000), that’s for host-to-container access, not container-to-container.
Volumes next: volumes are persistence. Bind mounts are convenience. For dev, bind mounts let us edit code on the host and see changes inside the container. For data stores, named volumes are usually safer than bind mounts, because we’re less likely to accidentally trash permissions or mix incompatible versions.
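For example, a dev-only override file can bind-mount source into the web service from the earlier Compose file (paths are illustrative):

```yaml
# docker-compose.override.yml — picked up automatically by `docker compose up`.
services:
  web:
    volumes:
      # Bind mount: edit on the host, see changes inside the container.
      - ./src:/app/src
```

Keeping this in the override file means the base Compose file stays honest about what production-like runs actually need.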
Where teams get burned is treating a container filesystem like it’s durable. It isn’t. If the container goes away, anything not in a volume goes with it. That’s good for stateless apps; it’s catastrophic for databases.
Finally, DNS and proxy assumptions. If your org uses a corporate proxy or custom CA certs, builds can fail in confusing ways. We should standardize how we pass proxy settings into builds and make sure base images trust the right CAs. The least fun problem is “it works at home but not in CI” because the network behaves differently.
When in doubt, go back to basics: inspect networks, exec into containers briefly, and check routes. Docker’s networking docs are actually readable: Docker networking.
Shipping Images: Tags, Registries, And CI That Doesn’t Lie
The moment we build in CI, Docker becomes part of our release process, not just tooling. That means we need consistent tagging, traceability, and a registry strategy that doesn’t turn into archaeology.
We prefer immutable tags for releases: commit SHA, build ID, or semver. “latest” can exist, but it’s a pointer, not a release. In practice, we’ll publish myapp:1.8.2 and myapp:sha-abcdef1, and maybe also move myapp:stable once we’ve promoted it. The important part is being able to answer: “What code is running right now?” without guesswork.
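In shell terms, the build happens once and every tag is just another pointer to the same image ID (registry and names are illustrative):

```shell
REGISTRY=registry.example.com   # illustrative registry
GIT_SHA=abcdef1                 # short commit SHA, supplied by CI

# Build once, tagged with the immutable SHA reference.
docker build -t "$REGISTRY/myapp:sha-$GIT_SHA" .

# Additional tags point at the same image ID — no rebuild.
docker tag "$REGISTRY/myapp:sha-$GIT_SHA" "$REGISTRY/myapp:1.8.2"

docker push "$REGISTRY/myapp:sha-$GIT_SHA"
docker push "$REGISTRY/myapp:1.8.2"
```

Promotion to `stable` later is one more `docker tag` plus `docker push` against the already-tested SHA tag, never a rebuild.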
Pick a registry that matches your ecosystem: Docker Hub, GHCR, ECR, GCR, etc. Each has quirks, but the core workflow is the same: login, build, push. Enable retention policies so you don’t pay to store every experiment since 2021.
In CI, build once and promote the same image through environments. If we rebuild separately for staging and prod, we’re rolling the dice on supply chain and dependency drift. Even if the Dockerfile is the same, the base image or dependency resolution might not be.
Also: scan images, but don’t turn scans into theatre. Use them to drive patching and base-image updates. If you’re using Docker Scout, it integrates nicely with the ecosystem: Docker Scout.
Our “CI that doesn’t lie” mantra: artifact immutability, clear tags, and logs that show exactly what was built and from which commit. Boring, and therefore excellent.
Security Basics We Don’t Get To Skip
Docker security doesn’t have to be scary, but it does have to be intentional. The default container experience is convenient, not hardened. We can do better with a handful of habits.
First: run as non-root whenever possible. Root inside the container isn’t automatically root on the host, but it increases impact when something breaks out, and it can make lateral movement easier. Creating a dedicated user in the Dockerfile is a small effort for a meaningful reduction in risk.
Second: reduce what’s in the image. Smaller images have fewer vulnerabilities and fewer patches to chase. Multi-stage builds help here, and so does choosing slim base images when compatible. Also, don’t install tools you don’t need at runtime (curl, compilers, package managers). If we need them for debugging, we can use a separate debug image.
Third: treat secrets properly. Don’t bake them into images. Don’t pass them as build args where they can leak into layers. Use runtime injection via orchestrator secrets, environment variables (with care), or mounted secret files. If you’re using BuildKit secrets for build-time access, do it deliberately and confirm nothing lands in the final layers.
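As a sketch, a BuildKit secret mount keeps a token out of image layers (the secret id and how the token is used are illustrative assumptions):

```dockerfile
# syntax=docker/dockerfile:1
# The secret file is mounted only for the duration of this RUN;
# it never lands in a layer or in the build cache output.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

The secret is supplied at build time, e.g. `docker build --secret id=npm_token,src=./npm_token.txt .`, and a quick `docker history` on the result should confirm nothing sensitive is visible.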
Fourth: keep an eye on Linux capabilities and filesystem permissions. Most apps don’t need extra privileges. Avoid --privileged like it’s a suspicious gas station sandwich. If a container needs elevated permissions, we document why.
Docker’s security guidance is a good baseline: Docker security. We don’t need to turn every service into Fort Knox, but we do need to stop leaving the front door open.
Operational Habits: Logs, Healthchecks, And Upgrades
Once dockerized services hit production, our success depends less on container magic and more on operational habits. Containers don’t replace observability, incident response, or upgrade discipline—they just give us a cleaner unit to manage.
Start with logging: write logs to stdout/stderr and let the platform collect them. Don’t log to files inside the container unless you have a clear reason and a volume strategy. Structured logs help, but even plain text is fine if it’s consistent and includes request IDs or correlation fields where possible.
Healthchecks: use them thoughtfully. A healthcheck should reflect “can this instance serve traffic,” not “is the process running.” For web apps, hit a lightweight endpoint that checks dependencies in a sensible way. Don’t make healthchecks so heavy they DDoS your own service.
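In a Dockerfile it might look like this (the /healthz endpoint is an assumption about your app; busybox wget ships with alpine, so no extra packages are needed):

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD wget -qO- http://127.0.0.1:3000/healthz || exit 1
```

The `--start-period` gives the app a grace window at boot so slow startup doesn’t count as failure.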
Graceful shutdown matters more than people think. Containers get stopped. Deployments roll. Nodes die. If the app ignores SIGTERM or takes too long to drain, we get dropped requests and messy retries. Make sure the main process receives signals (use exec form CMD/ENTRYPOINT) and implement shutdown hooks.
Finally, upgrades: base images and dependencies must be refreshed on a schedule. We set a cadence (monthly is common), rebuild, scan, and roll forward. Waiting until a CVE forces an emergency is how “quick patching” turns into “weekend plans cancelled.”
Docker gives us repeatability, not immunity. If we pair it with good operational hygiene—simple builds, clear configs, predictable releases—it stays the helpful tool it was meant to be.