Docker Done Right: Practical Habits We Actually Use

Less magic, more repeatable builds, calmer deploys.

Build Images We Can Rebuild Tomorrow

We’ve all been there: an image “worked yesterday” and today it’s a haunted artefact. Most of the pain comes from builds that aren’t reproducible—base images drift, package mirrors change, and “latest” means “surprise me”. Our first habit is boring on purpose: we pin what we can, document what we can’t, and make rebuilds predictable.

Start with the base image. If we can, we pin to a specific tag (or even a digest) rather than latest. Tags can move; digests don’t. Next, we keep our Dockerfiles small and readable. A 200-line Dockerfile is basically a short novel where the plot doesn’t make sense on page 3.
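For illustration, pinning by digest looks like this (the digest below is a placeholder, not a real one; look up the actual value for your base image):

```dockerfile
# Pin by tag for readability, by digest for immutability.
# The digest is a placeholder — find the real one with:
#   docker buildx imagetools inspect node:20-bookworm
FROM node:20-bookworm@sha256:<digest-goes-here>
```

A digest-pinned FROM will keep resolving to the exact same image even if the tag is later repointed.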

We also treat the build context like luggage: pack light. A huge context makes builds slow and cache-unfriendly. A .dockerignore is often the cheapest performance win in the room.
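A minimal .dockerignore for a typical Node project (the entries are illustrative; match them to your repo) might be:

```
.git
node_modules
dist
*.log
.env
```

Everything listed stays out of the build context, which keeps `COPY . .` fast and stops secrets like .env from sneaking into layers.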

Caching matters too. We structure layers so stable steps come first (install OS packages, restore dependencies) and frequently changing steps come last (copy app source). That way, Docker’s cache helps us rather than taunts us.

Finally, we don’t let images become junk drawers. We pick a convention for labels, versions, and ownership so we can answer: “Who built this? From what commit? When?” It’s not glamorous, but it’s how we avoid the 2 a.m. archaeology expedition.
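One way to bake answers to those questions into the image is OCI annotation labels fed from build args (the label keys are from the OCI image spec; the arg names are our own convention):

```dockerfile
ARG GIT_COMMIT=unknown
ARG BUILD_DATE=unknown

# Standard OCI annotation keys, so tooling can find them later.
LABEL org.opencontainers.image.revision="${GIT_COMMIT}" \
      org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.authors="platform-team"
```

At build time, pass the values in: `docker build --build-arg GIT_COMMIT=$(git rev-parse HEAD) ...` Then `docker inspect` answers the 2 a.m. questions for you.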

If you want a solid baseline for Dockerfile guidance, Docker’s own docs are worth bookmarking: Dockerfile reference.

Keep Dockerfiles Boring (In A Good Way)

A “clever” Dockerfile is rarely a gift to Future Us. Our goal is to make intent obvious: what we’re installing, why it’s there, and what the container runs. We like multi-stage builds because they keep runtime images smaller and reduce the number of random build tools shipped to production. It’s like packing for a trip and leaving the power drill at home.

Here’s a simple pattern we use for a Node app. The point isn’t Node specifically—the point is separating build-time from run-time and keeping layers tidy:

# syntax=docker/dockerfile:1

FROM node:20-bookworm AS build
WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build

FROM node:20-bookworm-slim AS runtime
ENV NODE_ENV=production
WORKDIR /app

COPY --from=build /app/dist ./dist
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]

A few habits are hiding in there. We copy dependency manifests first to maximize cache hits. We avoid running as root in the runtime stage. We keep the runtime image slim, and we don’t ship build caches unless we need them.

We also resist the temptation to pipe curl output straight into sh. If we genuinely need a fetched script, we pin checksums and document why. Less “works on my laptop,” more “works after the laptop is gone.”
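When we do have to fetch a script, the pattern is roughly: download, verify, then run. A sketch (the URL and checksum are placeholders):

```dockerfile
# Fetch, verify against a pinned checksum, run, and clean up.
ADD https://example.com/install.sh /tmp/install.sh
RUN echo "<expected-sha256>  /tmp/install.sh" | sha256sum -c - \
    && sh /tmp/install.sh \
    && rm /tmp/install.sh
```

If upstream silently changes the script, `sha256sum -c` fails the build instead of shipping a surprise.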

For deeper patterns, multi-stage builds are well documented here: Use multi-stage builds.

Use docker compose For Local Environments That Don’t Bite

Local setups should feel like a quick coffee, not a long-term relationship with unsolved issues. We lean on docker compose to make local environments consistent across the team—same ports, same dependent services, same environment variables. It reduces the “what version of Postgres are you running?” chats (which are never as fun as they sound).

We like to model local dev around a few principles: keep state in named volumes, put secrets in an .env file (not the Compose file), and explicitly define dependencies. Also: don’t over-model production. Local is allowed to be simpler—as long as it’s predictable.

A minimal Compose setup might look like this:

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
    volumes:
      # Bind mount for live code; note it shadows whatever the image
      # built into /app (e.g. node_modules), so either install deps
      # on the host or add a separate volume for /app/node_modules.
      - ./:/app:delegated

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

We’ve learned to be explicit with ports so people don’t play “guess the ephemeral mapping”. We also like volumes for databases so data survives container restarts. For app code, bind mounts are fine in dev, but we keep an eye on performance (especially on macOS/Windows).

Compose has matured nicely; the official docs are clear and current: Compose file reference.

Treat Networking Like A First-Class Citizen

Containers talking to each other is easy—until it isn’t. Most of our Docker networking “mysteries” come down to two things: misunderstanding DNS inside Compose networks and confusing container ports with host ports.

Within a Compose project, service names become DNS names. That means db:5432 works from app because they’re on the same default network. But your host machine won’t resolve db, so tooling on your laptop must use localhost:5432 (if you published it). That’s not Docker being tricky; it’s just two different network contexts.

We also try to avoid publishing ports unnecessarily. Publishing is for “I need this from the host,” not “I might need it someday.” For internal services, we keep them un-published and rely on the internal network. It reduces the surface area and avoids port conflicts across projects.
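In Compose terms, “internal-only” just means omitting the ports: section on services that only other containers need. A sketch with a hypothetical cache service:

```yaml
services:
  cache:
    image: redis:7
    # No ports: section — reachable as cache:6379 from other services
    # on the same Compose network, invisible from the host.
```

The app container still connects to cache:6379; your laptop simply can’t, which is usually the point.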

When things go sideways, we use a tiny toolbox:
– docker ps to confirm what’s running and what’s published.
– docker inspect <container> to see networks and IPs (rarely needed, but helpful).
– docker logs <container> for obvious failures.
– A temporary debug container in the same network to test DNS and connectivity.
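The debug-container trick can look like this (assuming a Compose project named myproj, whose default network Compose names myproj_default; any image with nslookup and nc works):

```shell
# nicolaka/netshoot bundles common network tools.
# Requires a running Docker daemon and the myproj_default network.
docker run --rm -it --network myproj_default nicolaka/netshoot \
  sh -c 'nslookup db && nc -zv db 5432'
```

If nslookup resolves db but nc can’t connect, the problem is the service, not the network.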

If you want a practical refresher, Docker’s networking overview is a good read: Docker networking.

We don’t try to memorize every networking mode. We just aim for clear defaults: bridge networks for local, explicit service names, and minimal published ports.

Make Containers Safer Without Becoming Miserable

Security doesn’t have to be a ceremony. For Docker, a handful of choices give us most of the benefit without turning every PR into a philosophical debate.

First: don’t run as root unless there’s a strong reason. Many base images have a non-root user baked in (like node). If not, we create one. Second: keep images small. Smaller images tend to have fewer packages, which tends to mean fewer things to patch. It’s not perfect, but it’s sensible.

Third: be intentional about secrets. We don’t bake secrets into images. For local dev, .env files are fine (kept out of Git). For real environments, we use a secret manager or platform-native mechanisms. Docker has a concept of secrets in Swarm, and most orchestrators have their own approaches. The important bit is: don’t put passwords in Dockerfiles and then wonder why they show up in image history.
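For local dev, the Compose side of “.env files, not the Compose file” is just env_file (a sketch; the filename is conventional):

```yaml
services:
  app:
    build: .
    env_file:
      - .env   # git-ignored; holds DATABASE_URL, API keys, etc.
```

The Compose file stays committable, and the secrets live in a file that never leaves the machine.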

We also scan images as part of CI. Not because scanning is magical, but because it’s a cheap early warning. Even if you don’t fix everything immediately, you’ll at least know what’s in there. The CIS Docker Benchmark is another good “are we doing the basics?” checklist.

And we keep build tools out of runtime images via multi-stage builds. If gcc isn’t in production, it can’t be abused in production. That’s not paranoia; that’s just tidying up.

Logs, Health Checks, And The Stuff That Saves Weekends

If we’re honest, most Docker problems aren’t “Docker problems.” They’re “our app crashed and didn’t say why” problems. Containers just make that failure faster and more portable. So we focus on observability basics: logs to stdout/stderr, clear health checks, and predictable startup behaviour.

For logging, we don’t write to files inside containers unless we have a reason. File logs inside a container turn into a game of “where did the disk go?” Standard streams are easier: Docker can collect them, Compose can show them, and your platform can ship them.

Health checks are another quiet hero. A container being “up” doesn’t mean it’s usable. A health check gives us a way to gate dependencies and spot crash loops. We keep checks simple: call a /health endpoint, or verify a local TCP port is listening.

A basic example (note that wget must exist in the image; slim base images often omit it, in which case a small node-based check works instead):

HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

In Compose, we sometimes pair health checks with depends_on and condition: service_healthy to avoid the “app tries DB once, gives up forever” classic.
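Wired together in Compose, that pairing looks roughly like this (intervals are illustrative, and the service names echo the earlier Compose example):

```yaml
services:
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    healthcheck:
      # pg_isready ships in the postgres image.
      test: ["CMD-SHELL", "pg_isready -U app"]
      interval: 5s
      timeout: 3s
      retries: 5
```

With this in place, Compose won’t start app until Postgres is actually accepting connections, not merely “up.”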

When debugging, we rely on docker logs -f, and we keep the app’s startup logs informative: config loaded, DB connected, migrations applied (or not). It’s amazing how often the fix is visible in the first 20 lines—if we bothered to print them.

Clean Up, Name Things, And Keep The House Livable

Docker is tidy right up until it isn’t. Then we’ve got 40 dangling images, five networks called “default,” and a disk that quietly hit 0 bytes free during a build. Our final habit is housekeeping—because disk space is a finite resource and containers are enthusiastic.

We start by naming things. Tags include the app name and version. Containers started manually get --name if we expect to touch them again. Compose projects get consistent folder names so network/volume names don’t become a guessing game.

Next: prune intentionally. We don’t run docker system prune -a on a shared machine without thinking (it’s the digital equivalent of “who moved my cheese?”). But on CI runners and dev laptops, periodic cleanup is healthy. We also teach folks the difference between removing stopped containers, unused images, and volumes. Volumes are where the “oops” lives.

We also keep an eye on build cache growth, especially with BuildKit. It’s great for speed, but it can accumulate. The goal isn’t zero cache; it’s controlled cache.

And we try to make cleanup discoverable: a make clean-docker target or a short script in the repo. When cleanup is one command, people actually do it.
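A make target along these lines keeps it to one command (a sketch; adjust the prune flags to your appetite for deletion):

```makefile
.PHONY: clean-docker
clean-docker:
	docker container prune -f
	docker image prune -f
	docker builder prune -f --keep-storage 10GB
```

The --keep-storage cap on builder prune is the “controlled cache” part: trim the build cache without nuking it.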

When in doubt, docker system df gives a quick “what’s eating my disk?” overview, and from there we can prune with precision.
