Docker Done Right: Practical Habits We’ll Actually Keep

Less magic, fewer surprises, and containers that behave on Mondays.

Why We Use Docker (And What It’s Not)

We don’t use Docker because it’s trendy or because we enjoy learning yet another flag. We use it because it gives us a repeatable runtime: the same app, the same dependencies, the same setup—whether it’s a laptop, a CI runner, or a production node at 3 a.m. That repeatability is the real payoff.

But Docker isn’t a silver bullet. It doesn’t automatically make apps secure, fast, or scalable. It doesn’t fix poor dependency hygiene, and it definitely doesn’t replace good operational practices. If we containerize a messy build, we’ve simply made a portable mess.

A useful mental model: containers are processes with seatbelts. They’re still the same Linux processes (or Windows processes), with namespaces and cgroups keeping them in their lane. That’s why “it works in Docker” should still mean “it works with the same environment variables, network assumptions, file permissions, and resource limits we’ll use later.”
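
A quick way to see that model for ourselves; the container name, image, and memory limit below are just examples, and the cgroup path assumes a Linux host running cgroup v2:

# Start a limited container, then look at it from the host
docker run -d --name seatbelt-demo --memory 256m nginx:alpine

# On the host, the container's nginx is just another process in the process table
ps -ef | grep nginx

# Inside the container, the memory limit shows up as an ordinary cgroup value
docker exec seatbelt-demo cat /sys/fs/cgroup/memory.max

# Clean up
docker rm -f seatbelt-demo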

We also need to be clear about the layers we’re dealing with:
Image: the packaged filesystem + metadata.
Container: a running instance of that image.
Registry: where images live (public or private).
Engine: the daemon doing the heavy lifting locally or on a host.

If you want the formal details, Docker’s own docs are solid and readable: Docker overview. We’ll keep the rest of this post grounded in habits that make Docker feel boring—in the best possible way.

Dockerfiles We Don’t Regret Later

Most Docker pain starts with the Dockerfile. Not because Dockerfiles are hard, but because we treat them like an afterthought. Our goal is simple: build images that are small, predictable, and cache-friendly.

A few rules we try to follow:
Pin base images (at least to major/minor). node:20-alpine is better than node:latest. Even better: pin by digest when you can.
Copy lockfiles early to maximize layer caching.
Use multi-stage builds so build tools don’t ship to production.
Run as a non-root user unless we have a very good reason.
Don’t bake secrets into images. Ever.
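
For build-time secrets specifically (a private registry token, say), BuildKit secret mounts keep the value out of every layer. A minimal sketch, assuming a .npmrc that reads the token from NPM_TOKEN; the secret id and source file are ours to pick:

# Requires BuildKit (the "# syntax=docker/dockerfile:1" header in the example below)
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci

The secret only exists for that one RUN step, supplied at build time with something like docker build --secret id=npm_token,src=./npm_token.txt .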

Here’s a Node example we’ve used (trimmed, but real-world shaped):

# syntax=docker/dockerfile:1

FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM node:20-alpine AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:20-alpine AS runtime
WORKDIR /app

# Create an unprivileged user
RUN addgroup -S app && adduser -S app -G app

ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
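# Note: node_modules from the deps stage still includes dev dependencies.
# If runtime size matters, add a stage that runs npm ci --omit=dev and copy from that instead.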
COPY --from=deps /app/node_modules ./node_modules
COPY package.json ./

USER app
EXPOSE 3000
CMD ["node", "dist/server.js"]

This gives us fast rebuilds, smaller runtime images, and fewer “why is gcc installed in prod?” moments. For more on Dockerfile instructions and best practices, Docker’s reference is handy: Dockerfile reference.

Compose: Our Default Local Environment Glue

For local development, docker compose is our go-to because it makes multi-service setups less annoying. One command to start everything, one file to review, and fewer bespoke scripts.

A Compose file should be readable and predictable. We try to keep it focused on:
– declaring services and their dependencies,
– wiring networks and ports,
– mounting code for dev,
– and setting environment variables (without committing secrets).

Here’s a practical Compose setup for an app + Postgres, with healthchecks and sane defaults:

services:
  app:
    build:
      context: .
      target: runtime
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 5s
      timeout: 3s
      retries: 10

volumes:
  pgdata:

A couple of small-but-mighty habits:
– Use named volumes for stateful data so restarts don’t wipe your world.
– Prefer container-to-container traffic over localhost assumptions.
– Add healthchecks so “depends_on” means something useful.

Docker’s Compose docs are straightforward and worth bookmarking: Compose overview.

Image Size, Build Speed, and Caching Without Tears

We’ve all watched a CI job download half the internet because one line in a Dockerfile invalidated the cache. The trick is to design builds that change in small increments.

What helps most:
1. Order Dockerfile layers from least-changing to most-changing.
2. Keep build contexts small with a .dockerignore.
3. Use BuildKit features when available.
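
On the BuildKit point, cache mounts are the feature we reach for most often. A rough sketch for the dependency install step; the target path is npm’s default cache location when running as root:

# The npm download cache survives across builds, so a lockfile change
# re-resolves packages but rarely re-downloads all of them.
RUN --mount=type=cache,target=/root/.npm \
    npm ci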

A .dockerignore that we’ve seen pay for itself:

node_modules
dist
.git
.gitignore
Dockerfile
docker-compose.yml
README.md

npm-debug.log
.env
coverage

Yes, excluding Dockerfile looks weird at first—include it if your build needs it in context (often it doesn’t). The main aim is to stop sending junk to the daemon.

If we want faster builds, we also check:
– Are we downloading dependencies every time?
– Are we compiling things in the final image?
– Are we copying the entire repo before running dependency installs?

For deeper image analysis, we like docker image history and docker build --progress=plain. When we need to get serious, we’ll pull in a scanner and SBOM tooling. Docker Scout is one option (especially if you’re already in that ecosystem): Docker Scout.
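
The two commands in practice look something like this; the image tags are placeholders:

# Layer-by-layer sizes plus the instruction that created each layer
docker image history myapp:latest

# Unfolded build output, handy for spotting cache misses and slow steps
docker build --progress=plain -t myapp:dev .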

The broader point: if we treat image builds like production code—reviewed, repeatable, and measured—Docker stops being slow “sometimes” and starts being boringly fast.

Security Basics: Non-Root, Secrets, and Fewer Surprises

Let’s be honest: most Docker “security incidents” aren’t exotic kernel attacks. They’re basics we skipped because we were in a hurry.

Our baseline checklist:
Run as non-root in the runtime stage.
Use minimal base images where sensible (but don’t make debugging impossible).
Don’t ship build tools in production images.
Scan images as part of CI.
Handle secrets properly (environment variables at runtime, secret stores, or orchestrator secrets).
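
For local development, Compose file-based secrets are a reasonable middle ground. A minimal sketch; the secret name and file path are just examples:

services:
  app:
    build: .
    secrets:
      - db_password   # mounted at /run/secrets/db_password inside the container

secrets:
  db_password:
    file: ./secrets/db_password.txt   # keep this file out of version control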

If we need a quick reference for container security fundamentals, OWASP’s cheat sheet is a solid read: OWASP Docker Security Cheat Sheet.

Also, don’t forget the host. Containers share the kernel. If the host is unpatched or overly permissive, the “container boundary” becomes more of a polite suggestion.

One practical habit: avoid mounting the Docker socket into containers. It’s convenient, but it effectively hands out keys to the kingdom. If we truly need it (CI build containers, for example), we isolate that workload and treat it like a privileged operation—because it is.

Finally, keep permissions predictable. If our app needs to write to /app/uploads, make that explicit and owned by the runtime user. Most “docker file permission” dramas are self-inflicted.
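
In the runtime stage that can be as simple as the following, assuming the unprivileged app user from the Dockerfile earlier:

# Create the writable path explicitly and hand it to the runtime user
RUN mkdir -p /app/uploads && chown app:app /app/uploads
USER app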

Networking and Observability: Make It Obvious

Networking is where Docker can feel like it’s gaslighting us. “It’s running, but I can’t reach it.” Usually it’s one of these:
– The service is listening on 127.0.0.1 inside the container, not 0.0.0.0.
– The port is not published to the host.
– We’re mixing up container DNS names with localhost.
– A firewall or VPN is doing its own thing.

We standardize a few things:
– Services bind to 0.0.0.0 in containers.
– Compose service names become DNS names (db, redis, etc.).
– We keep port mappings in Compose, not hidden in scripts.

For visibility, we want logs and metrics to be boringly accessible:
– Send logs to stdout/stderr (no log files unless there’s a real need).
– Add basic health endpoints (/healthz, /readyz).
– Capture container exit codes and restart reasons.
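
The engine already records most of that; a couple of commands we lean on, with the same placeholder style as below:

# Exit code and OOM status for a stopped (or crashed) container
docker inspect -f '{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' <container>

# Lifecycle events (start, die, restart) as they happen
docker events --filter container=<container>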

And yes, sometimes we exec into containers. We try not to live there, but it’s useful for quick checks:

  • docker exec -it <container> sh
  • netstat -tulpn (or ss -tulpn)
  • env | sort

If we’re on Kubernetes later, these habits transfer cleanly. Docker is not Kubernetes, but good container hygiene is universal.

CI/CD With Docker: Repeatable Builds, Predictable Releases

The moment we put Docker into CI, we’re making a promise: the artifact we test is the artifact we ship. That’s a good promise, but only if we keep our process consistent.

What we aim for:
Build once per commit/tag.
Tag images with both a unique identifier (commit SHA) and a human-friendly tag (semver).
Push to a registry from CI with least-privilege credentials.
Promote the same image across environments (dev → staging → prod).

A typical tagging pattern:
myapp:1.8.3
myapp:1.8.3-<gitsha>
myapp:<gitsha>
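
Wired together in a CI job, that’s roughly the following; GIT_SHA comes from the CI environment, and the registry prefix is illustrative:

# Build once, tag it every way we need, push each tag
docker build -t registry.example.com/myapp:"$GIT_SHA" .
docker tag registry.example.com/myapp:"$GIT_SHA" registry.example.com/myapp:1.8.3
docker tag registry.example.com/myapp:"$GIT_SHA" registry.example.com/myapp:1.8.3-"$GIT_SHA"
docker push registry.example.com/myapp:"$GIT_SHA"
docker push registry.example.com/myapp:1.8.3
docker push registry.example.com/myapp:1.8.3-"$GIT_SHA"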

We also like to keep build metadata visible using labels (owner, commit, repo URL). It makes “what is running?” much easier to answer when someone pings us with a screenshot of a 500 error and no other context. (A classic.)
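
A sketch of what those labels can look like in the Dockerfile; the keys are the standard OCI annotation names, while the ARG wiring and repo URL are just examples:

# Passed in from CI, e.g. --build-arg GIT_SHA=$(git rev-parse --short HEAD)
ARG GIT_SHA=unknown
LABEL org.opencontainers.image.revision=$GIT_SHA \
      org.opencontainers.image.source="https://github.com/example-org/myapp"

Afterwards, docker inspect -f '{{json .Config.Labels}}' <image> answers the question without archaeology.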

One more thing: don’t let CI rebuild dependencies unnecessarily. Cache layers where possible, and keep the dependency install step stable. If we’re using GitHub Actions, GitLab CI, or similar, each has ways to persist caches or reuse layers—worth the setup time if builds happen often.
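
For GitHub Actions, a rough sketch of layer caching with docker/build-push-action and the GitHub cache backend; these steps slot into an existing workflow, and the tag and registry are placeholders:

- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: registry.example.com/myapp:${{ github.sha }}
    cache-from: type=gha
    cache-to: type=gha,mode=max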

When releases are predictable, incidents are rarer. When releases are chaotic, Docker becomes the scapegoat. It’s innocent. Mostly.
