Ship Faster With Docker: 11-Second Builds, Fewer Regrets

It’s time to make containers small, safe, and boringly fast.

The Build That Pays For Itself: Multi-Stage, Caching, 11-Second Builds
We’ve all watched CI logs like a slow movie: packages install, layers shuffle, a fan spins up, and coffee cools. The first step to real speed is multi-stage builds plus BuildKit caching. With multi-stage, we compile in one stage and ship only what we need in the final stage. With BuildKit, we cache dependencies and compile artifacts so rebuilds take seconds, not minutes. If your build is idempotent and your layers are stable, you can get repeatable 11-second rebuilds that feel like cheating.

Here’s a compact example using Go that works nicely on laptops and in CI when paired with registry cache exports:

Dockerfile

# syntax=docker/dockerfile:1.6

FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
# git is only needed in the build stage, e.g. for fetching private modules
RUN apk add --no-cache git
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod go mod download
COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=secret,id=git_ssh \
    GOOS=$TARGETOS GOARCH=$TARGETARCH go build -trimpath -o /out/app ./cmd/app

FROM gcr.io/distroless/static:nonroot
COPY --from=build /out/app /app
USER 65532:65532
ENTRYPOINT ["/app"]

The key is BuildKit mounts for caching and secrets. We keep the cache effective by copying frequently changing source as late in the file as possible, so the earlier, slower steps stay cached. In CI, push and pull cache with --cache-to and --cache-from (registry mode) so teammates don’t rebuild the universe. For syntax and tricks like --mount, the BuildKit documentation is worth a skim. And if you want to squeeze another second out, reorder the Dockerfile so the slowest, least-frequently-changing steps come first. The fewer invalidated layers, the faster your feedback loop.
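
Here’s a minimal sketch of the CI side, assuming a registry-backed cache; the registry path, tag, and cache ref are placeholders for whatever your project uses:

buildx cache example:
# Reuse a shared cache from the registry, then push the refreshed cache back for teammates
docker buildx build \
  --cache-from type=registry,ref=registry.example.com/myorg/api:buildcache \
  --cache-to type=registry,ref=registry.example.com/myorg/api:buildcache,mode=max \
  -t registry.example.com/myorg/api:1.4.7 \
  --push .

Note that the registry cache exporter needs a docker-container (or similar) builder rather than the default docker driver.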

Images On A Diet: Distroless, SBOMs, And Layer Hygiene
Let’s talk shipping weight. Big images slow pulls, waste storage, and expand your attack surface. We get lean by using smaller bases, removing build tools from runtime, and avoiding accidental “junk drawers” like left-behind package caches. Distroless or scratch-based images are fantastic when the app is a single static binary. If you need a shell for debugging, keep it in a separate debug tag rather than lugging it into production.

Layer hygiene matters more than we admit. Combine related RUN steps, but don’t cram everything into one unreadable line; strike a balance. Always use no-cache flags when your package manager supports it and delete temporary files within the same layer so they don’t persist. When you copy from a build stage, copy only the final artifacts, not the entire directory. If your app needs certificates or timezones, install them explicitly in a tiny base layer rather than inheriting them implicitly from a fat parent image.
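
To make the same-layer cleanup concrete, here is a small Debian-based sketch; the packages are just examples of the certificates-and-timezones case mentioned above:

Dockerfile snippet
# Install, then clean the package cache in the same layer so it never lands in the image
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates tzdata && \
    rm -rf /var/lib/apt/lists/*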

We also recommend generating an SBOM during the build and archiving it with the image. BuildKit can emit SBOMs, and many registries now store them as artifacts. That makes it straightforward to answer “Where does this libssl come from?” without spelunking. Docker’s own guide on Dockerfile best practices is surprisingly readable and will save you from innocent mistakes like ADD’ing remote URLs or leaving writable temp directories around. Make size a first-class KPI: when PRs add 200 MB to the image, we should notice and ask why. The healthiest images are a little boring: predictable layers, explicit contents, and no mystery meat.
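
On recent Docker/buildx versions (with a builder that supports attestations), emitting an SBOM is a one-flag change; the image name below is a placeholder:

SBOM example:
# Attach an SBOM attestation to the image and push both to the registry
docker buildx build --sbom=true -t registry.example.com/myorg/api:1.4.7 --push .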

Runtime Safety Without Drama: Users, Capabilities, And Read-Only Filesystems
We don’t need a thousand security knobs to be safer; we need three or four defaults we never forget. First, never run as root if the process doesn’t require it. Many images ship with a non-root user baked in; if not, create one. Second, lock down Linux capabilities. Containers start with a generous set they rarely need. Drop most of them and add back only what’s required. Third, make the root filesystem read-only and mount explicit write locations as tmpfs or bind mounts.

Dockerfile
FROM gcr.io/distroless/static:nonroot

# app binary copied from builder stage
COPY --from=build /out/app /app
USER 65532:65532

# make an explicit writable directory if needed
WORKDIR /home/nonroot
ENTRYPOINT ["/app"]

docker run example:
docker run \
  --read-only \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  -v app-tmp:/home/nonroot \
  myorg/myapp:1.4

That’s usually enough to defang privilege escalations and accidental writes. If you must debug in production (we’ve all been there), do it with a separate debug image that has a shell and extra tools, not by bloating the main one. And if you’re writing a Compose file, keep security options right there so they don’t get “temporarily” omitted in a rush. The OWASP Docker Security Cheat Sheet lays out the essentials in practical language. None of this is thrilling engineering, but it’s the boring discipline that makes paging less likely at 2 a.m. Finally, remember file permissions: COPY creates root-owned files by default, regardless of which USER is active, so use COPY --chown (or chown explicitly in the build stage) to avoid sticky root-owned artifacts the runtime user can’t touch.
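
For Compose, a minimal sketch of those same defaults, with illustrative service and volume names:

docker-compose.yml snippet
services:
  api:
    image: myorg/myapp:1.4
    read_only: true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    tmpfs:
      - /home/nonroot
    security_opt:
      - no-new-privileges:true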

Local Dev That Mirrors Prod: Compose, Overrides, And .env
We don’t want “works on my machine” to mean “only on my machine.” The trick is a Compose file that mirrors production defaults, with a lightweight override for local knobs. Keep environment variables in a .env file for portability, and avoid hardcoding anything that changes per developer. Compose profiles are handy for optional services like local queues or UIs. Healthchecks help services start in the right order without weird sleep hacks.

docker-compose.yml
version: "3.9"
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: myorg/api:dev
    env_file: .env
    ports:
      - "8080:8080"
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 10s
      timeout: 2s
      retries: 5
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: "512M"
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 2s
      retries: 5

The value in this setup is consistency. If the app expects a read-only root filesystem in prod, make it read-only locally too. If prod uses a non-root user, do it here as well. We can still have fast feedback loops by mounting source code and using live reload, but the runtime stance should be the same. When we switch the image tag from dev to release, nothing else should need to change. And please, no mystery environment variables hidden in shell profiles; if the container needs it, it belongs in the Compose config or .env file we commit or manage safely.
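
One way to keep the fast local loop without changing the runtime stance is an override file that Compose merges automatically; the bind-mount path and environment values are illustrative, not prescriptive:

docker-compose.override.yml
services:
  api:
    volumes:
      - ./:/src           # mount source locally for live reload in dev only
    environment:
      LOG_LEVEL: debug

Pair the mount with whatever file watcher your stack uses for live reload; the release image stays untouched.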

Kubernetes Pulls Without Surprises: Tags, Digests, And Rollouts
Tags are friendly, but they lie. latest moves around and so do floating tags like stable. In clusters, we want deterministic pulls. The practical compromise is to use immutable tags for humans and digests for machines. We keep a human-readable tag like 1.4.7, but pin the Deployment to the corresponding image digest so every node pulls the exact bits we tested. That removes the “but it worked in staging” whodunit from our lives.

Here’s what that looks like:

k8s-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myorg/api@sha256:1b2c3d…deadbeef
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true

You can still annotate the manifest with the friendly tag for clarity, but the digest does the pinning. Also, use sensible imagePullPolicy values and understand what your cluster will do with cached images. The Kubernetes docs on container images and pull policies are the definitive source, and they explain why Always often isn’t the magic fix we think it is. When rollouts happen, deploy using progressive strategies (like canaries or 10% steps) and watch for readiness. When the digest stays constant across dev, staging, and prod, you’ll sleep better and spend less time hunting phantom changes.
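
To pin by digest you first need the digest of the tag you actually tested; one way to look it up (the image name is a placeholder):

digest lookup example:
# Prints the manifest digest for the tag; paste it into the Deployment's image field
docker buildx imagetools inspect myorg/api:1.4.7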

Observability You’ll Actually Read: Healthchecks, Probes, And Exit Codes
We can’t fix what we can’t see. Containers make it easy to mask failing processes behind a running PID 1 that never dies. Let’s give the platform honest signals. In Docker, HEALTHCHECK turns your image from a hopeful shrug into a verified heartbeat. In Kubernetes, liveness and readiness probes tell the scheduler when to route traffic and when to restart. Avoid using GET / as a health endpoint; create a lightweight /healthz that verifies dependencies quickly without doing your app’s taxes.

Dockerfile
HEALTHCHECK --interval=10s --timeout=2s --retries=5 CMD ["/app", "healthcheck"]

k8s snippet
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 2
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 1
  failureThreshold: 3

Exit codes matter too. If your app detects an unrecoverable state, crash and let the orchestrator do its job. Don’t implement a homemade process supervisor inside the container; one process, one responsibility. Logging to stdout/stderr keeps life simple—collectors can scrape and route without sidecars that add footguns. Finally, label images with build metadata (commit SHA, build time) so operators can tie incidents to exact artifacts. It’s not glamorous, but a good healthcheck and a real readiness gate will save more headaches than a hundred Grafana dashboards staring into the void.
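
For the build-metadata point, the OCI annotation keys are standard; the build args are ones you would pass in from CI, so treat the names as a sketch:

Dockerfile snippet
ARG GIT_SHA
ARG BUILD_TIME
# Standard OCI labels let operators trace a running container back to a commit and build
LABEL org.opencontainers.image.revision=$GIT_SHA \
      org.opencontainers.image.created=$BUILD_TIME

Supply the values with --build-arg GIT_SHA=$(git rev-parse HEAD) and a timestamp from your CI system.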

Ship With Receipts: SBOMs, Provenance, And Policy Gates
Supply chain talk can get tinfoil-hat quickly, but there’s a practical middle ground: ship with receipts. We want to answer “What’s in this image? Who built it? Did anything change?” without a war room. SBOMs cover “what,” provenance covers “who/when/how,” and policy gates stop risky artifacts from deploying. Modern builders can generate SBOMs during docker build and attach attestations to images in registries, so the extra friction is minimal once wired in.

Start by making SBOM generation a non-optional step in CI. Store it next to the image or in the registry if supported. Next, create provenance attestations that tie the image digest to the source commit, the builder identity, and the build parameters. If you’re picking a framework, the SLSA levels are a reasonable scoreboard; we don’t have to hit level 4 to get real value. Then add a deployment gate that checks SBOMs for known-bad dependencies and blocks images missing attestations. The point isn’t to achieve eternal purity; it’s to prevent “oops, that was built from a developer laptop at 11 p.m.” from ever shipping.
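
A sketch of the build side with BuildKit attestations, plus one way a gate could check that an attestation actually exists; image names are placeholders, and your policy engine may do this differently:

attestation example:
# Build with SBOM and provenance attestations attached to the pushed image
docker buildx build --sbom=true --provenance=mode=max \
  -t registry.example.com/myorg/api:1.4.7 --push .

# In the deploy gate, inspect the pushed image; an empty result means no provenance was attached
docker buildx imagetools inspect registry.example.com/myorg/api:1.4.7 \
  --format '{{ json .Provenance }}'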

We also like to keep diffs small and explainable. When we add a new base image or bump a major library, call it out in the PR description and paste the new image size. Those little habits give reviewers the context to ask good questions. If you want a cheat-sheet-friendly intro, Docker’s best practices page and the BuildKit docs cover the mechanics; the policy is on us to enforce. It’s one of the rare places where a tiny bit of red tape saves a big mess later.

Practical Odds And Ends We Wish We Knew Earlier
There are a few small moves that pay outsize dividends. First, layer order is king. Put the slowest-changing steps up top: dependency install before copying source, OS package updates before application code. The cache will love you, and your teammates will stop glaring at CI. Second, be careful with globs. COPY . . happily drags in node_modules or build directories if you’re not careful; maintain a real .dockerignore and treat it like code. Third, reduce network flakiness by pinning registries and using mirrors close to your runners. A flaky upstream adds more time than any micro-optimization.
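
A starter .dockerignore along those lines; the entries are examples to tune for your repository:

.dockerignore
.git
node_modules
dist
*.log
.env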

We’ve also learned to keep test fixtures out of runtime images. Build stages can run tests and linting with all the comfy tools you want, then throw them away as the final stage strips down. When you really need to dive into a container’s guts, pull a debug variant locally or exec into a running pod with ephemeral tools rather than shipping them in prod. Finally, document what “good” looks like: target image sizes, expected build times, cache hit ratios. When someone opens a PR that turns a 70 MB image into a 1.2 GB behemoth, the review tools should make that obvious without a scavenger hunt. If you want further reading on image and container hygiene, the OWASP sheet we mentioned and the Dockerfile best practices cover the basics without drowning you in acronyms.
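
For the “exec with ephemeral tools” route, Kubernetes ephemeral debug containers do the job; the pod name below is a placeholder and the image is whatever toolbox you prefer:

kubectl debug example:
# Attach a throwaway busybox container that targets the app container's process namespace
kubectl debug -it pod/api-6d5f7c9b8-abcde --image=busybox --target=api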

Cost And Capacity: The Quiet Wins Of Good Docker Hygiene
Docker discipline doesn’t just make engineers happy; it makes finance happy too. Smaller images cut egress costs and speed deployments, especially across regions. Faster builds reduce runner minutes and unblock developers sooner, which snowballs into more releases and less “batching” of risky changes. Lean runtime images mean fewer CVEs to triage, and that time doesn’t come back once it’s gone. When the root filesystem is read-only and the app runs as non-root, we see fewer noisy incidents where a log directory filled up or a temp file went rogue.

Cluster-level capacity also improves. Shorter pull times mean faster scaling events and less churn during rollouts. When we pin images by digest and probe readiness properly, we get cleaner blue-green or canary behavior with fewer partial outages. We’ve watched teams shave minutes off deploys and days off incident investigations with nothing more exotic than careful Dockerfiles, digest pinning, and healthchecks. That’s the kind of return on effort we want: simple changes that keep paying rent.

If you need a reference for image handling on the platform side, the Kubernetes images and pulling guidance is a gold standard. It demystifies the pull-through cache and informs sane defaults for imagePullPolicy. Armed with that and a modest SBOM/provenance setup via SLSA, we can deliver containers that are fast to build, cheap to move, safe to run, and dull in all the best ways. Let’s normalize that bar as table stakes and save our excitement for things that actually deserve it—like release nights that end before dinner.
