Helm Charts Without Tears: Our Practical Team Playbook

We’ll ship Kubernetes apps faster, with fewer midnight surprises.

Why We Keep Coming Back To helm

We’ve all been there: Kubernetes manifests multiplying like rabbits, copy-pasted across environments, and a tiny change turning into a PR that touches 27 YAML files. That’s where helm earns its keep. At its best, helm is a packaging and templating tool that helps us standardise deployments, parameterise configuration, and version releases like we mean it.

The big win isn’t “templating YAML” (though, yes, it does that). The win is repeatability. A chart becomes the unit of delivery: same chart, different values files, predictable outcomes. That’s handy when we’re moving from dev → staging → prod and trying not to accidentally turn on “debug: true” in prod because someone forgot to delete a line. We also get release history, rollbacks, and a clean way to track what’s running.

That said, helm can also be… creative. Templates can get unreadable, values can become a junk drawer, and debugging can feel like arguing with a printer. Our goal in this playbook is simple: use helm for what it’s good at, avoid the traps, and keep charts boring—in the best possible way.

If you want the official docs as a reference, they’re solid: Helm Documentation. We’ll stick to the practical bits we’ve learned from shipping real workloads, and yes, from breaking things so you don’t have to.

Our Chart Structure: Keep It Boring And Predictable

A chart that’s easy to navigate is a chart that survives contact with a team. We aim for “someone new can find the Service in 10 seconds.” The default chart structure is fine, but we add a few conventions.

First, we keep templates small and single-purpose. Giant deployment.yaml files stuffed with conditionals tend to turn into folklore (“Only Priya understands why that if-statement exists”). Instead, we split by resource type and occasionally by feature: deployment.yaml, service.yaml, ingress.yaml, hpa.yaml, pdb.yaml, serviceaccount.yaml.
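Laid out, that convention looks like this (file names are ours; adjust to taste):

```text
charts/myapp/
├── Chart.yaml
├── values.yaml
├── templates/
│   ├── _helpers.tpl        # names and labels only
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── hpa.yaml
│   ├── pdb.yaml
│   └── serviceaccount.yaml
└── README.md
```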

Second, we treat values.yaml as documentation, not just defaults. If a value matters, it gets a comment. If it’s complicated, we include an example. We also prefer making “safe defaults” the default: sane resource requests, non-root security context, and liveness/readiness probes where appropriate.
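A sketch of what "values as documentation" looks like in practice (the values themselves are illustrative):

```yaml
# Number of pod replicas. Override per environment;
# ignored if autoscaling.enabled is true.
replicaCount: 2

image:
  repository: registry.example.com/myapp
  # No default tag on purpose: the templates use `required`,
  # so a deploy fails fast instead of silently pulling "latest".
  tag: ""
  pullPolicy: IfNotPresent

# Safe-by-default posture; loosen only with a documented reason.
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 10001

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    memory: 256Mi
```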

Third, we keep helpers in _helpers.tpl and don’t let them become a second programming language. Helpers should format names/labels, not implement business logic.
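A helper that stays within those bounds looks like this — formatting names and labels, nothing else (a sketch; the myapp.* names are illustrative):

```yaml
{{/* _helpers.tpl: formatting only, no business logic */}}
{{- define "myapp.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{- define "myapp.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
```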

Finally, we version charts deliberately. SemVer works: bump patch for template tweaks, minor for new values/features, major for breaking changes. If we publish charts internally, we store them in an OCI registry so we’re not babysitting a chart museum. Helm supports OCI nicely now, and registries are what we already operate.
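Concretely, the chart version lives in Chart.yaml, and publishing to an OCI registry is two commands (registry URL is made up):

```yaml
# Chart.yaml (excerpt)
apiVersion: v2
name: myapp
version: 1.4.2        # SemVer: patch = template tweak, minor = new value, major = breaking
appVersion: "2.0.1"   # version of the app itself; informational

# Publish (Helm 3.8+), run from the chart's parent directory:
#   helm package ./myapp
#   helm push myapp-1.4.2.tgz oci://registry.example.com/charts
```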

For reference on chart layout and conventions, the docs are a good backstop: Chart Best Practices.

Values Strategy: Stop Turning values.yaml Into A Junk Drawer

values.yaml is where good intentions go to… accumulate. We’ve seen charts where values files become unsearchable and every environment has a bespoke override that nobody trusts. Our rule: values should be predictable, grouped, and minimised.

We start with a consistent top-level structure across charts:

  • image: repository/tag/pullPolicy
  • service: type/port
  • ingress: enabled/className/hosts/tls
  • resources: requests/limits
  • securityContext / podSecurityContext
  • env and envFrom (carefully)
  • autoscaling if relevant
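That structure, as a values.yaml skeleton (all values are placeholders):

```yaml
image:
  repository: registry.example.com/myapp
  tag: "1.2.3"
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 8080
ingress:
  enabled: false
  className: nginx
  hosts: []
  tls: []
resources:
  requests: {cpu: 100m, memory: 128Mi}
  limits: {memory: 256Mi}
podSecurityContext:
  runAsNonRoot: true
env: []          # extra container env vars, used sparingly
envFrom: []      # ConfigMap/Secret references, also sparingly
autoscaling:
  enabled: false
```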

Then we decide what belongs in values versus templates. If it’s environment-specific (domain name, replica count, feature toggle), it’s a value. If it’s structural (how labels are constructed, which selectors match), it’s template logic—ideally minimal.

We also avoid “values sprawl” by deleting dead values. If the chart no longer uses it, remove it. Leaving it around “just in case” is how you end up with three ways to configure the same thing and none of them correct.

One more thing: we don’t store secrets in values files. Not even in “private repos.” Use external secret managers and sync them in. If you need a Kubernetes-native approach, tools like External Secrets exist, but whatever we choose, the principle stands: helm renders manifests; secrets should come from secret systems.
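As an illustration of that principle with External Secrets (the fields are from its v1beta1 API; the store and key names are made up):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-db
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend        # a ClusterSecretStore configured separately
    kind: ClusterSecretStore
  target:
    name: myapp-db             # the Kubernetes Secret that gets created and synced
  data:
    - secretKey: password
      remoteRef:
        key: prod/myapp/db-password
```

The chart then references the Secret by name; the secret material itself never passes through helm.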

For background on values and overrides, see: Helm Values.

Template Hygiene: Less Logic, More Readability (With Examples)

Helm templates use Go templating, which is powerful enough to let us make bad decisions quickly. Our template hygiene rules are simple:

  1. Prefer clarity over cleverness.
  2. Use include helpers for names/labels, not branching logic.
  3. Fail fast when required values are missing.
  4. Indentation is sacred. YAML is petty and unforgiving.

Here’s a small, readable snippet we use often: required image tag, consistent labels, and optional annotations.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount | default 2 }}
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "myapp.selectorLabels" . | nindent 8 }}
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
    spec:
      containers:
        - name: myapp
          image: "{{ .Values.image.repository }}:{{ required "image.tag is required" .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy | default "IfNotPresent" }}

That required call saves us from deploying “latest” by accident (or worse, deploying nothing and wondering why pods won’t start).

For debugging templates, we lean on helm template and helm lint before anything touches a cluster. And when something looks off, we render with the same values file the pipeline uses—no guessing.

If you want to go deeper on templating functions and patterns, this is the canonical guide: Template Guide.

Releases And Environments: One Chart, Many Personalities

We want one chart per app (most of the time) and multiple values files per environment. The trick is to avoid drifting into “dev is a different app than prod.” We keep the core deployment shape consistent: same ports, same probes, same resource classes—just different sizes and external wiring.

A pattern that’s worked well:

  • values.yaml = sane defaults, local-friendly
  • values-dev.yaml = dev overrides
  • values-staging.yaml
  • values-prod.yaml
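An environment file then carries only the deltas — for example, a values-prod.yaml that resizes the workload and wires up external traffic, and nothing else (hostnames are illustrative):

```yaml
replicaCount: 4
resources:
  requests: {cpu: 500m, memory: 512Mi}
  limits: {memory: 1Gi}
ingress:
  enabled: true
  hosts:
    - host: myapp.example.com
      paths:
        - path: /
          pathType: Prefix
```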

Then we standardise commands and naming. We typically name releases after the app and environment, and keep namespaces aligned:

  • Namespace: myapp-dev
  • Release: myapp

Or if namespaces are shared, invert it:

  • Namespace: apps
  • Release: myapp-dev

The point is: choose a convention, document it, and enforce it.

We also care about upgrade safety. We run helm upgrade --install with --atomic in production-ish environments. If the upgrade fails, helm rolls back automatically. It’s not magic, but it beats “half-upgraded and now it’s Friday.”

Rollbacks are also a reason we keep values files in Git and pinned. If we can’t reproduce what we installed, we can’t reliably roll it back. Helm’s release history helps, but our Git history is what we trust most when we’re stressed and caffeinated.

For release behaviour details, this is handy: helm upgrade.

CI/CD With helm: Make The Pipeline Do The Boring Work

We don’t want “works on my laptop” to be an accepted deployment strategy. Our pipeline should run the boring checks, render manifests, and block obviously broken releases.

A minimal pipeline flow we like:

  1. helm lint
  2. helm template (render)
  3. YAML validation (kubeconform or similar)
  4. Policy checks if you have them (OPA/Gatekeeper-style)
  5. Deploy with helm upgrade --install

Here’s a lightweight example script we’ve used in CI. It’s not fancy, but it’s dependable:

#!/usr/bin/env bash
set -euo pipefail

CHART_DIR=./charts/myapp
VALUES=./env/values-prod.yaml
NAMESPACE=myapp-prod
RELEASE=myapp

helm lint "$CHART_DIR"

helm template "$RELEASE" "$CHART_DIR" \
  --namespace "$NAMESPACE" \
  -f "$VALUES" > rendered.yaml

# Optional: validate rendered output before applying
# kubeconform -strict -summary rendered.yaml

helm upgrade --install "$RELEASE" "$CHART_DIR" \
  --namespace "$NAMESPACE" \
  --create-namespace \
  -f "$VALUES" \
  --atomic \
  --timeout 5m

We also pin chart dependencies and avoid pulling “whatever is latest” at build time. If dependencies update, we want a PR for it. Surprises belong in birthday parties, not deployments.
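Pinning looks like this in Chart.yaml — exact versions, no floating ranges, with Chart.lock committed alongside (the repository URL is one example; any chart repo or OCI registry works):

```yaml
# Chart.yaml (excerpt)
dependencies:
  - name: redis
    version: "18.6.1"    # exact pin; bumping it is a reviewable PR
    repository: oci://registry-1.docker.io/bitnamicharts
    condition: redis.enabled
```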

If you’re using GitOps, helm can still fit. Many teams run helm rendering inside GitOps controllers or commit rendered manifests. Either way, the principle is the same: deterministic inputs, reviewable outputs, repeatable deployments.

Debugging And Operations: When helm Misbehaves

When something fails, our first instinct is not to re-run the pipeline three times and hope Kubernetes “calms down.” We troubleshoot methodically.

Our go-to commands:

  • helm status <release> -n <ns>: what helm thinks happened
  • helm get values <release> -n <ns>: what values were applied
  • helm get manifest <release> -n <ns>: what was actually installed
  • helm history <release> -n <ns>: what changed and when

We also render locally with the exact same inputs:

  • helm template ... -f values-prod.yaml to reproduce the YAML
  • compare it against the live manifest if needed

A common failure mode is “resources created but pods not ready.” Helm reports a timeout, and folks assume helm is broken. Usually the workload is broken (image pull, readiness probe, missing Secret). So we also jump to Kubernetes-level checks:

  • kubectl describe pod
  • kubectl logs
  • kubectl get events

We’re also careful with hooks. Hooks can be useful for migrations or pre-install tasks, but they can also create confusing state if they fail or are re-run unexpectedly. If we use hooks, we document them and ensure they’re idempotent. Nothing like a “pre-upgrade job” that drops a table twice to make everyone friends again.
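The hook annotations we reach for on an idempotent migration Job look like this (the annotation keys and values are Helm's; the Job itself is a sketch):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "myapp.fullname" . }}-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"
    # Delete any previous run before creating a new one, and clean up on
    # success, so a re-run never collides with stale state.
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          args: ["migrate", "--if-needed"]   # migration tool must be safe to run twice
```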

Finally, we keep an eye on drift. If someone kubectl edits resources that helm manages, helm will eventually fight back. We set expectations: helm-managed resources shouldn’t be manually edited. If a hotfix is needed, it becomes a chart change right after.
