Ship Faster: 97% Compliance Without Slowing Devs

Real guardrails, not red tape, that keep auditors happy and engineers shipping.

Why Compliance Feels Slow—and How We Fix It

Compliance gets a bad rap because it’s usually bolted on after systems are built. We write code, provision infra, release features, and then someone appears with a spreadsheet and a frown. That’s the moment we invent “process,” which too often means manual checklists, tribal knowledge, and last‑minute fire drills. No surprise it feels like we’re dragging a piano up a staircase. The problem isn’t the controls themselves. The problem is timing, proof, and ownership. If no one owns a control, and proof arrives only at audit time, we’re guaranteed pain. So let’s flip the model: controls belong to product teams, evidence is produced by pipelines, and audits become observation, not excavation.

Our north star is simple: make the right thing the fast thing. That means turning policy into code that’s tested like any other code, surfacing compliance drift in the same dashboards we use for availability, and shrinking feedback cycles from months to minutes. We still map to frameworks—HIPAA, ISO 27001, SOC 2, PCI, or the evergreen controls catalog in NIST SP 800-53—but we don’t make engineers read PDFs for fun. We translate requirements into testable rules: encryption on, logging retained, least privilege enforced, supply chain signed, incident playbooks rehearsed.

None of this works without a culture change: security and compliance folks pair with dev and ops from day one. We agree on a control catalog, publish the tests, and wire them into CI. Then we measure compliance as a first-class SLO. When we do this well, “compliant” stops being a milestone and becomes a property of every build.

Codifying Controls: Turn Policies Into Tests

Policies in docs are hope; policies in code are certainty. We want controls defined as tests that block unsafe changes and produce durable evidence. Two patterns carry us: runtime tests (e.g., configuration checks against live environments) and pre-merge tests (e.g., scanning IaC and container images). We keep them in version control, require code review, and run them in CI.

For infrastructure and applications, we like compliance profiles that read like unit tests. Here’s a tiny example using InSpec (with the inspec-aws resource pack) to enforce encryption and access logging across every S3 bucket in an account:

# Enumerate every bucket in the account and assert each control against it.
control 's3-001-encryption' do
  title 'S3 buckets must enforce encryption'
  aws_s3_buckets.bucket_names.each do |b|
    describe aws_s3_bucket(bucket_name: b) do
      it { should have_default_encryption_enabled }
    end
  end
end

control 's3-002-access-logging' do
  title 'S3 buckets must have server access logging'
  aws_s3_buckets.bucket_names.each do |b|
    describe aws_s3_bucket(bucket_name: b) do
      it { should have_access_logging_enabled }
    end
  end
end

We organize these controls by domain (identity, network, data, logging) and align them to the framework we care about. Engineers run them locally, then CI runs them on PRs and against staging/prod. The test output becomes evidence—timestamped, versioned, and linkable.

We also capture policy for platform resources (Kubernetes, Terraform) with admission policies and static analysis. You’ll see Rego, constraint frameworks, or YAML-based policies that are readable by humans and enforced by machines. The trick is to keep rules small and composable, annotate them with business context, and retire them when better controls supersede them. No one reads a 700-line policy. Everyone understands “deny public S3 unless ticket ABC-123 is referenced.”
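To make that concrete, here’s a minimal sketch of the metadata we attach to a rule. The schema below is illustrative only, not any particular engine’s format; the point is that ownership, framework mapping, and exception handling travel with the rule itself:

# Illustrative rule annotation; adapt the schema to your policy engine.
id: s3-no-public-access
description: Deny public S3 buckets unless an approved exception ticket is referenced
owner: platform-team
maps_to:
  - framework: NIST-800-53
    control: AC-3
  - framework: SOC2
    control: CC6.1
severity: high
exception:
  requires_ticket: true     # e.g., a reference like ABC-123 in resource tags
  max_duration_days: 30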

And because tests are code, we lint them, peer review them, and release them with semantic versioning—compliance that actually ships.
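For instance, the semantic version lives in the profile’s metadata file. A minimal inspec.yml for the baseline profile used above (names are ours):

# profiles/acme-baseline/inspec.yml -- released with semantic versioning
name: acme-baseline
title: Acme Baseline Controls (identity, data, logging domains)
maintainer: platform-team
summary: Encryption, logging, and access controls mapped to SOC 2 and NIST 800-53
version: 2.3.1              # bumped like any other library release
supports:
  - platform: aws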

Pipelines That Prove It: Evidence as a Service

If it isn’t automatic, it’s not evidence; it’s folklore. We turn our CI/CD into an evidence factory that signs artifacts, attests to how they were built, and publishes control results. Think of it as “compliance by construction.” Every pipeline run does the same things, in the same order, and leaves breadcrumbs an auditor can follow without summoning us to a call.

A minimal GitHub Actions example that builds and pushes an image, generates an SBOM, runs compliance tests, signs the image, and publishes an SBOM attestation (it assumes syft, inspec, and cosign are available on the runner):

name: build-and-prove
on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # allow pushing to GHCR with GITHUB_TOKEN
    steps:
      - uses: actions/checkout@v4
      - name: Log in to GHCR
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build and push container
        run: |
          docker build -t ghcr.io/acme/app:${{ github.sha }} .
          docker push ghcr.io/acme/app:${{ github.sha }}
      - name: SBOM
        run: syft ghcr.io/acme/app:${{ github.sha }} -o cyclonedx-json > sbom.json
      - name: Compliance tests
        run: inspec exec profiles/acme-baseline --reporter json:inspec.json
      - name: Sign image and attest SBOM
        env:
          COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_KEY }}
          COSIGN_PASSWORD: ${{ secrets.COSIGN_PASSWORD }}
        run: |
          cosign sign --yes --key env://COSIGN_PRIVATE_KEY ghcr.io/acme/app:${{ github.sha }}
          cosign attest --yes --key env://COSIGN_PRIVATE_KEY \
            --type cyclonedx --predicate sbom.json ghcr.io/acme/app:${{ github.sha }}
      - name: Upload evidence
        if: always()   # capture evidence even when an earlier step fails
        uses: actions/upload-artifact@v4
        with:
          name: evidence-${{ github.run_id }}
          path: |
            sbom.json
            inspec.json

We use short, boring steps that always run, even if a previous step fails, so we capture failure evidence. We sign images and attestations with tools like Sigstore Cosign, and we store outputs in a write-once location (artifact store, bucket with retention locks). Each artifact is traceable to a commit, a pipeline run, and a person (or service account) who approved it.
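The write-once location can be as simple as an S3 bucket with Object Lock. A minimal CloudFormation sketch (the bucket name and retention window are assumptions for illustration):

# Evidence bucket: objects cannot be altered or deleted until retention expires.
EvidenceBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: acme-compliance-evidence
    VersioningConfiguration:
      Status: Enabled             # required for Object Lock
    ObjectLockEnabled: true
    ObjectLockConfiguration:
      ObjectLockEnabled: Enabled
      Rule:
        DefaultRetention:
          Mode: COMPLIANCE        # even admins cannot shorten retention
          Days: 400               # roughly a 13-month audit window

COMPLIANCE mode is the strict option; GOVERNANCE mode permits privileged override, which may be the better trade-off for some teams.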

Finally, we wire pipeline results into our ticketing system. A release ticket automatically updates with links to SBOMs, test results, and signatures. Auditors don’t need to email us; the ticket is the paper trail.

Least Privilege Without Tears: IAM That Scales

Least privilege is where good intentions go to die, usually in a pile of JSON. We keep it manageable by standardizing roles, scoping access by environment, and granting temporary, just-in-time privileges. Humans get broad read access and narrow write access; workloads get even narrower. We also annotate policies with tags and ownership so stale permissions are easy to find and prune.

Here’s a tiny AWS IAM example that allows read-only access to one bucket and denies public ACLs everywhere, with an ownership tag to help audits:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlySpecificBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::acme-team-logs",
        "arn:aws:s3:::acme-team-logs/*"
      ],
      "Condition": {
        "StringEquals": { "aws:PrincipalTag/owner": "platform" }
      }
    },
    {
      "Sid": "DenyPublicACLs",
      "Effect": "Deny",
      "Action": "s3:PutBucketAcl",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "public-read" }
      }
    }
  ]
}

We provision identities and roles with Terraform or CloudFormation modules that default to “no,” then expose safe, narrow outputs for app teams. When someone needs elevated rights, they request a role with a TTL; the approval and the session events land in the logs automatically. That’s not only safer, it’s excellent evidence for auditors.
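As a sketch, the TTL can be enforced by the platform rather than by promises: cap the session length and require MFA on the role itself (names and the account ID are placeholders):

# CloudFormation sketch: an elevated role with a one-hour session cap.
BreakGlassAdminRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: acme-break-glass-admin
    MaxSessionDuration: 3600              # credentials expire after one hour
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws:iam::123456789012:root   # placeholder account
          Action: sts:AssumeRole
          Condition:
            Bool:
              aws:MultiFactorAuthPresent: "true"  # MFA required to assume

Every AssumeRole call then lands in CloudTrail with the caller’s identity, which is exactly the session evidence described above.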

Cloud providers publish solid patterns. We’ve found the Security pillar of the AWS Well-Architected Framework (its identity guidance in particular) pragmatic and easy to translate into controls. The key is avoiding bespoke snowflakes. We’d rather have three boring roles that cover 95% of use cases and a review process for the remaining 5% than a thousand handcrafted policies no one understands.

Data Boundaries: Encryption, Retention, and ‘Who Touched What’

Data controls are where frameworks converge: encrypt at rest, encrypt in transit, retain what you need (and not a byte more), and know exactly who accessed sensitive data. We make these defaults, not preferences. Storage modules create KMS-backed resources with customer-managed keys. Services must prove TLS is enforced. Database connections require IAM or short-lived credentials. Our CI checks this, and our runtime monitoring verifies it keeps working when no one’s looking.
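For example, “services must prove TLS is enforced” can be a default bucket policy that denies any request arriving over plain HTTP. A CloudFormation sketch (bucket name assumed):

# Deny non-TLS access to a data bucket; baked into our storage module defaults.
DataBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: acme-customer-data
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Sid: DenyInsecureTransport
          Effect: Deny
          Principal: "*"
          Action: "s3:*"
          Resource:
            - arn:aws:s3:::acme-customer-data
            - arn:aws:s3:::acme-customer-data/*
          Condition:
            Bool:
              aws:SecureTransport: "false"   # true only for HTTPS requests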

Retention and deletion are boring until they’re very exciting. We encode retention periods in infrastructure code and tag data sets with classification levels. Backups inherit those tags. Lifecycle policies delete on schedule, and those deletions are logged. When a regulation changes or a customer requests erasure, we update config, run the change through review, and the same automation that created the data deletes it—with an audit trail.
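Encoded retention can be as small as a lifecycle rule scoped by a classification tag; changing it means a reviewed commit, and the resulting deletions show up in logs (tag names are ours):

# Lifecycle rule: transient data expires on schedule, per its classification tag.
LogsBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: acme-team-logs
    LifecycleConfiguration:
      Rules:
        - Id: expire-transient-data
          Status: Enabled
          ExpirationInDays: 90        # the retention period lives in code
          TagFilters:
            - Key: classification     # applied when the data set is created
              Value: transient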

Access transparency closes the loop. Every read of a sensitive table or bucket must be attributable to a human or workload identity, with purpose and ticket reference if required. We don’t build a bespoke panopticon; we enable existing logs (cloud audit logs, database audit, proxy logs), centralize them, and attach identity context. Then we alert on oddities: new principals touching sensitive data, spikes in reads, cross-region access. This is compliance and security saying the same thing: show your work, and notice when the work is weird. No heroes, just telemetry.
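“Enable existing logs” is often a single resource. A CloudFormation sketch that turns on object-level audit logging for one sensitive bucket (bucket names assumed):

# Data-event logging: every object read/write in the sensitive bucket is recorded.
SensitiveDataTrail:
  Type: AWS::CloudTrail::Trail
  Properties:
    IsLogging: true
    S3BucketName: acme-audit-logs                 # central log destination
    EventSelectors:
      - ReadWriteType: All                        # capture reads and writes
        IncludeManagementEvents: false
        DataResources:
          - Type: AWS::S3::Object
            Values:
              - arn:aws:s3:::acme-customer-data/  # every object in the bucket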

Kubernetes That Audits Itself: Admission, Drift, and SBOMs

Kubernetes gives us enough rope to knit a sweater or tie ourselves in knots. We prefer sweaters. Two layers help: admission control to block bad configs before they land, and drift detection to catch what slips through. We also tie images to attestations and SBOMs so clusters run only what we can prove we built.

Admission policies stop the classics: privileged pods, hostPath mounts, missing resource limits, no network policies. A simple Kyverno rule to require non-root and resource limits:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: safe-pods
spec:
  validationFailureAction: Enforce    # block violations rather than just audit
  rules:
    - name: require-non-root-and-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pods must run as non-root and set resource limits."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
            containers:
              # "?*" requires the field to exist with a non-empty value
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"

We like tools that are explicit and visible in Git. Policies live with platform code, get code-reviewed, and are versioned like anything else. Kyverno’s policy language is approachable, and its docs (kyverno.io) are solid.

Next, drift. We regularly compare desired state (Git) to actual state (cluster) and open pull requests when they diverge. We treat drift as a bug, not a vibe. Finally, we verify supply chain: the cluster only admits images signed by our key and, optionally, with SBOMs present and vulnerability thresholds met. Cosign admission controllers pair nicely with this model. When we demo to auditors, we show a forbidden deployment, the admission error, and the evidence from last week’s successful deploy. Clusters that say “no” save us from awkward yeses later.
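Here’s a sketch of the admission half using Kyverno’s image verification; pods whose images lack a valid Cosign signature never start (the registry path and key are placeholders):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/acme/*"          # only images from our namespace
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <our cosign public key>
                      -----END PUBLIC KEY-----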

Prepare for the Audit: Dry Runs, Playbooks, and Metrics

Audits go smoother when they’re routine. We run quarterly “mini-audits” where we pretend to be auditors and ask for five things we should be able to produce in under five minutes: access reviews, evidence of encryption, last incident postmortem, sample deployment attestation, and a screenshot of drift alerts. We time ourselves. If something takes too long, we fix the pipeline or the documentation—not the stopwatch.

We keep a lightweight playbook with links, not prose. It lists control families, who owns them, where the tests live, and where evidence lands. Ticket templates include the links for each release and change. That way, when someone asks “prove your backups restore,” we paste a link to the last restore test job and its logs. It’s not flashy, but it’s trustworthy.
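Ours is literally a YAML file in the repo. The shape below is our own convention, not a standard:

# playbook.yaml -- links only; the prose lives in the linked docs.
controls:
  - family: data-protection
    owner: platform-team
    tests: https://github.com/acme/compliance/tree/main/profiles/acme-baseline
    evidence: s3://acme-compliance-evidence/data-protection/
    runbook: https://wiki.acme.example/runbooks/backup-restore
  - family: identity
    owner: security-team
    tests: https://github.com/acme/compliance/tree/main/profiles/identity
    evidence: s3://acme-compliance-evidence/identity/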

Metrics help us stay honest. We track:
– Percent of builds with passing compliance tests.
– Time to remediate a failing control.
– Percentage of identities with MFA and hardware-backed keys.
– Number of policies with owners and SLAs.
– Drift incidents per environment, per month.
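As one concrete example, the first metric falls straight out of CI telemetry. A Prometheus recording-rule sketch, assuming a hypothetical ci_builds_total counter labeled by compliance-test outcome:

# Share of builds whose compliance tests passed over the trailing 30 days.
groups:
  - name: compliance-slo
    rules:
      - record: compliance:build_pass_ratio_30d
        expr: |
          sum(increase(ci_builds_total{compliance_tests="pass"}[30d]))
          /
          sum(increase(ci_builds_total[30d]))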

We publish these alongside uptime and performance. Compliance isn’t a separate dashboard in a locked room; it’s part of how we run the system. When the annual audit arrives, we don’t scramble. We pull up our “evidence as a service” bucket, walk through the pipeline, and let the artifacts speak. An auditor once told us, “You made my job boring.” We took it as a compliment of the highest order.
