Compliance Without Chaos In Modern Delivery
Practical ways to keep auditors calm and teams shipping
Why Compliance Feels Harder Than It Should
We’ve all seen it happen: a team is moving quickly, customers are happy, and then compliance walks into the room carrying a spreadsheet large enough to blot out the sun. Suddenly, every change needs evidence, every access path needs a reason, and everyone starts using phrases like “control objective” as if that’s a normal way to spend a Tuesday.
The awkward truth is that compliance itself usually isn’t the real problem. The real problem is bolting it on after the fact. When controls live in wikis, approvals live in inboxes, and evidence lives in someone’s memory, things get messy fast. We end up doing heroic manual work before an audit, which is a bit like cleaning the whole house because a friend said they “might pop by”.
A healthier approach is to treat compliance as part of how we build and run systems, not as a separate activity that appears every quarter. If our delivery workflows already capture who changed what, when it changed, how it was reviewed, and whether it passed policy checks, then many compliance needs stop feeling dramatic. We’re not creating extra work so much as shaping existing work in a way that leaves a trail.
This matters whether we’re dealing with SOC 2, ISO 27001, or sector rules like PCI DSS from the PCI Security Standards Council. Different frameworks use different language, but they often ask familiar questions: do we control access, do we manage change, do we protect data, and can we prove it? Once we notice that pattern, compliance stops looking mystical and starts looking operational.
Build Controls Into The Delivery Path
If we want compliance to stop interrupting delivery, we need controls in the path of delivery itself. Not around it. Not in a side document. In the actual path. That means code review, automated testing, approval rules, deployment restrictions, and artifact traceability should all happen in the same systems engineers already use.
This is where teams often overcomplicate things. We don’t need a grand compliance platform before we can act like adults. We need a few clear gates and a reliable audit trail. For example, every production change should map to a pull request, every pull request should show reviews, and every deploy should be tied to a known artifact built by CI. If someone asks, “Who approved this change?” the answer should come from the system, not from Steve, who is on holiday and never reads Slack anyway.
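When the system of record is Git and CI, “who approved this change?” becomes a query rather than a Slack thread. As a hedged sketch (the repo, PR number, and token handling here are hypothetical), a small script could pull review metadata from the GitHub REST API and report each reviewer’s latest decision:

```python
import json
from urllib.request import Request, urlopen

GITHUB_API = "https://api.github.com"  # GitHub REST API base URL

def approvers(reviews: list) -> list:
    """Return logins whose latest review state is APPROVED.

    Reviews come back in chronological order, so later entries
    overwrite earlier ones for the same reviewer."""
    latest = {}
    for review in reviews:
        latest[review["user"]["login"]] = review["state"]
    return sorted(user for user, state in latest.items() if state == "APPROVED")

def fetch_pr_reviews(owner: str, repo: str, number: int, token: str) -> list:
    """Fetch review metadata for one pull request (needs repo read access)."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/pulls/{number}/reviews"
    req = Request(url, headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Offline demo with the payload shape the reviews endpoint returns.
    sample = [
        {"user": {"login": "alice"}, "state": "APPROVED"},
        {"user": {"login": "bob"}, "state": "CHANGES_REQUESTED"},
        {"user": {"login": "bob"}, "state": "APPROVED"},
    ]
    print(approvers(sample))
```

Run on a schedule, something like this can also feed the evidence store directly, so approval records exist before anyone asks for them.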
A simple branch protection and review policy goes a long way:
# .github/workflows/policy-check.yml
name: policy-check
on:
  pull_request:
    branches: [ main ]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check for ticket reference
        env:
          # Pass the title through an env var rather than interpolating it
          # into the script, which avoids shell injection via PR titles.
          PR_TITLE: ${{ github.event.pull_request.title }}
        run: |
          grep -E 'PROJ-[0-9]+' <<< "$PR_TITLE"
      - name: Require Terraform formatting
        run: |
          terraform fmt -check -recursive
Pair that with repository settings requiring two reviewers and blocking direct pushes to main, and we’ve turned a hand-wavy process into an enforceable one. The nice part is that this helps engineering too. Fewer surprise changes, clearer ownership, and less detective work after incidents.
For broader guidance, we like leaning on NIST’s Secure Software Development Framework (SP 800-218) because it translates nicely into everyday engineering habits.
Evidence Collection Should Be Boring
A good compliance program makes evidence collection painfully boring. That’s the goal. If gathering proof still feels like a scavenger hunt across Jira, cloud consoles, ticketing systems, and screenshots named final-final-v2.png, we’ve missed the point.
Auditors usually want evidence that controls exist and are followed over time. We can make that easier by defining where each type of evidence comes from and avoiding manual exports wherever possible. CI logs prove tests ran. Git history proves reviews happened. IAM reports prove access was granted through a process. Monitoring systems show alert coverage and incident timelines. None of this is glamorous, but then neither is plumbing, and we’re quite fond of plumbing when it works.
One practical trick is maintaining an evidence map. For each control, list the system of record, the owner, and the retrieval method. That keeps the work repeatable and stops last-minute improvisation. We’ve found that even a simple spreadsheet can help at first, though eventually teams do better when evidence collection is scripted or pulled through APIs.
A basic evidence inventory might look like this:
controls:
  - id: CC6.1
    description: Logical access to production is restricted
    source_of_truth: Okta
    owner: Platform Team
    retrieval: "Monthly access export via API"
  - id: CC8.1
    description: Changes are authorized, tested, and approved
    source_of_truth: GitHub Actions + Pull Requests
    owner: Engineering Enablement
    retrieval: "PR metadata and workflow logs"
The point isn’t to build a museum of screenshots. It’s to create a small, dependable machine for proving what we already do. If we do that well, audits become less like oral exams and more like routine document checks.
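Once the map exists, turning it into a pre-audit checklist is mechanical. A minimal sketch, mirroring the inventory above as Python data (a real version would load the YAML with a library like PyYAML and call each system’s API):

```python
from dataclasses import dataclass

@dataclass
class Control:
    id: str
    description: str
    source_of_truth: str
    owner: str
    retrieval: str

# Mirrors the YAML inventory; in practice this would be loaded from the file.
CONTROLS = [
    Control("CC6.1", "Logical access to production is restricted",
            "Okta", "Platform Team", "Monthly access export via API"),
    Control("CC8.1", "Changes are authorized, tested, and approved",
            "GitHub Actions + Pull Requests", "Engineering Enablement",
            "PR metadata and workflow logs"),
]

def evidence_checklist(controls: list) -> list:
    """One actionable line per control, so nothing is improvised at audit time."""
    return [
        f"{c.id}: ask {c.owner} for '{c.retrieval}' from {c.source_of_truth}"
        for c in controls
    ]

if __name__ == "__main__":
    for line in evidence_checklist(CONTROLS):
        print(line)
```

The value isn’t the script; it’s that the retrieval steps are written down once and executed the same way every quarter.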
Access Control Is Where Good Intentions Go To Die
If there’s one area where compliance gets painfully real, it’s access control. Most teams start with noble intentions: least privilege, proper approvals, no shared accounts. Then reality arrives with emergency fixes, contractor onboarding, forgotten service users, and one ancient admin role that “only a few people” have. You can probably guess how that story ends.
The fix isn’t perfection. It’s discipline. We need clear joiner-mover-leaver processes, short-lived privileges where possible, and regular access reviews that someone actually completes. Access should be granted through groups, not one-off exceptions. Human access to production should be limited, logged, and challenged by default. Service accounts should have owners. Yes, all of them. Even the weird one nobody remembers creating in 2021.
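The mechanical half of an access review can be scripted. A hedged sketch, assuming we can export current grants from the identity provider and a staff roster from HR (the data shapes here are illustrative):

```python
from datetime import date, timedelta

def stale_grants(grants, roster, max_age_days=90, today=None):
    """Flag access that a review should question.

    `grants` is a list of (user, role, granted_on) tuples exported from
    the identity provider; `roster` is the set of current staff logins.
    Flags grants held by departed users and privileges older than the
    review window."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    findings = []
    for user, role, granted_on in grants:
        if user not in roster:
            findings.append(f"{user}: still holds {role} after leaving")
        elif granted_on < cutoff:
            findings.append(f"{user}: {role} unreviewed since {granted_on}")
    return findings
```

A human still decides what to revoke, but the script guarantees the review starts from the same complete list every time instead of whatever someone remembered to check.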
Cloud platforms make this easier if we use them properly. For example, role-based access in infrastructure code gives us a versioned record of who should have access and why:
resource "aws_iam_role" "readonly_ops" {
  name = "readonly-ops"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        AWS = "arn:aws:iam::123456789012:root"
      }
    }]
  })
}

resource "aws_iam_policy" "readonly_policy" {
  name   = "readonly-ops-policy"
  policy = file("policies/readonly.json")
}

# Attach the policy to the role; without this, the two resources are unrelated.
resource "aws_iam_role_policy_attachment" "readonly_attach" {
  role       = aws_iam_role.readonly_ops.name
  policy_arn = aws_iam_policy.readonly_policy.arn
}
That doesn’t solve governance on its own, of course. We still need review cadence, approval workflows, and identity provider integration. But codifying access removes a lot of ambiguity, which is where compliance issues love to hide.
For baseline guidance, the CIS Controls are still useful because they stay close to practical security hygiene instead of floating off into abstract paperwork.
Policy As Code Beats Policy As Folklore
Every organisation says it has policies. Fewer can show how those policies are enforced in running systems. That gap is where compliance becomes folklore: everyone vaguely believes there’s a rule, nobody can quite find it, and enforcement depends on whether the right person is awake.
We can do better by moving important rules closer to code. Policy as code won’t replace every written policy, but it does turn key requirements into checks that run continuously. Want to ensure storage is encrypted, public ingress is restricted, or mandatory tags are set? Great. Let’s write rules and run them in CI, or directly against deployed resources.
For infrastructure, tools like Open Policy Agent work well because they let us express rules without hand-waving:
package compliance

deny[msg] {
  input.resource_type == "aws_s3_bucket"
  not input.config.server_side_encryption
  msg := "S3 bucket must have server-side encryption enabled"
}

deny[msg] {
  input.resource_type == "aws_security_group"
  # Bind one ingress rule so the cidr and port checks apply to the same
  # entry; two separate `[_]` wildcards could match different rules.
  rule := input.config.ingress[_]
  rule.cidr == "0.0.0.0/0"
  rule.port == 22
  msg := "SSH must not be open to the world"
}
Now, instead of reminding people for the hundredth time not to expose SSH to the internet, we can simply fail the build. It’s calmer for everyone. The machine becomes the bad cop, and we keep our reputation as moderately pleasant colleagues.
There’s also a nice side effect here: policy as code creates evidence by default. If a rule runs on every change and blocks non-compliant configurations, we have a record of enforcement, not just a sentence in a handbook. For many teams, that’s the turning point where compliance starts becoming measurable rather than aspirational.
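The same intent can be smoke-tested outside OPA. As a sketch only (a Python rendition of the two Rego rules above, useful as a quick pre-commit check or as unit tests for the policy’s logic; the resource dict shape is assumed):

```python
def policy_violations(resource: dict) -> list:
    """Return deny messages for one resource, mirroring the Rego rules."""
    msgs = []
    config = resource.get("config", {})
    if resource.get("resource_type") == "aws_s3_bucket":
        if not config.get("server_side_encryption"):
            msgs.append("S3 bucket must have server-side encryption enabled")
    if resource.get("resource_type") == "aws_security_group":
        # Check cidr and port on the same ingress rule, not across rules.
        for rule in config.get("ingress", []):
            if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") == 22:
                msgs.append("SSH must not be open to the world")
    return msgs
```

Keeping a plain-language twin of each rule also helps during audits: it documents what the policy means without asking the auditor to read Rego.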
Continuous Monitoring Keeps Small Problems Small
A lot of compliance pain comes from discovering issues long after they’ve become habits. A stale account sits around for months. Backups fail quietly. A logging setting gets changed during a rushed deploy and nobody notices. By the time the audit or incident review arrives, we’re reconstructing history like amateur archaeologists.
Continuous monitoring is how we avoid that. We’re not trying to watch everything with equal intensity. We’re trying to monitor the controls that matter most and alert when they drift. That means checking privileged access, log retention, vulnerability status, encryption settings, backup success, and deployment activity. If a control can silently fail, it deserves a monitor.
This is also where teams should be careful not to confuse dashboards with assurance. A dashboard full of green circles is comforting, but it only helps if the underlying checks are meaningful and someone owns the response. A weekly access review alert that nobody reads is basically decorative art.
We like to define a handful of compliance-critical signals and route them like operational alerts. For example: “production admin role assigned outside break-glass process”, “critical vulnerability older than SLA”, or “audit logs disabled on a managed service”. These aren’t vanity metrics; they’re control health checks.
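To make the first of those signals concrete, here is a minimal sketch of a drift check, assuming we can export current admin role assignments and maintain a break-glass allow-list (both names and the paging target are hypothetical):

```python
# Hypothetical allow-list of identities permitted the prod admin role.
BREAK_GLASS = {"oncall-bot", "sre-lead"}

def drifted_admins(assigned: set) -> set:
    """Anyone holding the production admin role outside the
    break-glass allow-list counts as control drift."""
    return assigned - BREAK_GLASS

def route_alert(drift: set) -> str:
    """Route drift like an operational page, not a monthly report line."""
    if drift:
        return f"PAGE platform-oncall: unexpected prod admins {sorted(drift)}"
    return "OK"
```

The check itself is trivial; the compliance win is that it runs continuously and has an owner who gets paged, rather than surfacing months later in an audit sample.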
Useful reference material from NIST and cloud provider security centres can help shape the baseline, but the key is local ownership. Every control should have an owner, an expected state, and a response path. Once we do that, compliance monitoring stops being a monthly report and becomes part of normal service operations.
Audits Go Better When We Prepare Like Engineers
Nobody enjoys audits, but they become far less painful when we prepare for them the way we prepare for releases: with repeatable checklists, defined owners, and dry runs. The teams that suffer most are usually the ones treating every audit like a fresh existential crisis. We don’t recommend that approach. It’s tiring and terrible for morale.
A better model is to keep an audit-ready pack in light maintenance all year. That includes current policies, system inventories, architecture diagrams, access review records, incident summaries, vendor lists, and mapped evidence for each control. None of this needs to be fancy. It just needs to be current enough that we’re not inventing the company from scratch every time an auditor asks a question.
We’ve also found it useful to run internal mock audits. Pick a few controls, ask for evidence, and see how long it takes to produce. If the answer is “three people, six screenshots, and a prayer,” we’ve learned something valuable before an external auditor learns it for us. These rehearsals often expose weak ownership or hidden manual steps that looked fine on paper but collapse under time pressure.
One more thing: honesty beats theatre. If a control isn’t mature yet, say so, explain the compensating measures, and show the plan. Auditors are generally more comfortable with known gaps being actively managed than with polished stories that fall apart on inspection. Compliance isn’t a school play. We don’t need perfect props; we need credible operations.
Make Compliance A Team Sport, Not A Side Quest
The most sustainable compliance programs are social as much as technical. Tools matter, automation matters, evidence matters, but none of it sticks if compliance is seen as somebody else’s problem. When security owns the policies, platform owns the guardrails, engineering owns change quality, and managers own review cadence, things start to click. Shared responsibility may be a cliché, but unlike many clichés, this one actually pays rent.
That doesn’t mean everyone needs to become an auditor. It means each team should understand the few controls they directly influence and the evidence they generate. Developers should know why ticket references and peer review matter. Platform engineers should know why immutable logs and access boundaries matter. Managers should know why timely access reviews and onboarding records matter. When people understand the “why,” the process usually gets less brittle.
We should also be careful with tone. If compliance is introduced as a punishment system, people will route around it. If it’s framed as operational discipline that protects customers, simplifies audits, and reduces firefighting, adoption gets easier. Nobody loves extra process, but most teams do appreciate fewer surprises and cleaner systems.
In the end, compliance works best when it fades into the background. That’s not a glamorous finish, but it’s the right one. We want a delivery system where controls are normal, evidence is automatic, and audits are mildly inconvenient rather than spiritually exhausting. If we can get there, we’re doing very well indeed.


