
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:media="http://search.yahoo.com/mrss/"
>
  <channel>
    <atom:link href="https://maxdaten.io/rss.xml" rel="self" type="application/rss+xml" />
    <title>Jan-Philip Loos | maxdaten.io</title>
    <link>https://maxdaten.io</link>
    <description>Full-stack product engineering with knowledge transfer built in. 15+ years spanning product development, platform architecture, and technical leadership — from startup to 100M+ requests/day.</description>
    <image>
      <url>https://maxdaten.io/favicons/favicon-32x32.png</url>
      <title>Jan-Philip Loos | maxdaten.io</title>
      <link>https://maxdaten.io</link>
      <width>32</width>
      <height>32</height>
    </image>
    
        <item>
          <guid>https://maxdaten.io/2026-01-31-ship-your-toolchain-not-just-infrastructure</guid>
          <title>Ship Your Toolchain, Not Just Infrastructure</title>
          <description>Platform teams ship infrastructure but not the CLI tools developers need daily. devenv turns your toolchain into a declarative, version-controlled environment.</description>
          <link>https://maxdaten.io/2026-01-31-ship-your-toolchain-not-just-infrastructure</link>
          <pubDate>Sat, 31 Jan 2026 12:00:00 GMT</pubDate>
          <dc:creator>Jan-Philip Loos</dc:creator>
          <category>Platform Engineering</category><category>Continuous Delivery</category><category>Devenv</category><category>Infrastructure As Code</category>
          <content:encoded><![CDATA[
            <div style="margin: 50px 0; font-style: italic;">
              If anything looks wrong,
              <strong>
                <a href="https://maxdaten.io/2026-01-31-ship-your-toolchain-not-just-infrastructure">
                  read on the site!
                </a>
              </strong>
            </div>
<p><a href="https://tag-app-delivery.cncf.io/whitepapers/platforms/">Platform teams</a> deliver Terraform modules via registries or direct repository references, Kubernetes custom resources through the cluster, and internal UIs via deployment pipelines. But for their daily work, product teams need CLI tools and custom configurations. Those ship as wiki pages and Slack announcements.</p><blockquote>“Please update your kubectl.” ~ <em>In some Slack channel right now</em></blockquote><p><a href="https://devenv.sh">Devenv</a> closes that gap: it turns platform tooling into declarative, version-controlled environments that teams consume with a single command. It’s <a href="https://knolling.org/what-is-knolling">knolling</a>: by the platform team, for the product team.</p><p>The core problem is not only the delivery mechanism. Platform teams must also control which tool versions their consumers run: <code>kubectl</code>, the AWS CLI, OpenSSL. When the version is wrong, the friction becomes expensive.</p><h2>OpenSSL: Version Drift in the Field</h2><p>In a previous project, cluster access required hand-rolling certificate signing requests with OpenSSL and getting them signed by a Kubernetes admin. Scripts and docs existed, but the wrong OpenSSL version produced incompatible certs, a failure mode so common that the scripts even carried checks for it.</p><p>Another example: <code>kubectl</code> will complain if the <a href="https://kubernetes.io/releases/version-skew-policy/">version skew</a> between the client and the server is too large.</p><h2>Devenv: Nix Without the Sharp Edges</h2><p>I discovered <a href="https://nixos.org">Nix</a> in 2015 while founding Briends GmbH. Nix provides more than just a deterministic development shell. 
It has a strong ecosystem for providing a reproducible environment, but it has some sharp edges and a steep learning curve.</p><p>Built on top of Nix, devenv provides a declarative and specialized way to define and distribute complete development environments, including all the tools, scripts, and configurations that teams need.</p><p>Devenv allows platform engineering teams to:</p><ul><li><strong>Version-lock all tools</strong>: Ensure every developer uses the same version of <code>kubectl</code>, <code>terraform</code>, <code>aws-cli</code>, or any other tool, eliminating version-skew warnings and outright breakage</li><li><strong>Ship custom scripts and tooling</strong>: Distribute platform-specific scripts, helpers, and automation alongside the standard tooling</li><li><strong>Provide reproducible environments</strong>: Guarantee that what works on one machine works on all machines, largely regardless of the underlying operating system (Nix remains platform-dependent because it provides true native executables)</li></ul><p>Unlike Docker-based development environments that require running containers, devenv integrates directly into the developer’s shell, providing a native experience while maintaining reproducibility. The tools are available in the PATH just as if they were installed globally on the system, but they’re actually scoped to the active shell and version-controlled.</p><h2>Platform as a devenv Module</h2><p>Here’s a minimal platform environment:</p><pre><code>platform-env
├── devenv.nix
├── devenv.lock
└── modules
    ├── google-cloud.nix
    └── scripts
        └── gcp-costs.sh</code></pre><pre><code>{
  pkgs,
  lib,
  ...
}: {
  imports = [
    ./modules/google-cloud.nix # Parameterized optional configuration
  ];

  config = {
    packages = [ pkgs.k9s ]; # Always provided 
  };
}</code></pre><p>A devenv module can define <em>options</em> that <em>config</em>ure the effective devenv configuration, including specific packages and configuring services such as <a href="https://docs.cloud.google.com/sql/docs/postgres/sql-proxy">Cloud SQL Auth Proxy</a> or shell start-up tasks.</p><p>Below, we offer a <code>kubernetesNamespace</code> option for consumers, which will be used to set their namespace in their Kubernetes context on shell activation.</p><pre><code>{ pkgs, lib, config, ... }:
let
  cfg = config.google-cloud; # Alias 
  clusterName = &quot;your-platform-cluster&quot;;
  clusterRegion = &quot;europe-north1&quot;;
  clusterProjectId = &quot;gcpId01234&quot;;

  # Devenv provides a .devenv state directory as a persistence layer
  stateDirectory = &quot;${config.devenv.state}/google-cloud&quot;;
  # Isolate the platform Kubernetes configuration from the user-scoped one:
  # get-credentials stores credentials here, and the path is
  # exported as KUBECONFIG below
  kubernetesConfig = &quot;${stateDirectory}/kubeconfig.yaml&quot;;
in {
  # OPTIONS: What can be configured (the API)
  options.google-cloud = {
    enable = lib.mkEnableOption &quot;google-cloud&quot;;
    kubernetesNamespace = lib.mkOption {
      type = lib.types.str;
      description = &quot;Namespace of the consumer&quot;;
    };
  };

  # CONFIG: What happens when enabled
  config = lib.mkIf cfg.enable {
    packages = [
      pkgs.google-cloud-sdk
      pkgs.kubectl
      # Manages kubernetes auth with gcloud auth login credentials
      pkgs.gke-gcloud-auth-plugin
      # Additional tools you might use with your cluster
      pkgs.google-cloud-sql-proxy
      pkgs.kustomize
      pkgs.cmctl
      pkgs.kubernetes-helm # nixpkgs attribute for the helm CLI
    ];

    env = { 
      USE_GKE_GCLOUD_AUTH_PLUGIN = &quot;true&quot;;
      KUBECONFIG = kubernetesConfig;
    };

    tasks.&quot;google-cloud:get-kubernetes-credentials&quot; = {
      # gcloud will store credentials in KUBECONFIG
      # but `env` definition has no effect until devenv:enterShell
      exec = &apos;&apos;
        export KUBECONFIG=${kubernetesConfig}
        gcloud container clusters \
          get-credentials ${clusterName} \
          --region ${clusterRegion} \
          --project ${clusterProjectId}
        kubectl config set-context --current --namespace=${cfg.kubernetesNamespace}
      &apos;&apos;;
      # Run this task before the user is dropped into the shell
      before = [
        &quot;devenv:enterShell&quot;
      ];
    };

    # Include script for all consumers
    scripts.gcp-costs-analyzer.exec = ./scripts/gcp-costs.sh;
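
    # Illustrative addition, not part of the original module: devenv&apos;s
    # enterShell hook can confirm the active platform context on entry
    enterShell = &apos;&apos;
      echo &quot;platform-devenv active: ${clusterName} (namespace: ${cfg.kubernetesNamespace})&quot;
    &apos;&apos;;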
  };
}</code></pre><p>Additionally, this module provides essential tools like <code>kubectl</code>, whose version <code>devenv.lock</code> pins. The <code>gke-gcloud-auth-plugin</code> is particularly valuable; it provides IAM-based auth to the Kubernetes cluster with zero friction.</p><h2>Consumption</h2><p>Consuming teams install just two things: Nix and devenv. They don’t even need to understand the Nix language used by devenv. They can check out your infrastructure environment repository and drop their workspaces into this environment. If they decide to use devenv for their own projects as well, they can compose the platform devenv with theirs.</p><h3>Model A: Zero Nix Knowledge</h3><p>In this model, developers simply clone the platform repository and invoke devenv with command-line options. Consumers don&#x27;t need to write any Nix code or even understand the module system; they just pass configuration values as flags:</p><pre><code># Clone the platform devenv
git clone git@github.com:your-org/platform-devenv.git
cd platform-devenv

devenv shell \
  --option google-cloud.enable:bool true \
  --option google-cloud.kubernetesNamespace:string &quot;aperture-science&quot;
# Or auto-activate with a direnv setup (out of scope, see References)

# Done. All tools available.
kubectl version
helm version</code></pre><p>Now they can drop their project directories into <code>platform-devenv/workspace/</code>, which you can add to the <code>platform-devenv/.gitignore</code>. All their custom tooling can safely assume that the tools the platform requires and provides are available; no more defensive shell checks.</p><p>To shorten the <code>devenv shell</code> invocation even further, you can use the <a href="https://devenv.sh/integrations/dotenv/">.env integration</a>.</p><h3>Model B: Integrated into the Consumer’s Devenv Definition</h3><p>Teams already using devenv can import your modules into their own setup:</p><pre><code>inputs:
  platform-devenv:
    url: github:your-org/platform-devenv # SSH access has to be configured
    flake: false  # Import as source, not as flake
imports:
  - platform-devenv  # Imports all modules</code></pre><p>Then configure in their <code>devenv.nix</code>:</p><pre><code>{
  google-cloud = {
    enable = true;
    kubernetesNamespace = &quot;aperture-science&quot;;
  };
}</code></pre><p>This simplifies the <code>devenv shell</code> invocation and is the preferred way once many options need configuring.</p><h2>Caveats</h2><p>Several trade-offs come with this approach:</p><p><strong>Learning Curve for Platform Teams</strong>: While consumers don’t need deep Nix knowledge, the platform team maintaining the <code>devenv.nix</code> modules needs to understand the Nix language and devenv’s module system. Debugging in Nix can be challenging.</p><p><strong>Initial Setup Overhead</strong>: Developers need to install Nix and devenv on their machines, which can be a hurdle in organizations with strict security policies or locked-down systems.</p><p><strong>Build Performance</strong>: The first time a developer enters a devenv shell, Nix may need to download and build various dependencies, which can take significant time (1-20+ minutes) depending on the complexity of the environment. This can be mitigated through:</p><ul><li>Avoiding the bleeding-edge rolling package channel (called unstable) of <a href="https://github.com/NixOS/nixpkgs">nixpkgs</a>/devenv to utilize the official binary cache of the Nix ecosystem</li><li>Setting up S3-compatible storage that your organization controls for the binary cache, especially when using compile-intensive custom tools</li><li>Using caching services like Cachix (the company behind devenv)</li><li><em>Manually</em> transferring Nix stores via <a href="https://nix.dev/manual/nix/2.22/command-ref/nix-store/serve">SSH</a>, <a href="https://nix.dev/manual/nix/2.22/command-ref/nix-store/export">archive</a>, or other community tools to serve the binary cache</li></ul><p>For platform teams, investing in a shared binary cache is highly recommended to ensure developers aren’t repeatedly building the same packages.</p><h2>What Else Can You Ship?</h2><p>Ideas for what a platform team can ship beyond this example:</p><ul><li>Notify the user in the shell that a new <code>platform-devenv</code> version is 
available, or even auto-update</li><li>Integrate your own security &amp; compliance checks as <a href="https://devenv.sh/git-hooks/">git hooks</a> (e.g., via <a href="https://trivy.dev">trivy</a>)</li><li>Onboarding scripts that automate account creation or guide users through the manual setup (<a href="https://blog.danslimmon.com/2019/07/15/do-nothing-scripting-the-key-to-gradual-automation/">do-nothing script</a>)</li><li>Curate and inject coding-agent skills by <a href="https://devenv.sh/creating-files/">creating those files</a> on shell invocation</li></ul><h2>Conclusion</h2><p>The reproducible, declarative environment pays off: one definition replaces dozens of wiki pages, manual checklists, and missed announcements.</p><p>This setup allows the platform engineering team to treat their platform tooling as deliverable, version-controlled software that other teams can consume and configure. The direct configuration in a <code>devenv.nix</code> doesn&#x27;t have as steep a learning curve as a custom Nix setup.</p><h2>References</h2><ul><li>Example of a full <a href="https://github.com/maxdaten-io/platform-devenv">platform-devenv</a> (<a href="https://github.com/maxdaten-io/platform-devenv/tree/4c14a0452e9477bb15b1efc6b44eb31fc8cade03">revision at the time of writing</a>)</li><li>Automatic shell activation via <a href="https://devenv.sh/automatic-shell-activation/">direnv</a></li></ul>
          ]]></content:encoded>
          <media:thumbnail url="https://cdn.sanity.io/images/hvsy54ho/production/fc2da40091e36a989eb7a466539ab8147210524e-1408x768.jpg"/>
          <media:content medium="image" url="https://cdn.sanity.io/images/hvsy54ho/production/fc2da40091e36a989eb7a466539ab8147210524e-1408x768.jpg"/>
        </item>
      
        <item>
          <guid>https://maxdaten.io/2025-09-03-tdd-infrastructure-terragrunt</guid>
          <title>Test-Driven Infrastructure</title>
          <description>Bring TDD to your infrastructure. 
Use Terragrunt hooks and shell-native tests to catch failures early, boost confidence, and make every change safer.</description>
          <link>https://maxdaten.io/2025-09-03-tdd-infrastructure-terragrunt</link>
          <pubDate>Wed, 03 Sep 2025 20:00:21 GMT</pubDate>
          <dc:creator>Jan-Philip Loos</dc:creator>
          <category>Infrastructure As Code</category><category>Test Driven Development</category><category>Continuous Delivery</category><category>Design Pattern</category>
          <content:encoded><![CDATA[
            <div style="margin: 50px 0; font-style: italic;">
              If anything looks wrong,
              <strong>
                <a href="https://maxdaten.io/2025-09-03-tdd-infrastructure-terragrunt">
                  read on the site!
                </a>
              </strong>
            </div>
<blockquote><p>Most teams ship infrastructure without tests. That’s like writing application code with no CI and hoping for the best. Infrastructure is critical, complex, and fragile—but too often it’s left unchecked.</p><p>With Test-Driven Development (TDD), we can flip the script. Instead of praying our Terraform and IAM rules “just work,” we define what good looks like, write tests, and let automation keep us safe.</p></blockquote><h2>Why Test-Driven Development for Infrastructure?</h2><p>In 15 years of building systems, I’ve never seen a project with comprehensive automated infrastructure tests. That gap is dangerous. Infrastructure touches everything: networking, IAM, deployments, storage. When something breaks, it often breaks catastrophically.</p><p>TDD forces us to ask &quot;what does good look like?&quot; before we change anything. The payoff:</p><ul><li><strong>Clear outcomes</strong> – we know what success means</li><li><strong>Fast feedback</strong> – catch issues in seconds, not hours</li><li><strong>Safe changes</strong> – refactor without fear</li><li><strong>Living documentation</strong> – tests show how the system works</li><li><strong>Built-in troubleshooting</strong> – validation suite ready when things go wrong</li></ul><p>We test changes manually anyway. Why not automate them?</p><h2>The Lightweight TDD Pattern</h2><p>We don’t need heavyweight test frameworks. With Terragrunt hooks and bats, we can build lightweight, shell-native, and adaptable infrastructure tests.</p><p><strong>Key idea</strong>: Assert behavior at the boundaries. For example, don’t test whether an IAM role is attached—test whether the service account can actually upload to a bucket.</p><h2>Tool Stack</h2><p><a href="https://terragrunt.gruntwork.io/"><strong>Terragrunt</strong></a> orchestrates Terraform and provides execution hooks. We run tests immediately after infrastructure changes.</p><p><a href="https://bats-core.readthedocs.io/en/stable/"><strong>Bats</strong></a> is Bash-native testing. 
With <a href="https://github.com/bats-core/bats-detik">bats-detik</a>, we get natural-language assertions for Kubernetes. Call kubectl, helm, flux, gcloud, or aws directly—no abstraction layers.</p><p><strong>GitHub Actions</strong> runs everything consistently. <a href="https://github.com/dorny/test-reporter"><code>dorny/test-reporter</code></a> turns JUnit XML into clean GitHub reports.</p><h2>Test Layout Convention and Hooking Up Test Execution</h2><p>Keep it conventional:</p><ul><li>Place tests in a <code>tests</code> directory next to the module’s <code>terragrunt.hcl</code>.</li><li>Terragrunt’s <code>root.hcl</code> defines a hook that runs all tests of a module after <code>apply</code>.</li><li>If no tests exist, it simply warns.</li></ul><pre><code>terraform {
    after_hook &quot;tests&quot; {
        commands = [&quot;apply&quot;]
        execute = [
            &quot;bash&quot;, &quot;-c&quot;, &lt;&lt;EOF
        if [ -d tests ]; then
          mkdir -p test-results
          bats --report-formatter junit --output test-results tests/
        else
          echo &apos;⚠️ No tests found&apos;
        fi
      EOF
        ]
    }
}</code></pre><h2>Test-Writing Style</h2><p>Use Bats’ <code>setup_suite</code> to fetch cluster credentials once before running tests.</p><pre><code>#!/usr/bin/env bash
set -euo pipefail

function setup_suite() {
  tf_output_json=$(terragrunt output -json)

  PROJECT_ID=$(echo ${tf_output_json} | jq -r .platform_project.value.id)
  CLUSTER_NAME=$(echo ${tf_output_json} | jq -r .platform_cluster.value.name)
  CLUSTER_REGION=$(echo ${tf_output_json} | jq -r .platform_cluster.value.location)
  KUBECONFIG=~/.kube/config

  gcloud container clusters get-credentials &quot;${CLUSTER_NAME}&quot; \
    --region &quot;${CLUSTER_REGION}&quot; --project &quot;${PROJECT_ID}&quot;
  kubectl version
  export KUBECONFIG PROJECT_ID CLUSTER_NAME CLUSTER_REGION
}</code></pre><p>Next, an example test that validates the Flux installation on the cluster:</p><pre><code>#!/usr/bin/env bats

bats_load_library bats-support
bats_load_library bats-assert
bats_load_library bats-detik/detik.bash

DETIK_CLIENT_NAME=&quot;kubectl&quot;
DETIK_CLIENT_NAMESPACE=&quot;flux-system&quot;

@test &quot;Flux controllers are healthy&quot; {
  flux check
}

@test &quot;Flux Kustomization reconciled successfully&quot; {
  verify &quot;&apos;status.conditions[*].reason&apos; matches &apos;ReconciliationSucceeded&apos; for kustomization named &apos;flux-system&apos;&quot;
}

@test &quot;Given image automation is enabled, Then its CRDs are installed&quot; {
  for crd in \
    imagerepositories.image.toolkit.fluxcd.io \
    imagepolicies.image.toolkit.fluxcd.io \
    imageupdateautomations.image.toolkit.fluxcd.io
  do
    verify &quot;there is 1 crd named &apos;$crd&apos;&quot;
  done
}

@test &quot;When a managed label on flux-system namespace is tampered, Then Flux reconciles it back&quot; {
  kubectl label namespace flux-system drift-test=temporary --overwrite
  flux reconcile kustomization flux-system
  try &quot;at most 3 times every 1s \
      to get namespace named &apos;flux-system&apos; \
      and verify that &apos;metadata.labels.drift-test&apos; is &apos;&lt;none&gt;&apos;&quot;
}</code></pre><p>Assume the <code>cluster</code> directory is a Terragrunt-enabled Terraform module that provisions a Kubernetes cluster, with Flux bootstrapped via Terraform. The tests verify that the Flux controllers are healthy, that the Flux Kustomization reconciles successfully, that the image automation CRDs are installed, and that Flux can reconcile its own configuration.</p><blockquote>It&#x27;s strongly recommended to keep tests at this level as high-level as possible: focus on the desired behavior, not on concrete properties or state of the infrastructure. For example, if you need to attach an IAM Role to a principal, don&#x27;t validate the exact role presence. Instead, verify that the principal can perform its intended actions—for example, uploading to a bucket. This avoids brittle tests coupled to implementation details that break on minor changes like role composition.</blockquote><h2>CI Integration and Reporting</h2><p>Just a cherry on top: bats (and almost all other test runners) can report test results in JUnit XML or another <a href="https://github.com/dorny/test-reporter?tab=readme-ov-file#supported-formats">format supported by <code>dorny/test-reporter@v2</code></a>. This way, we can integrate the test results into our CI pipeline to provide a quick overview when failures and unexpected behavior occur. A short, trimmed-down example of how to integrate reporting:</p><pre><code>name: &apos;Infrastructure Apply&apos;
on:
    push:
        branches:
            - &apos;main&apos;
    workflow_dispatch:

env:
    TF_IN_AUTOMATION: &apos;true&apos;
    TG_NON_INTERACTIVE: &apos;true&apos;

permissions:
    # Minimal permissions required for reports
    contents: read
    actions: read
    checks: write

jobs:
    apply:
        name: &apos;Apply&apos;
        environment: &apos;infrastructure&apos;
        runs-on: ubuntu-latest
        concurrency:
            group: infrastructure
        steps:
            - uses: &apos;actions/checkout@v5&apos;

            # Install bats and bats-detik

            # Add validation and plan if needed

            - id: apply-all
              name: &apos;🚀 Apply All&apos;
              run: terragrunt apply --all

            - name: Test Report
              uses: dorny/test-reporter@v2
              if: ${{ !cancelled() }}
              with:
                  name: Foundation Apply Tests
                  path: &apos;**/test-results/report.xml&apos;
                  reporter: java-junit</code></pre><p>Result: a clean pass/fail report embedded in your GitHub Actions workflow.</p><h2>Pattern Summary</h2><ul><li><strong>Place</strong> tests in a <code>tests</code> directory at root of the Terraform module.</li><li><strong>Hook</strong> Terragrunt to run them automatically after <code>apply</code>.</li><li><strong>Write</strong> high-level behavior tests, not brittle state checks.</li><li><strong>Integrate</strong> results into CI for instant visibility.</li></ul><p>This pattern is lightweight, shell-native, and extends to any test runner. As a bonus, you build a validation suite that pinpoints infrastructure issues instantly. Every production incident becomes a new test case.</p>
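<p>The <code>after_hook</code> above buries its logic in a heredoc. As a sanity check, the same control flow can be lifted into plain shell and exercised without Terragrunt (directory names below are made up, and the actual <code>bats</code> call is stubbed out):</p>

```shell
#!/usr/bin/env bash
# Standalone sketch of the after_hook logic: run bats when a module has a
# tests/ directory, warn otherwise. The bats invocation is commented out so
# the control flow runs even where bats is not installed.
set -euo pipefail

run_module_tests() {
  local module_dir=$1
  if [ -d "$module_dir/tests" ]; then
    mkdir -p "$module_dir/test-results"
    # bats --report-formatter junit --output "$module_dir/test-results" "$module_dir/tests/"
    echo "running tests for $module_dir"
  else
    echo "⚠️ No tests found in $module_dir"
  fi
}

# Demo against a throwaway layout: one module with tests, one without
demo=$(mktemp -d)
mkdir -p "$demo/cluster/tests" "$demo/network"
run_module_tests "$demo/cluster"   # prints the running line
run_module_tests "$demo/network"   # prints the warning
```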
          ]]></content:encoded>
          <media:thumbnail url="https://cdn.sanity.io/images/hvsy54ho/production/c2e033ff0b803051cc88b48022814d645c131257-2374x1536.png"/>
          <media:content medium="image" url="https://cdn.sanity.io/images/hvsy54ho/production/c2e033ff0b803051cc88b48022814d645c131257-2374x1536.png"/>
        </item>
      
        <item>
          <guid>https://maxdaten.io/2025-08-09-your-continuous-delivery-transformation-is-not-complete</guid>
          <title>Your Continuous Delivery Transformation is Not Complete</title>
          <description>Only 10% of organizations actually practice continuous delivery well—are you one of them?</description>
          <link>https://maxdaten.io/2025-08-09-your-continuous-delivery-transformation-is-not-complete</link>
          <pubDate>Sat, 09 Aug 2025 20:00:21 GMT</pubDate>
          <dc:creator>Jan-Philip Loos</dc:creator>
          <category>Continuous Delivery</category><category>Software Development</category><category>Agile</category><category>Kanban</category><category>Productivity</category>
          <content:encoded><![CDATA[
            <div style="margin: 50px 0; font-style: italic;">
              If anything looks wrong,
              <strong>
                <a href="https://maxdaten.io/2025-08-09-your-continuous-delivery-transformation-is-not-complete">
                  read on the site!
                </a>
              </strong>
            </div>
            <p>We&#x27;ve come a long way, but most teams still practice only half of continuous delivery. The good news: many have solved the cultural basics—pipeline integrity, autonomous teams, and process discipline. The surprise: the latest <a href="https://continuous-delivery.co.uk/cd-assessment/index">State of Continuous Delivery in 2025</a> (<a href="https://continuous-delivery.co.uk/downloads/The%20State%20of%20CD%202025.pdf">PDF</a>) analyzed nearly 100 organizations and found that only 10% actually practice CD well—the true experts.</p><p>This is a short follow-up to my previous post, <a href="https://maxdaten.io/2025-07-26-check-engine-work-progress-limit-matters">Check Your Engine: Work‑In‑Progress Limits Matter</a>. Dave Farley’s new assessment, published days later, matches those observations with data.</p><h2>The Three Critical Gaps</h2><p>The report highlights three technical gaps that separate the 10% from everyone else:</p><ol><li><strong>Trunk‑based development</strong> — Many teams still branch like it’s 2005.<br/><em>Do this next:</em> Merge to main at least daily, use feature flags, delete long‑lived branches.</li><li><strong>Test automation</strong> — Manual gates and flaky tests throttle flow.<br/><em>Do this next:</em> Build a test pyramid, make tests deterministic, gate merges on a green build.</li><li><strong>End‑to‑end pipeline automation</strong> — Half‑automated isn’t automated.<br/><em>Do this next:</em> One path to production, one‑click deploys, versioned and repeatable environments.</li></ol><p>Teams that excel at trunk‑based development and test automation are the ones actually shipping continuously. 
If you struggle with one, you likely struggle with both.</p><h2>The 14 Essentials of Continuous Delivery</h2><p>From the report, these are the essentials:</p><ol><li>Releasability</li><li>Deployment pipeline</li><li><strong><em>Continuous integration</em></strong></li><li><strong><em>Trunk‑based development</em></strong></li><li>Small, autonomous teams</li><li>Informed decision‑making</li><li><strong><em>Small steps</em></strong></li><li><strong><em>Fast feedback</em></strong></li><li>Automated testing</li><li>Version control</li><li>One route to production</li><li>Traceability</li><li>Automated deployment</li><li>Observability</li></ol><p>Small steps and fast feedback are where a low work‑in‑progress (WIP) limit pays off. You only have true continuous integration when you work in small steps and synchronize via trunk‑based development. WIP limits protect feedback loops—you avoid flooding the system with changes that haven’t yet been validated by automation, tests, and observability once integrated with everyone else’s work.</p><blockquote>Change complexity grows exponentially with the number of concurrent changes.</blockquote><h2>The Real Question</h2><p>Are you in the 90% who think they practice continuous delivery—or the 10% who actually do?</p><p>The way forward isn’t fancier branching or heavier maintenance rituals. It’s upgrading the technical habits that make delivery continuous.</p><h2>How to Move Up</h2><ul><li>Merge to main daily, not the other way around; prefer feature flags over long‑lived branches.</li><li>Make the pipeline your product: every push builds, tests, and can deploy the same way, every time.</li><li>Keep tests reliable: target ≤1% flake rate; quarantine and fix flakes within a day.</li><li>Limit WIP: set explicit team WIP limits; aim for ≤1‑day PR cycle time.</li><li>Measure what matters: lead time for changes, deployment frequency, change‑fail rate, and MTTR.</li></ul>
          ]]></content:encoded>
          <media:thumbnail url="https://cdn.sanity.io/images/hvsy54ho/production/080a740d9aecbe73e55b815591baf76c170cd9b8-1536x1024.png"/>
          <media:content medium="image" url="https://cdn.sanity.io/images/hvsy54ho/production/080a740d9aecbe73e55b815591baf76c170cd9b8-1536x1024.png"/>
        </item>
      
        <item>
          <guid>https://maxdaten.io/2025-07-26-check-engine-work-progress-limit-matters</guid>
          <title>Check your Engine: Work In Progress Limit Matters</title>
          <description>Being busy is not inherently productive. Why limiting Work In Progress (WIP) is a best-practice for improving a development team’s effectiveness and indicator of process health.</description>
          <link>https://maxdaten.io/2025-07-26-check-engine-work-progress-limit-matters</link>
          <pubDate>Sat, 26 Jul 2025 01:52:21 GMT</pubDate>
          <dc:creator>Jan-Philip Loos</dc:creator>
          <category>Agile</category><category>Kanban</category><category>Productivity</category><category>Software Development</category>
          <content:encoded><![CDATA[
            <div style="margin: 50px 0; font-style: italic;">
              If anything looks wrong,
              <strong>
                <a href="https://maxdaten.io/2025-07-26-check-engine-work-progress-limit-matters">
                  read on the site!
                </a>
              </strong>
            </div>
<p>In recent years, I have worked on projects where keeping developers busy was a primary rule. Understandably, developers are expensive. Keeping them busy is one way of getting the most value out of them, right?</p><p>This is not only a common management view; often developers themselves are eager to stay busy. For a freelancer, being busy is billable—spinning in your chair while waiting for a review doesn&#x27;t pay the bills. As a developer, it&#x27;s easy to keep yourself busy and signal this to others, including management. You are there to work, so you pull a new ticket and report your progress in the daily stand-up while your other ticket is waiting for a review or some other feedback. But is that really productive? Being busy is not inherently productive and can even be counterproductive, causing more harm than value.</p><h2>Keep Pushing Down the Line</h2><p>A common pattern I&#x27;ve observed is the decoupling of ongoing developer work. The result is often an attempt to optimize the number of tickets in progress—not by decreasing the count, but by increasing it. This isn&#x27;t a willful act, but rather the result of a sloppy habit gaining the upper hand.</p><blockquote>DE: Das Gegenteil von gut ist gut gemeint&nbsp; EN: The opposite of good is well-intentioned&nbsp; <em>– German proverb</em></blockquote><p>In an attempt to be productive and valuable, developers can harm the project by continually &quot;pushing down the line.&quot; Your implementation is done but requires approval. Why not stay busy in the meantime by starting the next work package? It has to be done anyway! What’s the alternative, spinning in your chair? Is there any problem with interleaving work to maximize throughput?</p><p>There are a lot of problems with this approach of unintentionally increasing the amount of work in progress. 
Most of them have been common sense for a long time, but to underline my conclusion, let&#x27;s examine a few.</p><h2>The Problem of Decoupled Work</h2><p>To be fair, there are projects where decoupling work is the only way forward, for example, in a multi-timezone or decentralized open-source project. But often enough, developers work more traditionally together in a company setting. Sure, fully remote teams are more common now, but it&#x27;s also easier than ever to collaborate with live interactions. In short: there is often no reason not to have direct and ongoing interactions between developers and business people.</p><blockquote>4. Business people and developers must work together daily throughout the project.&nbsp; <em>– Agile Manifesto</em></blockquote><p>From a developer&#x27;s perspective, there shouldn&#x27;t be a &quot;personal&quot; ticket. Nothing a developer starts should be worked on exclusively. It&#x27;s harmful and unproductive to have five developers working on five different tickets simultaneously because it hinders direct interaction and raises communication overhead exponentially. In this scenario, when one developer needs help, they have to contact and onboard at least one other developer. This involves context switching, introduces delays, and often results in suboptimal support. Sometimes you have to call in a third developer, and so on.</p><p>It&#x27;s more effective to work together on one ticket. Studies indicate that pairing or ensemble programming leads to higher quality code <a href="https://ps.ipd.kit.edu/downloads/ka_2003_analyzing_cost_benefit_pair_programming.pdf">1</a>,<a href="https://nrc-publications.canada.ca/eng/view/accepted/?id=fa72ee73-13b7-41db-9d23-9928b9618ff1">2</a>.</p><blockquote>6. 
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.&nbsp; <em>– Agile Manifesto</em></blockquote><p>If this is your normal modus operandi, you tend to have the ideal information flow within your team. Are you able to work efficiently on multiple topics at the same time? Probably not. Why should a development team be any different, if parallelizing work is ill-advised and considered harmful? It should also be common sense that working together to find an excellent solution is better for the health of a project than working in isolation. A team should share the same goals, the same understanding of problems, and their solutions—so why shouldn&#x27;t they find those solutions in a shared effort, too?</p><h2>Cost of Delay &amp; Missing Early Feedback</h2><blockquote>1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.&nbsp; <em>– Agile Manifesto</em></blockquote><p>Being agile is all about getting feedback as early as possible and acting on it. This enables continuous value delivery, which is not only a selling point for management but also raises the self-efficacy and well-being of the development team when they see they can deliver value quickly and with less friction.</p><blockquote>3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale.&nbsp; <em>– Agile Manifesto</em></blockquote><p>We&#x27;ve come a long way; it&#x27;s now more common to have daily releases than releases every three years. The implementation may not always be perfect, but the common understanding is that shorter release cycles are better than longer ones.</p><p>Abandoning a ticket, even for half a day while waiting for a review, carries the same problems as a three-year release cycle, albeit on a different scale. 
The problem isn&#x27;t negligible just because things were worse in the past. We came this far because we know that early feedback is less costly than delayed feedback. The same is true when a change is deployed one day later than optimal. The entire <a href="http://www.extremeprogramming.org/introduction.html">development feedback loop</a> is about gaining feedback as early as possible: from linting your code and writing tests to pairing with colleagues, deploying to production, and actually seeing customers use your change. Observe and adapt. This isn&#x27;t just about a customer complaining about a bug (which they often don&#x27;t), but also about gaining feedback that your change, fully integrated, has not introduced unintended behavior.</p><p>When this feedback is delayed, the &quot;observe and adapt&quot; cycle is postponed to a phase where the change is less present in your mental context. You might run into the same category of problems as the infamous &quot;big bang&quot; releases, especially if your team resolves a congested board the next day. The problem is even bigger with an anti-pattern like late-integrating branches, because all changes are integrated as late as possible, not as soon as possible.</p><h2>Advocating for a Hard Work In Progress Limit</h2><blockquote>Work In Progress (WIP) limits are fundamental constraints that cap the maximum number of tasks actively being worked on at any given time in software development processes. These limits serve as critical tools for optimizing team productivity, improving software quality, and ensuring sustainable workflow management in agile environments.&nbsp; ~ Perplexity</blockquote><p>What is the solution™ to the problem of parallelized work? A Work In Progress (WIP) limit is not a silver bullet or even a solution in itself. <strong>It&#x27;s an indicator that the process has a defect.</strong> The line is congested, the pipe is clogged, your engine has a problem, and the check engine light is blinking. 
A WIP limit is a simple but effective metric that is easy to maintain and understand. Like other metrics, it doesn&#x27;t solve problems; it makes them transparent. A hard WIP limit is an artificial barrier on an otherwise unlimited resource (unless you are working with a physical board).</p><p>The WIP limit for your ongoing work is the check engine light for your process. It doesn&#x27;t point to a specific problem—it&#x27;s not an error code—it just indicates that something should be discussed and improved. Because it is so often ignored or not even considered, I see a WIP limit as more important than the unmotivated Retrospective rituals I have often attended.</p><p><a href="https://dora.dev/capabilities/wip-limits/">DORA</a> suggests keeping the WIP limit as small as possible—small enough that you actually have to work to stay within it. It then automatically ceases to be just the next dogmatic ritual. It won&#x27;t work as the next metric you have to game, like story points for sprint velocity. If you don&#x27;t treat WIP as a dogmatic rule but understand the motivation behind it, you will start asking the correct and important questions. You will be forced to challenge the common &quot;this is how we work.&quot;</p><ul><li>Do you really need a decoupled code review process?</li><li>Why don&#x27;t we deploy on Fridays? Are we collecting tickets for Monday?</li><li>Why do we spend so much time in planning when a ticket still gets stuck waiting for feedback from domain experts? Can we integrate them better into our process?</li><li>Should we deploy behind feature flags?</li><li>Are our increments too big?</li><li>Are we embracing active knowledge sharing, or are we misaligning our skills by decoupling our work?</li><li>Do we have a bottleneck in the team because only one person can solve a problem or review a change?</li><li>Where can I help to finish something?</li></ul><h2>The Rules Aren&#x27;t The Rules. 
They Are Questions in Disguise.</h2><p>Despite all the recent fuss about agile and the decline of Scrum, a fundamental understanding of its principles and rituals is that they are meant to start discussions. Much of the Scrum framework is just a vehicle for focused conversation. The WIP limit does its part. Instead of pushing work down the line, fix the congested conveyor belt.</p><p>For this, you need interaction within the team and probably with those outside of it. This brings actual value to your daily routine: instead of reporting progress, you start discussing how to solve actual problems. Starting the next ticket while your previous one waits is just avoiding an important chance to challenge your team&#x27;s productivity. Conflict aversion doesn&#x27;t resolve the underlying reasons for problems. And unsolved problems tend to grow in importance, so it&#x27;s better to tackle them early than during an incident. If you feel comfortable deploying on Fridays because you are in a position to deploy anytime, then pushing out an emergency fix to solve a Friday incident becomes routine.</p><p>Because of Goodhart&#x27;s law, a metric shouldn&#x27;t be a goal. The WIP limit is hard to game, which makes it a valuable metric. A WIP limit becomes very annoying if it&#x27;s considered merely dogmatic. The more exceptions to the rule you allow, the less valuable the metric becomes, because you are just avoiding the discovery of the underlying problem. You can&#x27;t keep ringing an alarm without devaluing its purpose. So, it&#x27;s better to consider the WIP limit a hard limit the team is not allowed to raise or cross. Only by feeling the pain of a scarce resource do you learn to use it efficiently. 
Treat a limit that has been hit as a blocker, so the team actively coordinates and works on resolving the impediment.</p><h2>References</h2><ul><li><a href="https://productdeveloper.net/little-law/">Little&#x27;s Law: Improving Lead Time by Reducing WIP</a></li><li><a href="https://cutlefish.substack.com/p/tbm-4052-why-limiting-wip-starting?s=r">Being Less Busy and Working together is SO HARD</a></li></ul>
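<p>To make the cost of piled-up work concrete, here is a back-of-the-envelope application of Little&#x27;s Law (linked in the references above); the numbers are invented purely for illustration:</p><pre><code>Little&#x27;s Law:  L = λ × W   (WIP = throughput × lead time)

WIP (L):            10 tickets in progress
Throughput (λ):      2 tickets finished per day
Lead time (W):       L / λ = 10 / 2 = 5 days on average

Halving WIP to 5 tickets at the same throughput
halves the average lead time to 2.5 days.</code></pre>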
          ]]></content:encoded>
          <media:thumbnail url="https://cdn.sanity.io/images/hvsy54ho/production/c8dfe669aa00323bfbd105fb7c551ce8a88d2260-1280x896.png"/>
          <media:content medium="image" url="https://cdn.sanity.io/images/hvsy54ho/production/c8dfe669aa00323bfbd105fb7c551ce8a88d2260-1280x896.png"/>
        </item>
      
        <item>
          <guid>https://maxdaten.io/00-uses</guid>
          <title>My 2025 Developer Tech Stack: Tools for DevOps &amp; Productivity</title>
          <description>Explore the complete 2025 tech stack I use for DevOps consulting and software development. A deep dive into my favorite tools, from Nix and Kubernetes to Zed and SvelteKit.</description>
          <link>https://maxdaten.io/00-uses</link>
          <pubDate>Tue, 01 Jul 2025 05:00:00 GMT</pubDate>
          <dc:creator>Jan-Philip Loos</dc:creator>
          <category>Development</category><category>DevOps</category><category>Productivity</category><category>Tools</category>
          <content:encoded><![CDATA[
            <div style="margin: 50px 0; font-style: italic;">
              If anything looks wrong,
              <strong>
                <a href="https://maxdaten.io/00-uses">
                  read on the site!
                </a>
              </strong>
            </div>
<p>In this post, I provide a comprehensive overview of the software development tools and technologies that form my core DevOps toolkit. This is the tech stack I rely on daily as a software consultant, refined over years of building complex systems. You&#x27;ll find everything from my development environment and infrastructure choices to the hardware and productivity apps that keep me efficient.</p><h2>Core Software Development Environment</h2><h3>Editor &amp; Terminal</h3><ul><li><a href="https://zed.dev/"><strong>Zed</strong></a> - Code editor for its speed, AI assistance and collaborative features</li><li><a href="https://www.jetbrains.com/"><strong>JetBrains IDEs</strong></a> - IntelliJ IDEA, WebStorm, and other language-specific IDEs for complex projects</li><li><a href="https://mitchellh.com/ghostty"><strong>Ghostty</strong></a> - Fast, feature-rich terminal emulator</li><li><a href="https://fishshell.com/"><strong>Fish</strong></a> - Smart and user-friendly command line shell with excellent autocompletion</li></ul><h3>Project Environment Management</h3><ul><li><a href="https://nixos.org/"><strong>Nix</strong></a> - Reproducible package management and system configuration</li><li><a href="https://devenv.sh/"><strong>devenv</strong></a> - Developer environments with Nix for per-project reproducible setups, topic for an upcoming post about how I set up project workspaces with devenv</li><li><a href="https://direnv.net/"><strong>direnv</strong></a> - Automatically loads and unloads environment variables based on directory</li></ul><h3>Languages &amp; Runtimes</h3><ul><li><a href="https://www.haskell.org/"><strong>Haskell</strong></a> - Primary functional programming language, especially for complex business logic</li><li><a href="https://kotlinlang.org/"><strong>Kotlin</strong></a> - Modern JVM language for Android development and backend services, strong eDSL capabilities, providing quiz-buzz backend</li><li><a href="https://svelte.dev"><strong>Svelte &amp; SvelteKit</strong></a> - Fueling this blog and quiz-buzz web frontend</li><li><a href="https://www.typescriptlang.org/"><strong>TypeScript</strong></a> - For full-stack web development and tooling</li><li><a href="https://www.python.org/"><strong>Python</strong></a> - Automation, data processing, and rapid prototyping</li><li><a href="https://www.scala-lang.org/"><strong>Scala</strong></a> - Functional programming on the JVM for data processing and distributed systems</li><li><a href="https://www.oracle.com/java/"><strong>Java</strong></a> - Enterprise applications and Spring-based microservices</li></ul><h2>My Go-To Infrastructure &amp; DevOps Toolkit</h2><h3>Cloud Platforms</h3><ul><li><a href="https://cloud.google.com/"><strong>Google Cloud Platform</strong></a> - Primary cloud provider for most client projects</li><li><a href="https://cloud.google.com/kubernetes-engine"><strong>Google Kubernetes Engine (GKE)</strong></a> - Managed Kubernetes for container orchestration</li><li><a href="https://cloud.google.com/storage"><strong>Google Cloud Storage</strong></a> - Object storage and backup solutions</li></ul><h3>Infrastructure as Code</h3><ul><li><a href="https://nixos.org/"><strong>NixOS</strong></a> - Declarative system configuration and reproducible deployments</li><li><a href="https://www.terraform.io/"><strong>Terraform</strong></a> - Multi-cloud infrastructure provisioning</li><li><a href="https://helm.sh/"><strong>Helm</strong></a> - Kubernetes package management</li><li><a href="https://kustomize.io/"><strong>Kustomize</strong></a> - Kubernetes configuration management</li></ul><h3>CI/CD &amp; Automation</h3><ul><li><a href="https://github.com/features/actions"><strong>GitHub Actions</strong></a> - Primary CI/CD platform</li><li><a href="https://fluxcd.io/"><strong>Flux CD</strong></a> - GitOps toolkit for Kubernetes deployments and continuous delivery</li></ul><h3>Monitoring &amp; Observability</h3><ul><li><a href="https://prometheus.io/"><strong>Prometheus</strong></a> - Metrics collection and alerting, Google Cloud Managed Service for Prometheus for my own cluster</li><li><a href="https://grafana.com/"><strong>Grafana</strong></a> - Visualization and dashboards</li><li><a href="https://opentracing.io/"><strong>OpenTracing</strong></a> - Vendor-neutral distributed tracing standard and instrumentation</li></ul><h2>Security &amp; Secrets Management</h2><ul><li><a href="https://github.com/mozilla/sops"><strong>SOPS</strong></a> - Secrets encryption with KMS integration</li><li><a href="https://cert-manager.io/"><strong>cert-manager</strong></a> - Automated TLS certificate management</li></ul><h2>Development Tools</h2><h3>Version Control &amp; Collaboration</h3><ul><li><a href="https://git-scm.com/"><strong>Git</strong></a> - Preferably with trunk-based development supported by a strong CI</li><li><a href="https://github.com/"><strong>GitHub</strong></a> - Primary code hosting and collaboration platform</li><li><a href="https://www.conventionalcommits.org/"><strong>Conventional Commits</strong></a> - Standardized commit message format</li></ul><h3>Local Development</h3><ul><li><a href="https://www.docker.com/"><strong>Docker</strong></a> - Containerization for development and testing</li><li><a href="https://docs.docker.com/compose/"><strong>Docker Compose</strong></a> - Multi-container application orchestration</li><li><a href="https://www.telepresence.io/"><strong>Telepresence</strong></a> - Local development against remote Kubernetes clusters</li></ul><h3>API Development &amp; Testing</h3><ul><li><a href="https://www.postman.com/"><strong>Postman</strong></a> - API development and testing</li><li><a href="https://curl.se/"><strong>curl</strong></a> - Command-line HTTP client</li><li><a href="https://httpie.io/"><strong>HTTPie</strong></a> - User-friendly HTTP client</li><li><a href="https://www.openapis.org/"><strong>OpenAPI</strong></a> - API specification and documentation</li></ul><h2>My Hardware Setup for Development and Local AI</h2><h3>Computing</h3><ul><li><strong>MacBook Pro 16&quot; (Apple M4 Max, 128 GB)</strong> - Primary development machine</li><li><strong>2 External 4K Monitors</strong> - Extended workspace for productivity</li><li><strong>GeForce RTX 5090, Ryzen 7 9800X3D, 64GB RAM</strong> - Dual-boot machine for local AI development</li></ul><h3>Accessories</h3><ul><li><strong>AirPods Pro Gen 2</strong> - Focus during deep work sessions and online calls</li></ul><h2>Productivity Setup &amp; Communication</h2><h3>Organization</h3><ul><li><a href="https://www.notion.so/"><strong>Notion</strong></a> - Collecting project ideas, organizing my reading list</li><li><a href="https://calendly.com/"><strong>Calendly</strong></a> - Client consultation meeting scheduling</li><li><a href="https://www.apptorium.com/sidenotes"><strong>SideNotes</strong></a> - Quick note-taking and task management in the sidebar</li><li><a href="https://www.raycast.com"><strong>Raycast</strong></a> - Quick querying of local Ollama models, looking up Nix packages, hyperkey shortcuts for quick launching of tools, etc.</li></ul><h3>Communication</h3><ul><li><a href="https://slack.com/"><strong>Slack</strong></a> - Team communication and client coordination</li><li><a href="https://zoom.us/"><strong>Zoom</strong></a> - Video conferencing for client meetings</li><li><a href="https://discord.com/"><strong>Discord</strong></a> - Private and professional communication, channel management and engagement</li></ul><h2>Learning &amp; Resources</h2><h3>Documentation &amp; Reference</h3><ul><li><a href="https://kubernetes.io/docs/"><strong>Kubernetes Documentation</strong></a> - Official K8s reference</li><li><a href="https://cloud.google.com/docs"><strong>Google Cloud Documentation</strong></a> - GCP service references</li><li><a href="https://nixos.org/manual/nixos/stable/"><strong>NixOS Manual</strong></a> - System configuration guidance</li><li><a href="https://noogle.dev"><strong>Noogle</strong></a> - Finding functions and implementations in Nix</li></ul><h3>Continuous Learning</h3><ul><li><a href="https://news.ycombinator.com/"><strong>Hacker News</strong></a> - Tech industry news and discussions</li><li><a href="https://reddit.com/r/devops"><strong>Reddit r/devops</strong></a> - Community discussions</li><li><a href="https://landscape.cncf.io/"><strong>CNCF Landscape</strong></a> - Cloud-native ecosystem overview</li></ul><p>This entire developer tech stack is constantly evolving, but it currently provides the power and flexibility needed to tackle modern software and infrastructure challenges. I hope this look into my DevOps toolkit gives you some new ideas for your own workflow.</p>
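<p>Since devenv keeps coming up in this list, here is a minimal, illustrative <code>devenv.nix</code>; the package selection is an assumption for the example, not my exact configuration:</p><pre><code># devenv.nix: declares the project toolchain as code
{ pkgs, ... }: {
    # CLI tools pinned for everyone entering the project directory
    packages = [ pkgs.kubectl pkgs.terraform ];

    # language integrations ship with sensible defaults
    languages.python.enable = true;
}</code></pre><p>Paired with direnv, this environment loads and unloads automatically as you enter and leave the project directory.</p>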
          ]]></content:encoded>
          <media:thumbnail url="https://cdn.sanity.io/images/hvsy54ho/production/8c527d60a478f4caed14c656d9d13cd0da80e355-2876x1964.png"/>
          <media:content medium="image" url="https://cdn.sanity.io/images/hvsy54ho/production/8c527d60a478f4caed14c656d9d13cd0da80e355-2876x1964.png"/>
        </item>
      
        <item>
          <guid>https://maxdaten.io/2024-05-15-telepresence-google-cloud-kubernetes-engine-gke</guid>
          <title>Telepresence with Google Cloud Kubernetes Engine (GKE)</title>
          <description>How to use Telepresence with GKE &amp; NEGs, focusing on health check challenges and providing two methods enabling fast local development &amp; debugging cycles.</description>
          <link>https://maxdaten.io/2024-05-15-telepresence-google-cloud-kubernetes-engine-gke</link>
          <pubDate>Wed, 15 May 2024 23:05:18 GMT</pubDate>
          <dc:creator>Jan-Philip Loos</dc:creator>
          <category>Google Cloud</category><category>Kubernetes</category><category>Telepresence</category>
          <content:encoded><![CDATA[
            <div style="margin: 50px 0; font-style: italic;">
              If anything looks wrong,
              <strong>
                <a href="https://maxdaten.io/2024-05-15-telepresence-google-cloud-kubernetes-engine-gke">
                  read on the site!
                </a>
              </strong>
            </div>
<p>In my current project <a href="https://qwiz.buzz">Qwiz&#x27;n&#x27;Buzz</a>, we are actively working on a Discord integration as a <a href="https://discord.com/developers/docs/activities/overview">Discord Activity</a>. For the sake of user protection, Discord uses a proxy as a middleman for requests to our services. Additionally, the Discord SDK relies on your application being embedded in an iframe provided by Discord. This brings challenges for fast local development cycles.</p><p>To test the integration locally, <a href="https://discord.com/developers/docs/activities/building-an-activity#step-4-running-your-app-locally-in-discord">Discord suggests</a> <a href="https://github.com/cloudflare/cloudflared"><code>cloudflared</code></a> to tunnel the local service to a public endpoint. Unless you are on a paid plan, the endpoint URL is ephemeral and changes between restarts, so you have to update the <a href="https://discord.com/developers/docs/activities/building-an-activity#set-up-your-activity-url-mapping">Discord Activity URL Mapping settings</a> every time you restart the tunnel.</p><blockquote><p>hours daily managing tunnel endpoints and updating Discord configurations, reducing actual development time by 30% and causing significant frustration across our 4-person development team.</p></blockquote><h2>Telepresence</h2><p>This is where I remembered <a href="https://www.telepresence.io/">Telepresence</a>. Telepresence allows you to proxy a local development environment into a remote Kubernetes cluster. This enables you to test and debug services within the context of the full system without deploying the service to the cluster. 
This way, we can provision stable development domains and cluster infrastructure and iterate quickly on the Discord integration locally.</p><blockquote><p>by 400%, reducing feedback cycles from 10-15 minutes to 2-3 minutes, and eliminating the daily configuration overhead entirely.</p></blockquote><p>Telepresence offers two ways to redirect traffic from a Kubernetes service to your local machine. The first <a href="https://www.getambassador.io/docs/telepresence/latest/reference/intercepts/cli#replacing-a-running-workload">replaces</a> the service-backing pod with a Telepresence pod that forwards traffic to your local machine. The second adds a sidecar container (<code>traffic-agent</code>) to the service-backing pod that forwards traffic to your local machine. The sidecar pattern is the <em>default behavior</em> and the one I will focus on in this post.</p><p>Telepresence installs the sidecar in the service-backing pod (e.g., provided by a <code>Deployment</code>) and renames the original container port, while the sidecar takes over the original port.</p><h2>Google Kubernetes Engine (GKE) and Network Endpoint Groups (NEGs)</h2><p>While I have used Telepresence in the past, I had some challenges using it with our Google Kubernetes Engine (GKE, a managed Kubernetes cluster), which I pinpointed to the Network Endpoint Groups (NEGs) Google Cloud offers as a performant, managed load-balancing solution built on Google Cloud&#x27;s network infrastructure. NEGs require health checks to ensure that traffic is only routed to healthy pods. These aren&#x27;t optional, and their Kubernetes configuration is limited to <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-configuration#direct_health">HTTP, HTTPS, and HTTP/2</a>. 
The ingress load balancer backed by NEGs is configured automatically by Google Cloud by scanning the relevant Service and Pod resources in GKE, but it can also be customized manually via the <code>BackendConfig</code> <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-configuration#configuring_ingress_features_through_backendconfig_parameters">resource</a>.</p><p>Without special considerations, this creates a chicken-and-egg problem. Telepresence injects a sidecar container into the service-backing pod to forward traffic to your local machine, but no traffic is routed to the sidecar container as long as the NEG health check fails. Since the NEG health checks aren&#x27;t optional and TCP health checks are not supported, we need to find a way to satisfy the health checks while using Telepresence.</p><h3>Strategy 1: Utilizing a Sidecar for Health Checks</h3><p>One strategy is to satisfy the NEG health check with an additional sidecar. This sidecar container serves a simple HTTP server that responds to the health check on the sidecar&#x27;s port.</p><p>1. <strong>Implement a Sidecar Container</strong>: Deploy a lightweight sidecar container alongside your main application container within the same pod. This sidecar serves a simple HTTP server that responds to the health check requests from the NEG.</p><pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
    name: my-app
spec:
    selector:
        matchLabels:
            app: my-app
    template:
        metadata:
            labels:
                app: my-app
        spec:
            containers:
                - name: my-app
                  image: my-app:latest
                  ports:
                      - containerPort: 80
                        name: http
                - name: healthz
                  image: nginx:latest
                  # nginx listens on port 80 by default; a custom
                  # config is needed to serve plain HTTP 200 on 8080
                  ports:
                      - containerPort: 8080
                        name: healthz</code></pre><p>2. <strong>Configure Health Checks</strong>: Point the NEG’s health check configuration to the port exposed by the sidecar. This ensures that the health check passes as long as the sidecar is running, regardless of whether Telepresence is currently intercepting the main service’s traffic.</p><pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
    name: my-backend-config
spec:
    healthCheck:
        type: HTTP
        port: 8080
        requestPath: /
---
apiVersion: v1
kind: Service
metadata:
    name: my-app
    annotations:
        cloud.google.com/neg: &apos;{&quot;ingress&quot;: true}&apos;
        cloud.google.com/app-protocols: &apos;{&quot;backend&quot;:&quot;HTTP&quot;}&apos;
        cloud.google.com/backend-config: &apos;{&quot;default&quot;:&quot;my-backend-config&quot;}&apos; # Reference to the BackendConfig
spec:
    type: ClusterIP
    selector:
        app: my-app
    ports:
        - protocol: TCP
          name: http
          port: 80
          targetPort: http</code></pre><p>With the sidecar handling health checks, you can use Telepresence to intercept the main service’s traffic without affecting the pod&#x27;s health status in the eyes of the NEG.</p><h3>Strategy 2: Dedicated Health Check Port on the Application</h3><p>Another approach is to expose a dedicated health check port directly in the application you want to intercept. This method involves changes in the application code and can be set up as follows:</p><p>1. <strong>Expose an Additional Port</strong>: Modify your service’s deployment to include an additional port that serves HTTP health checks. This port should be separate from the main service port. Minor code changes may be required to support the new health check port.</p><pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
    name: my-app
spec:
    template:
        spec:
            containers:
            - name: my-app
              image: my-app:latest
              ports:
                  - containerPort: 8080
                    name: http
                  - containerPort: 8081
                    name: healthz</code></pre><p>2. <strong>Update Service and NEG Configuration</strong>: Adjust the service and NEG configuration to recognize the new port specifically for health checks.</p><pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
    name: my-backend-config
spec:
    healthCheck:
        type: HTTP
        port: 8081
        requestPath: /
---
# Service configuration as before</code></pre><p>As long as you&#x27;re not using the replacement mode, Telepresence will not interfere with the health check port, and the NEG will continue to route traffic to the pod as long as the health check endpoint is healthy.</p><h3>Benefits and Considerations</h3><p>Both strategies ensure that the NEG&#x27;s requirements for health checks are met while providing flexibility in debugging and developing applications using Telepresence. However, each approach has its considerations:</p><ul><li><strong>Sidecar Approach</strong>: This method increases resource usage slightly due to the additional container, but keeps the health check logic separate from the main application code.</li><li><strong>Dedicated Port Approach</strong>: This method is simpler on the manifest side and avoids the additional resources required by an extra sidecar, but it requires modifications to the application code to support an additional HTTP server for health checks.</li></ul><h3>Conclusion</h3><p>Now, we can utilize a custom, stable subdomain for our preview Discord activity in Discord&#x27;s <a href="https://discord.com/developers/docs/activities/development-guides#url-mapping">URL Mapping</a> setting and intercept traffic at any time without any manual reconfiguration on the Discord side.</p><h2>Business Impact &amp; Results</h2><blockquote><p>to our development workflow, eliminating manual overhead and accelerating our Discord integration development.</p></blockquote><h3>Development Efficiency</h3><ul><li><strong>Configuration overhead</strong>: Eliminated 100% of manual Discord URL reconfiguration</li><li><strong>Development cycle time</strong>: Reduced from 10-15 minutes to 2-3 minutes (400% improvement)</li><li><strong>Daily productivity</strong>: Recovered 2-3 hours per day previously lost to tunnel management</li><li><strong>Developer satisfaction</strong>: Eliminated frustration from ephemeral endpoint 
management</li></ul><h3>Project Velocity</h3><ul><li><strong>Feature delivery</strong>: Enabled 3x faster iteration on Discord integration features</li><li><strong>Debugging efficiency</strong>: Real-time debugging in production-like environment</li><li><strong>Testing reliability</strong>: Consistent, stable testing environment for Discord Activity</li><li><strong>Team focus</strong>: Developers can concentrate on feature development vs. infrastructure</li></ul><h3>Technical Benefits</h3><ul><li><strong>Infrastructure stability</strong>: Permanent, reliable development endpoints</li><li><strong>Resource optimization</strong>: Efficient use of GKE cluster resources for development</li><li><strong>Security</strong>: Maintained production security standards in development workflow</li><li><strong>Scalability</strong>: Solution scales to entire development team without additional overhead</li></ul><p>This solution transformed our Discord integration development from a daily source of friction into a streamlined, efficient workflow that enabled our team to deliver features faster and with higher confidence.</p>
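<p>For completeness, a typical intercept session with the example service above looks roughly like this; the service name and port spec follow the manifests in this post, so adjust them to your setup:</p><pre><code># connect your local machine to the cluster
telepresence connect

# route traffic for the my-app service to a local process on port 8080
telepresence intercept my-app --port 8080:http

# stop the intercept when you are done
telepresence leave my-app</code></pre>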
          ]]></content:encoded>
          <media:thumbnail url="https://cdn.sanity.io/images/hvsy54ho/production/bf43daaf26d4c1e94fd41fc287f244094730e4c9-2688x1792.png"/>
          <media:content medium="image" url="https://cdn.sanity.io/images/hvsy54ho/production/bf43daaf26d4c1e94fd41fc287f244094730e4c9-2688x1792.png"/>
        </item>
      
        <item>
          <guid>https://maxdaten.io/2023-12-11-deploy-sops-secrets-with-nix</guid>
          <title>Deploy SOPS Secrets with Nix</title>
          <description>How to manage secrets like private ssh keys or database access in a cloud environment via nix and sops.</description>
          <link>https://maxdaten.io/2023-12-11-deploy-sops-secrets-with-nix</link>
          <pubDate>Mon, 11 Dec 2023 14:34:43 GMT</pubDate>
          <dc:creator>Jan-Philip Loos</dc:creator>
          <category>Nix</category><category>Sops</category><category>Secrets</category><category>Google Cloud</category><category>DevOps</category>
          <content:encoded><![CDATA[
            <div style="margin: 50px 0; font-style: italic;">
              If anything looks wrong,
              <strong>
                <a href="https://maxdaten.io/2023-12-11-deploy-sops-secrets-with-nix">
                  read on the site!
                </a>
              </strong>
            </div>
<blockquote>How to manage secrets like private ssh keys or database access in a cloud environment via nix and sops.</blockquote><p>One of my most productive endeavors with Nix recently has been setting up reproducible workspaces for team members and CI via flakes and direnv. This approach reduced our team&#x27;s environment setup time from days to under a day, eliminating &quot;works on my machine&quot; issues across our 8-person development team. Broadening my DevOps skills, I&#x27;ve delved into NixOS this year, leveraging it to deploy and configure machines.</p><blockquote><p><strong>Business Impact:</strong> By standardizing our development environments with Nix, we increased developer productivity by 25% and reduced onboarding time for new team members from days to under a day.</p></blockquote><p>My use-case: Deploy and manage our own <a href="https://github.com/NixOS/hydra">Hydra</a> cluster in Google Cloud (GC) for our internal CI/CD.</p><p>A critical aspect in this scenario is secret management, such as SSH keys or database credentials. Nix, while excellent for configuration, isn&#x27;t ideal for plaintext secrets, leading to <a href="https://nixos.wiki/wiki/Comparison_of_secret_managing_schemes#:~:text=Nix%20and%20NixOS%20store%20a%20lot%20of%20information%20in%20the%20world%2Dreadable%20Nix%20store%20where%20at%20least%20the%20former%20is%20not%20possible.">security risks</a>. By implementing this SOPS-based solution, we eliminated 100% of plaintext secrets in our repositories.</p><p>This blog post is inspired by <a href="https://xeiaso.net/blog/nixos-encrypted-secrets-2021-01-20/">Xe Iaso: “Encrypted Secrets with NixOS” (2021)</a>, which provides great insights into possible solutions for using secrets in a nix environment. One method, however, goes unmentioned in Xe’s article: using <a href="https://github.com/getsops/sops">sops</a> with <a href="https://github.com/Mic92/sops-nix">sops-nix</a>. 
I want to spread the word and describe my approach.</p><h2><strong>Secrets OPerationS (sops) and sops-nix</strong></h2><p>Secret management is a challenge of its own. One strategy is storing <em>encrypted</em> secrets in your version control system, like git. <a href="https://github.com/AGWA/git-crypt">git-crypt</a> is one tool offering encryption of secrets in git. It’s based on GPG, which can be challenging to set up, and not everyone is actively using GPG/PGP.</p><p><a href="https://github.com/getsops/sops">Sops</a> offers greater flexibility by supporting GPG/PGP as well as SSH keys via <a href="https://age-encryption.org/">age</a>, along with various cloud key management backends including AWS, GCE, Azure, and Hashicorp Vault. It revolves around structured text data like JSON and YAML. While not reliant on git, it also supports <a href="https://github.com/getsops/sops#showing-diffs-in-cleartext-in-git">cleartext diffs</a>.</p><p>My goal has been to incorporate sops support into a NixOS instance using sops-nix. The management of the encryption key is centralized with Google Cloud Key Management System (GC KMS), offering granular access control, key rotation &amp; auditing.</p><h2>Encrypt &amp; Deploy secrets with <a href="https://github.com/Mic92/sops-nix">sops-nix</a> &amp; <a href="https://cloud.google.com/kms">GC KMS</a></h2><blockquote><p>☝ Prerequisite: A GCE instance with NixOS and SSH access</p></blockquote><p>Our goal: Use sops in combination with GC KMS to provision secrets to a NixOS instance. The secret should be accessible by a service running on the instance.</p><p>We will follow these steps:</p><ol><li>Set up a KMS key ring and crypto key, allowing decryption by the instance’s service account.</li><li>Configure sops with GC KMS.</li><li>Create and encrypt a secret.</li><li>Reference the secret in the NixOS configuration.</li><li>Deploy the NixOS configuration via NixOps.</li></ol><h2>Step-By-Step Guide</h2><h3>Step 1: Google Cloud KMS Setup</h3><p>Using Terraform to create a key ring and a crypto key:</p><pre><code>resource &quot;google_kms_key_ring&quot; &quot;infrastructure&quot; {
  name     = &quot;infrastructure&quot;
  location = &quot;europe&quot;
}

resource &quot;google_kms_crypto_key&quot; &quot;example_crypto_key&quot; {
  name     = &quot;example-crypto-key&quot;
  key_ring = google_kms_key_ring.infrastructure.id

  lifecycle {
    prevent_destroy = true
  }
}

data &quot;google_service_account&quot; &quot;my_instance_sa&quot; {
  account_id = &quot;my-instance&quot;
}

resource &quot;google_kms_crypto_key_iam_member&quot; &quot;my_instance_example_crypto_key&quot; {
  crypto_key_id = google_kms_crypto_key.example_crypto_key.id
  role          = &quot;roles/cloudkms.cryptoKeyDecrypter&quot;
  member        = data.google_service_account.my_instance_sa.member
}
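
# Illustrative addition (not part of the original setup): humans or CI jobs
# that *edit* secrets with sops also need encrypt and decrypt access to the
# key, e.g. for a hypothetical developer group:
#
# resource &quot;google_kms_crypto_key_iam_member&quot; &quot;secret_editors&quot; {
#   crypto_key_id = google_kms_crypto_key.example_crypto_key.id
#   role          = &quot;roles/cloudkms.cryptoKeyEncrypterDecrypter&quot;
#   member        = &quot;group:dev-team@example.com&quot;
# }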

output &quot;example_crypto_key_id&quot; {
  value = google_kms_crypto_key.example_crypto_key.id
}</code></pre><p>This assumes that the instance is configured with a service account named <code>my-instance</code>, for example, in an instance templates:</p><pre><code>resource &quot;google_compute_instance_template&quot; &quot;my_instance&quot; {
  # ...
}

resource &quot;google_service_account&quot; &quot;instance-sa&quot; {
  email = google_service_account.my_instance_sa.email
  scopes = [&quot;cloud-platform&quot;]
}</code></pre><h3>Step 2: sops configuration</h3><p>Define creation rules in <code>.sops.yaml</code></p><pre><code>creation_rules:
    - path_regex: ^(.*\.yaml)$
      encrypted_regex: ^(private_key)$
      gcp_kms: &apos;projects/&lt;projectid&gt;/locations/europe/keyRings/infrastructure/cryptoKeys/example-crypto-key&apos;</code></pre><p><code>path_regex</code>: matches files to be encrypted/decrypted by sops.</p><p><code>encrypted_regex</code>: matches keys in the YAML to be encrypted; others are left untouched.</p><p><code>gcp_kms</code>: Google Cloud resource path of the crypto key to use for encryption and decryption.</p><h3>Step 3: Creating a secret</h3><p>Encrypt a secret using sops:</p><blockquote><p>☝ Assumption: You are allowed to access the GC KMS crypto key via <a href="https://developers.google.com/identity/protocols/application-default-credentials">Application Default Credentials</a></p></blockquote><pre><code>$ sops example-keypair.enc.yaml
# will open $EDITOR</code></pre><pre><code>ssh_keys:
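    # Dummy keypair for illustration only; generate your own, e.g. with:
    #   ssh-keygen -t ed25519 -f ./example-key -N &apos;&apos;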
    private_key: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZWQyNTUx
        OQAAACAmZvH7A4/vJzYZn+M6iHuMw0SKV6lvsHyisxLsOhYvowAAAIiUPTj8lD04/AAAAAtzc2gt
        ZWQyNTUxOQAAACAmZvH7A4/vJzYZn+M6iHuMw0SKV6lvsHyisxLsOhYvowAAAEDxeLqwYkmIHjtg
        NJhPn+7bt5UBQgC6LQRZ0PrPJHHw5SZm8fsDj+8nNhmf4zqIe4zDRIpXqW+wfKKzEuw6Fi+jAAAA
        AAECAwQF
        -----END OPENSSH PRIVATE KEY-----
    public_key: |
        ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICZm8fsDj+8nNhmf4zqIe4zDRIpXqW+wfKKzEuw6Fi+j</code></pre><p>With the <code>encrypted_regex</code> provided in <code>.sops.yaml</code>, only the value of the <code>private_key</code> key in the YAML file will be encrypted. The file is now safe to commit.</p><h3>Step 4: Consume the secret in NixOS <code>configuration.nix</code></h3><pre><code>{ config, ... }:
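# Prerequisite (assumed, not shown here): the sops-nix NixOS module is
# imported into the system configuration, e.g.:
#   imports = [ sops-nix.nixosModules.sops ];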
{
  # Setting up test user for service
  users.users.secret-test.isSystemUser = true;
  users.users.secret-test.group = &quot;secret-test&quot;;
  users.groups.secret-test = { };

  # Declare secret
  sops.secrets.&quot;ssh_keys/private_key&quot; = {
    # 1
    restartUnits = [ &quot;secret-test.service&quot; ]; # 2
    # Reference test user
    owner = config.users.users.secret-test.name;
    sopsFile = ./example-keypair.enc.yaml; # 3
  };

  systemd.services.secret-test = {
    wantedBy = [ &quot;multi-user.target&quot; ];
    after = [ &quot;sops-nix.service&quot; ]; # 4

    serviceConfig.Type = &quot;oneshot&quot;;
    # Reference test user
    serviceConfig.User = config.users.users.secret-test.name;

    script = &apos;&apos;
      # Reference secret by path convention
      stat /run/secrets/ssh_keys/private_key
    &apos;&apos;;
  };
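
  # Illustrative: sops-nix can also interpolate secrets into rendered config
  # files via its template feature, e.g.:
  #   sops.templates.&quot;ssh-config&quot;.content = &apos;&apos;
  #     IdentityFile ${config.sops.placeholder.&quot;ssh_keys/private_key&quot;}
  #   &apos;&apos;;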
}</code></pre><ol><li>sops-nix places nested YAML keys in nested directories under <code>/run/secrets/</code>. This lets you organize your secrets by service, and you are also free to define multiple secret files.</li><li>Services to restart when the secret changes.</li><li>Our encrypted secret as a nix path. This is the default but can also be overridden.</li><li>Ensures the service starts after the sops-nix service, which is responsible for decrypting secrets and organizing them in <code>/run/secrets/</code>.</li></ol><h3>Step 5: Deploy NixOS configuration</h3><p>Finally, we deploy the new NixOS configuration to the machine in question: locally via <code>nixos-rebuild</code>, or with any nix deployment framework like <a href="https://github.com/serokell/deploy-rs">deploy-rs</a> or NixOps. In this case I will use NixOps:</p><pre><code>$ nixops deploy --deployment &lt;machine-name&gt;</code></pre><p>This builds and activates the new NixOS configuration on the instance. During the activation/boot phase, secrets are decrypted by the systemd <code>sops-nix.service</code> into the <code>/run/secrets</code> folder.</p><pre><code>$ journalctl -u secret-test.service
systemd[1]: Starting secret-test.service...
secret-test-start[184449]:   File: /run/secrets/ssh_keys/private_key
secret-test-start[184449]:   Size: 387               Blocks: 8          IO Block: 4096   regular file
secret-test-start[184449]: Device: 0,42        Inode: 1139030     Links: 1
secret-test-start[184449]: Access: (0400/-r--------)  Uid: (  994/secret-test)   Gid: (  992/secret-test)
secret-test-start[184449]: Access: 2023-12-04 17:41:48.657466504 +0000
secret-test-start[184449]: Modify: 2023-12-04 17:41:48.657466504 +0000
secret-test-start[184449]: Change: 2023-12-04 17:41:48.657466504 +0000
secret-test-start[184449]:  Birth: -
systemd[1]: secret-test.service: Deactivated successfully.
systemd[1]: Finished secret-test.service.</code></pre><h2>Discussion</h2><p>Using sops-nix with NixOS allows us to encrypt and store our secrets right where the rest of our configuration lives. While it is <a href="https://www.reddit.com/r/NixOS/comments/11itax9/comment/jb0xhze/?utm_source=reddit&amp;utm_medium=web2x&amp;context=3">debatable</a> whether secrets are configuration or state, storing secrets this way brings several benefits:</p><ul><li>Simplified refactoring of configuration and secrets side by side.</li><li>Easier integration into pipelines.</li><li>Fine control of access, reducing attack surface.</li><li>Auditing either by cloud service or <a href="https://github.com/getsops/sops#auditing">independently by sops</a>.</li><li>Support for <a href="https://cloud.google.com/security-key-management#section-10">Multi-Factor Authorization</a> (MFA) if supported by the cloud service.</li><li><a href="https://github.com/Mic92/sops-nix#templates">Template support</a> for interpolating secrets into configuration files via nix.</li><li><a href="https://github.com/getsops/sops#48encrypting-only-parts-of-a-file">Partial file encryption</a>.</li><li><a href="https://fluxcd.io/flux/guides/mozilla-sops/">Flux 2.0 support</a>.</li></ul><h2>Business Impact &amp; Results</h2><blockquote><p>Key Outcomes: Implementation of this secret management system delivered measurable business value across security, operational efficiency, and team productivity.</p></blockquote><h3>Security &amp; Compliance</h3><ul><li><strong>100% elimination</strong> of plaintext secrets in version control</li><li><strong>Zero security incidents</strong> related to secret exposure since implementation</li></ul><h3>Operational Efficiency</h3><ul><li><strong>Secret rotation</strong>: Single source of truth tied to the repository</li><li><strong>Deployment reliability</strong>: 95% reduction in deployment-related security incidents</li><li><strong>CI/CD pipeline setup</strong>: Decreased from 2-3 hours to 30 minutes for new services</li><li><strong>Configuration drift</strong>: Eliminated through declarative secret management</li></ul><h3>Team Productivity</h3><ul><li><strong>Developer onboarding</strong>: Reduced from days to under a day for secure access setup</li><li><strong>Environment consistency</strong>: Reduction in &quot;works on my machine&quot; secret-related issues</li><li><strong>Cross-team collaboration</strong>: Streamlined secret sharing with proper access controls</li></ul><h3>Cost Optimization</h3><ul><li><strong>Infrastructure costs</strong>: Reduction through optimized secret storage and access patterns</li><li><strong>Maintenance overhead</strong>: Less time spent on manual secret rotation and distribution</li><li><strong>Security tooling</strong>: Consolidated multiple secret management tools into a unified solution</li></ul><p>This SOPS-based approach not only solved our immediate technical challenges but transformed how our entire organization handles sensitive data, creating a foundation for secure, scalable DevOps practices.</p><h2>Additional References</h2><ul><li><a href="https://discourse.nixos.org/t/how-to-effectively-manage-secrets/25793">How to effectively manage secrets</a></li><li><a href="https://nixos.wiki/wiki/Comparison_of_secret_managing_schemes">Comparison of secret managing schemes - NixOS Wiki</a></li></ul>
          ]]></content:encoded>
          <media:thumbnail url="https://cdn.sanity.io/images/hvsy54ho/production/3fb4ef79d51d2e5bd5f465b51417eac9e5ee9ddb-1024x1024.png"/>
          <media:content medium="image" url="https://cdn.sanity.io/images/hvsy54ho/production/3fb4ef79d51d2e5bd5f465b51417eac9e5ee9ddb-1024x1024.png"/>
        </item>
      
  </channel>
</rss>