<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>PipeCD – LFX Mentorship</title>
    <link>https://pipecd.dev/tags/lfx-mentorship/</link>
    <description>Recent content in LFX Mentorship on PipeCD</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Fri, 10 Apr 2026 00:00:00 +0000</lastBuildDate>
    
	  <atom:link href="https://pipecd.dev/tags/lfx-mentorship/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Blog: Building the Kubernetes Multi-Cluster Plugin for PipeCD — LFX Mentorship</title>
      <link>https://pipecd.dev/blog/2026/04/10/building-the-kubernetes-multi-cluster-plugin-for-pipecd-lfx-mentorship/</link>
      <pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://pipecd.dev/blog/2026/04/10/building-the-kubernetes-multi-cluster-plugin-for-pipecd-lfx-mentorship/</guid>
      <description>
        
        
        &lt;p&gt;If you had told me last year that I would be working with Kubernetes and all things clusters, deployments and service meshes, I would have brushed it off. I am truly grateful for the journey thus far.&lt;/p&gt;
&lt;p&gt;Earlier last month, I got accepted as an LFX Mentee for Term 1 of this calendar year. For me this is a big deal, given my background and how much effort has gone in behind the scenes to get to this stage.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m currently a mentee in the LFX Mentorship program working on &lt;a href=&#34;https://pipecd.dev&#34;&gt;PipeCD&lt;/a&gt;, an open-source GitOps continuous delivery platform. For the past four weeks, I&amp;rsquo;ve been building out the &lt;code&gt;kubernetes_multicluster&lt;/code&gt; plugin, specifically implementing the deployment pipeline stages that handle canary, primary, and baseline deployments across multiple clusters.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;what-is-pipecd-and-what-is-this-plugin&#34;&gt;What is PipeCD and what is this plugin?&lt;/h2&gt;
&lt;p&gt;PipeCD is an open-source GitOps CD platform that manages deployments across different infrastructure targets like Kubernetes, ECS, Terraform, Lambda and more. Each target type has a plugin that knows how to deploy to it.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;kubernetes_multicluster&lt;/code&gt; plugin is for teams running the same application across multiple Kubernetes clusters, say US, EU, and Asia, and needing all of them to stay in sync through a single pipeline. Rolling out a new version across clusters one at a time, manually, with no coordination, is error-prone and slow. The plugin lets you define one pipeline that runs across every cluster at the same time, with canary and baseline checks before anything hits production.&lt;/p&gt;
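&lt;p&gt;To make this concrete, here is a sketch of how an application might declare multiple clusters as deploy targets. This is illustrative only: the exact schema and field names (&lt;code&gt;deployTargets&lt;/code&gt;, the cluster names &lt;code&gt;cluster-us&lt;/code&gt; and &lt;code&gt;cluster-eu&lt;/code&gt;) are assumptions here, so check the PipeCD docs for the authoritative format.&lt;/p&gt;

```yaml
# Hypothetical multi-cluster application config (field names assumed).
apiVersion: pipecd.dev/v1beta1
kind: Application
spec:
  name: simple
  # Every pipeline stage fans out to each of these clusters in parallel.
  deployTargets:
    - cluster-us
    - cluster-eu
```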
&lt;h2 id=&#34;progressive-delivery-and-why-these-stages-exist&#34;&gt;Progressive Delivery and Why These Stages Exist&lt;/h2&gt;
&lt;p&gt;Before a new version reaches all users, it goes through stages. A canary sends a small slice of traffic to the new version first. A baseline runs the &lt;em&gt;current&lt;/em&gt; version at the same scale so you have a fair comparison. Primary is the actual promotion. Clean stages remove the temporary resources when you&amp;rsquo;re done.&lt;/p&gt;
&lt;p&gt;This pattern is called progressive delivery, because you roll out gradually, check things look good, then commit. If something looks wrong at the canary stage, you stop there. Nothing has touched production yet.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;kubernetes_multicluster&lt;/code&gt; plugin runs all of this across every cluster at the same time. One pipeline, every cluster, same stages.&lt;/p&gt;
&lt;p&gt;A full pipeline looks like this:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;stages&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;K8S_CANARY_ROLLOUT&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;K8S_BASELINE_ROLLOUT&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;K8S_TRAFFIC_ROUTING&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;K8S_PRIMARY_ROLLOUT&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;K8S_CANARY_CLEAN&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;K8S_BASELINE_CLEAN&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8;text-decoration:underline&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Each of these is a stage I built. The sections below go through what each one does.&lt;/p&gt;
&lt;h2 id=&#34;what-i-built&#34;&gt;What I Built&lt;/h2&gt;
&lt;h3 id=&#34;k8s_canary_rollout&#34;&gt;K8S_CANARY_ROLLOUT&lt;/h3&gt;
&lt;p&gt;The canary stage deploys the new version of your app as a small slice alongside the existing production deployment. If your app normally runs 3 pods, canary might spin up 1 pod (or 20%) of the new version: enough to catch problems without affecting most users.&lt;/p&gt;
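&lt;p&gt;As a rough sketch of what this looks like in the pipeline config, the canary size would be carried as a stage option. The &lt;code&gt;replicas&lt;/code&gt; field name follows PipeCD&amp;rsquo;s existing Kubernetes stage options, but treat the exact shape as an assumption:&lt;/p&gt;

```yaml
# Sketch: canary sized relative to the primary workload.
- name: K8S_CANARY_ROLLOUT
  with:
    # Either a fixed pod count or a percentage of primary's replicas.
    replicas: 20%
```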
&lt;p&gt;It loads manifests from Git, creates copies of all workloads with a &lt;code&gt;-canary&lt;/code&gt; suffix, scales them down to the configured replica count, adds a &lt;code&gt;pipecd.dev/variant=canary&lt;/code&gt; label, and applies them to every target cluster in parallel. The original deployment is never touched; this stage only ever adds resources.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/swcu1ppt38ltw87wwbol.png&#34; alt=&#34;Canary rollout stage log applying manifests to cluster-eu and cluster-us&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0lfhrddbu6r3mt01tlrs.png&#34; alt=&#34;Canary rollout success — deploy targets: cluster-eu &amp;#43; cluster-us&#34;&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id=&#34;k8s_canary_clean&#34;&gt;K8S_CANARY_CLEAN&lt;/h3&gt;
&lt;p&gt;Once the canary window is over, whether you promoted or rolled back, the canary pods are just sitting in every cluster doing nothing. &lt;code&gt;K8S_CANARY_CLEAN&lt;/code&gt; removes them.&lt;/p&gt;
&lt;p&gt;It finds all resources with the label &lt;code&gt;pipecd.dev/variant=canary&lt;/code&gt; for the application and deletes them in order: Services first, then Deployments, then everything else. The order matters: you don&amp;rsquo;t want to remove the Deployment while the Service is still sending traffic to it.&lt;/p&gt;
&lt;p&gt;One thing worth noting: the query is scoped strictly to canary-labelled resources. Even if something goes wrong in the deletion logic, it cannot touch primary resources.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vzb1fikt47ax3z53bjy.png&#34; alt=&#34;K8S_CANARY_CLEAN stage log deleting simple-canary resources from both clusters&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5980a6b46dtiqv9oetz7.png&#34; alt=&#34;K8S_CANARY_ROLLOUT → K8S_CANARY_CLEAN pipeline — both stages green on cluster-eu and cluster-us&#34;&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id=&#34;k8s_primary_rollout&#34;&gt;K8S_PRIMARY_ROLLOUT&lt;/h3&gt;
&lt;p&gt;After the canary looks good, you promote the new version to primary, the workload actually serving all your users. This stage takes the manifests from Git, adds the &lt;code&gt;pipecd.dev/variant=primary&lt;/code&gt; label, and applies them across all clusters in parallel.&lt;/p&gt;
&lt;p&gt;It also has a &lt;code&gt;prune&lt;/code&gt; option: after applying, it checks what&amp;rsquo;s currently running in the cluster against what was just applied, and deletes anything that&amp;rsquo;s no longer in Git. This is useful when you remove a resource from your manifests and want the cluster to reflect that.&lt;/p&gt;
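&lt;p&gt;As a hedged sketch, pruning would be switched on per stage, something like:&lt;/p&gt;

```yaml
# Sketch: primary rollout that also deletes resources removed from Git.
- name: K8S_PRIMARY_ROLLOUT
  with:
    prune: true
```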
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5i4pfqi10ef38i7ltn55.png&#34; alt=&#34;K8S_PRIMARY_ROLLOUT success deploy targets: cluster-eu &amp;#43; cluster-us&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbi6iplj1l8g5wbkwgql.png&#34; alt=&#34;kubectl confirming simple 2/2 updated in both cluster-eu and cluster-us&#34;&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id=&#34;k8s_baseline_rollout&#34;&gt;K8S_BASELINE_ROLLOUT&lt;/h3&gt;
&lt;p&gt;This one took me a while to understand, and it&amp;rsquo;s also the stage I find most interesting to explain.&lt;/p&gt;
&lt;p&gt;When you&amp;rsquo;re running a canary, the natural thing is to compare it against primary. The issue is that&amp;rsquo;s not a fair comparison: primary is handling far more traffic than canary, under different conditions.&lt;/p&gt;
&lt;p&gt;Baseline gives you a fairer comparison. You take the &lt;em&gt;current&lt;/em&gt; version (not the new one) and run it at the same scale as canary. Now your cluster has:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-plaintext&#34; data-lang=&#34;plaintext&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;simple             2/2   ← production, current version
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;simple-canary      1/1   ← new version, being tested
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;simple-baseline    1/1   ← current version at canary scale
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;You compare canary vs baseline: same number of pods, same traffic conditions. If canary is worse, it&amp;rsquo;s obvious.&lt;/p&gt;
&lt;p&gt;The key difference from every other rollout stage is one line of code. Canary and primary load manifests from the new Git commit (&lt;code&gt;TargetDeploymentSource&lt;/code&gt;). Baseline loads from what&amp;rsquo;s currently running (&lt;code&gt;RunningDeploymentSource&lt;/code&gt;):&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-go&#34; data-lang=&#34;go&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// canary.go — new version&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000&#34;&gt;manifests&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;err&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;:=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;p&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;loadManifests&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;ctx&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;&amp;amp;&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;input&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;Request&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;TargetDeploymentSource&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// baseline.go — current version&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000&#34;&gt;manifests&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;err&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;:=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;p&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;loadManifests&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;ctx&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;&amp;amp;&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;input&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;Request&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;RunningDeploymentSource&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a61aeapcwnqquh3v3vdh.png&#34; alt=&#34;K8S_BASELINE_ROLLOUT stage log loading manifests from running deployment source&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0r26i800uc46kfmauo5p.png&#34; alt=&#34;K8S_BASELINE_ROLLOUT&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y6lowk38efysvvy0vbmk.png&#34; alt=&#34;K8S_BASELINE_ROLLOUT&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8n8ailmbo0gae31sf58l.png&#34; alt=&#34;kubectl showing simple, simple-baseline, simple-canary all running in both clusters&#34;&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id=&#34;k8s_baseline_clean&#34;&gt;K8S_BASELINE_CLEAN&lt;/h3&gt;
&lt;p&gt;Once the analysis is done, baseline resources get cleaned up the same way as canary: find everything labelled &lt;code&gt;pipecd.dev/variant=baseline&lt;/code&gt; and delete it in order. No configuration is needed. It doesn&amp;rsquo;t matter whether &lt;code&gt;createService: true&lt;/code&gt; was set during rollout; the stage finds whatever is there and removes it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddfhlxgi1i40r0x8dgh2.png&#34; alt=&#34;K8S_BASELINE_CLEAN&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fsxadrvt48rbr4hi113m.png&#34; alt=&#34;K8S_BASELINE_CLEAN stage log deleting baseline resources from both clusters&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apa1tdqa751bgl2t9g6c.png&#34; alt=&#34;K8S_BASELINE_CLEAN stage log deleting baseline resources from both clusters&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rder7h0ylhxn0wkaykep.png&#34; alt=&#34;K8S_BASELINE_CLEAN&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8epcyha332ir2xz3jhua.png&#34; alt=&#34;kubectl confirming no baseline resources remain in cluster-eu or cluster-us&#34;&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id=&#34;k8s_traffic_routing&#34;&gt;K8S_TRAFFIC_ROUTING&lt;/h3&gt;
&lt;p&gt;Canary and baseline pods exist in the cluster but get no traffic until this stage runs. Without it, you&amp;rsquo;re analysing pods that nobody is actually hitting. This stage is what sends real user traffic to them.&lt;/p&gt;
&lt;p&gt;Two methods are supported:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;PodSelector&lt;/strong&gt; (no service mesh needed): changes the Kubernetes Service selector to point at one variant. All-or-nothing: 100% to canary or 100% back to primary.&lt;/p&gt;
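&lt;p&gt;A minimal sketch of the PodSelector form as a stage option, assuming the &lt;code&gt;all&lt;/code&gt; field carries over from PipeCD&amp;rsquo;s single-cluster traffic routing stage:&lt;/p&gt;

```yaml
# Sketch: point the Service selector at one variant wholesale.
- name: K8S_TRAFFIC_ROUTING
  with:
    all: canary
```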
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7uqb18dwo6jyhr0aekpw.png&#34; alt=&#34;PodSelector traffic routing full pipeline success across cluster-eu and cluster-us&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qddkmg9qs6unwk0bklu3.png&#34; alt=&#34;PodSelector&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/piwbmpywrxmf4ykbcomp.png&#34; alt=&#34;PodSelector&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2u1kqy9jasqhqwtwebbd.png&#34; alt=&#34;PodSelector&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Istio&lt;/strong&gt;: updates VirtualService route weights to split traffic across all three variants at once, for example primary 80%, canary 10%, baseline 10%. It also supports &lt;code&gt;editableRoutes&lt;/code&gt; to limit which named routes the stage is allowed to modify.&lt;/p&gt;
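&lt;p&gt;The Istio form would instead express the split as per-variant weights. Again a sketch: the field names mirror the variant names above, but the exact schema is an assumption:&lt;/p&gt;

```yaml
# Sketch: split traffic across all three variants via VirtualService weights.
- name: K8S_TRAFFIC_ROUTING
  with:
    primary: 80
    canary: 10
    baseline: 10
```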
&lt;p&gt;One small thing I added on top of the traffic routing stage: per-route logging. When the stage runs, it now logs each route it processes, noting whether it was skipped (because it&amp;rsquo;s not in &lt;code&gt;editableRoutes&lt;/code&gt;) or updated with new weights. Before this, the log just said &amp;ldquo;Successfully updated traffic routing&amp;rdquo; with no detail. Now you can see exactly which routes changed and to what percentages, which is useful when debugging a misconfigured VirtualService.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/498uvcytrlppjpxqcq05.png&#34; alt=&#34;Istio traffic routing stage log per-route logging showing which routes were updated in both clusters&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ql3nmfb19psi5gfu8a2.png&#34; alt=&#34;Istio&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oxy1gv5cojotd6ks253m.png&#34; alt=&#34;Istio&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tva9uwcmd7prb6qf72dm.png&#34; alt=&#34;Istio&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6oj4elm5asfgergkp2se.png&#34; alt=&#34;Full Istio pipeline, all 7 stages green on cluster-eu and cluster-us&#34;&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;something-i-found-interesting&#34;&gt;Something I Found Interesting&lt;/h2&gt;
&lt;p&gt;The thing that surprised me was how &lt;code&gt;errgroup&lt;/code&gt; handles running across multiple clusters without much extra code.&lt;/p&gt;
&lt;p&gt;Every stage needs to run against N clusters, not one. A simple for-loop would run them one at a time: slow, and if cluster 2 fails you don&amp;rsquo;t find out until cluster 1 is already done.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;errgroup&lt;/code&gt; runs all clusters at the same time and returns the first error:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-go&#34; data-lang=&#34;go&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000&#34;&gt;eg&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;ctx&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;:=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;errgroup&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;WithContext&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;ctx&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;for&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;_&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;tc&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;:=&lt;/span&gt; &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;range&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;targetClusters&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;tc&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;:=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;tc&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;eg&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;Go&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;func&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt; &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;error&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;return&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;canaryRollout&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;ctx&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;tc&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;deployTarget&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;return&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;eg&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;Wait&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;All clusters run in parallel. If any one fails, the stage fails immediately. The same pattern is used across every stage, so adding a new stage is mostly just writing the per-cluster logic; the concurrency part is already solved.&lt;/p&gt;
&lt;h2 id=&#34;whats-next&#34;&gt;What&amp;rsquo;s Next&lt;/h2&gt;
&lt;p&gt;The next piece is &lt;code&gt;DetermineStrategy&lt;/code&gt;, the logic that decides what kind of deployment to trigger based on what changed in Git. After that comes livestate drift detection, so PipeCD can flag when a cluster has drifted from what Git says it should be.&lt;/p&gt;
&lt;p&gt;To get involved, check out the PipeCD project and come join us on Slack.&lt;/p&gt;
&lt;h2 id=&#34;links&#34;&gt;Links&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/pipe-cd/pipecd&#34;&gt;PipeCD repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://mentorship.lfx.linuxfoundation.org&#34;&gt;LFX Mentorship Program&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/pipe-cd/pipecd/issues/6446&#34;&gt;Issue #6446, kubernetes_multicluster plugin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/pipe-cd/pipecd/pull/6629&#34;&gt;PR #6629, K8S_TRAFFIC_ROUTING&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/pipe-cd/pipecd/pull/6648&#34;&gt;PR #6648, per-route logging in K8S_TRAFFIC_ROUTING&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://app.slack.com/client/T08PSQ7BQ/C01B27F9T0X&#34;&gt;Slack #PipeCD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Blog: My First 30 days as an LFX Mentee with PipeCD</title>
      <link>https://pipecd.dev/blog/2026/04/08/my-first-30-days-as-an-lfx-mentee-with-pipecd/</link>
      <pubDate>Wed, 08 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://pipecd.dev/blog/2026/04/08/my-first-30-days-as-an-lfx-mentee-with-pipecd/</guid>
      <description>
        
        
        &lt;p&gt;A month ago, I started my journey as an LFX Mentee with PipeCD.&lt;/p&gt;
&lt;p&gt;Coming from a non-technical background, I find the cloud native ecosystem relatively new; I’ve been on the outside looking in. Right now, I’m working to establish a social media presence for PipeCD and to create content covering v1 features, plugin development, and walkthrough videos that make the project easier to adopt.&lt;/p&gt;
&lt;p&gt;To do that effectively, my technical knowledge needs to be sharpened. So I’m learning Linux basics and Kubernetes to help me understand how PipeCD v1 works, its plugin architecture, and the migration from v0 to v1.&lt;/p&gt;
&lt;p&gt;This is my first contact with cloud native technology. I aim to document my journey, including what I’m working on, what I’m learning, and everything in between.&lt;/p&gt;
&lt;h2 id=&#34;getting-started-with-lfx-and-pipecd&#34;&gt;Getting started with LFX and PipeCD&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&#34;https://github.com/cncf/mentoring/blob/main/programs/lfx-mentorship/README.md#program-guidelines&#34;&gt;LFX Mentorship program&lt;/a&gt; provides opportunities to contribute to open source projects while learning from experienced maintainers.&lt;/p&gt;
&lt;p&gt;Through this program, I joined PipeCD as a mentee for the Community building, Technical content, and Social media growth project.&lt;/p&gt;
&lt;p&gt;PipeCD is an open-source continuous delivery solution built around GitOps that enables engineers to deploy multiple application kinds across multi-cloud environments.&lt;/p&gt;
&lt;p&gt;With the release of PipeCD v1, the project is evolving to be more flexible and extensible through its plugin-based architecture. This makes it easier for teams to integrate PipeCD into their existing workflows and environments, rather than having to change everything to adopt it.&lt;/p&gt;
&lt;p&gt;At a high level, PipeCD connects to your application and deployment configuration, then manages how changes are rolled out: gradually, all at once, or in controlled stages. It provides teams with visibility into deployments, along with the ability to monitor, verify, and roll back changes as needed.&lt;/p&gt;
&lt;p&gt;If you’re a DevOps or platform engineer seeking more control and flexibility in production, consider PipeCD. It’s designed for safer, more controlled continuous delivery, allowing you to ship fast with confidence.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;https://pipecd.dev/docs-v1.0.x/&#34;&gt;documentation&lt;/a&gt; is a great place to start and get a picture of how it all fits together.&lt;/p&gt;
&lt;h2 id=&#34;what-ive-been-up-to&#34;&gt;What I’ve been up to&lt;/h2&gt;
&lt;p&gt;My focus over the past month has been building fundamental knowledge of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Linux and Kubernetes&lt;/li&gt;
&lt;li&gt;Computer networking (The OSI model)&lt;/li&gt;
&lt;li&gt;Encryption and decryption&lt;/li&gt;
&lt;li&gt;Cloud compute&lt;/li&gt;
&lt;li&gt;IaaS, SaaS, PaaS, FaaS, and more&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I’m intentionally building this foundation because understanding these concepts is key to fully grasping how PipeCD works.&lt;/p&gt;
&lt;p&gt;The purpose is not only to learn, but to communicate this clearly to developers and create more detailed videos.&lt;/p&gt;
&lt;p&gt;Beyond learning, I actively engage with the community: welcoming new members and reviewing issues and PRs on GitHub. Essentially, I’m creating a feedback loop between contributors and maintainers.&lt;/p&gt;
&lt;p&gt;I also create content, post regularly on social media platforms, and host community meetings.&lt;/p&gt;
&lt;p&gt;The PipeCD community is diverse and inclusive. The maintainers and contributors are kind, helpful, and supportive. Early on, even before I was selected as a mentee, I got comfortable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;following conversations on Slack&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;understanding community needs, and&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;familiarizing myself with the project structure&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It was less about immediate contribution and more about learning how things work.&lt;/p&gt;
&lt;h2 id=&#34;reflections-and-next-step&#34;&gt;Reflections and next steps&lt;/h2&gt;
&lt;p&gt;The past month has been a really interesting experience: quite challenging, but very insightful.&lt;/p&gt;
&lt;p&gt;It has been a perfect blend of what I’m good at (community work) and what I want to learn (cloud native technologies). In the coming days, I will set up PipeCD locally on my machine and explore the technicalities.&lt;/p&gt;
&lt;p&gt;There’s still a lot I’m figuring out, work to do, and I’m up for a challenge.&lt;/p&gt;
&lt;p&gt;Progress isn’t yet about big milestones but about small, consistent steps: building systems and structures to help me better navigate this space.&lt;/p&gt;
&lt;p&gt;As I grow into my role as a PipeCD community manager, concepts are gradually becoming clearer, and I’m learning how everything connects. My long-term goal is to become a maintainer.&lt;/p&gt;
&lt;p&gt;PipeCD welcomes contributors from all around the world, irrespective of your background. Technical or non-technical, there’s room to learn, work, and make an impact.&lt;/p&gt;
&lt;p&gt;To get involved, check out the &lt;a href=&#34;https://github.com/pipe-cd/pipecd&#34;&gt;PipeCD project&lt;/a&gt; and come join us on &lt;a href=&#34;https://app.slack.com/client/T08PSQ7BQ/C01B27F9T0X&#34;&gt;Slack&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Kindly stay connected by following PipeCD on &lt;a href=&#34;https://www.linkedin.com/company/pipecd/&#34;&gt;LinkedIn&lt;/a&gt; and subscribing to our &lt;a href=&#34;https://youtube.com/@pipe-cd?si=gN28s9W0Ce9hjgVB&#34;&gt;YouTube&lt;/a&gt; channel.&lt;/p&gt;
&lt;p&gt;See you in my next drop.&lt;/p&gt;
&lt;p&gt;Gloriah.&lt;/p&gt;

      </description>
    </item>
    
  </channel>
</rss>
