    ArgoCD vs Flux 2025: Secure CI/CD for Kubernetes

    I run ArgoCD on my TrueNAS homelab for all container deployments. Every service I self-host — Gitea, Immich, monitoring stacks, even this blog’s CI pipeline — gets deployed through ArgoCD syncing from Git repos on my local Gitea instance. I’ve also deployed Flux for clients who wanted something lighter. After 12 years in Big Tech security engineering and thousands of hours operating both tools, here’s my honest comparison — not the sanitized vendor version, but what actually matters when you’re on-call at 2 AM and a deployment is stuck.

    Why This Comparison Still Matters in 2025

    “GitOps is just version control for Kubernetes.” If you’ve heard this, you’ve been sold a myth. GitOps is much more than syncing manifests to clusters — it’s a fundamentally different approach to how we manage infrastructure and applications. And in 2025, with Kubernetes still dominating container orchestration, ArgoCD and Flux remain the two main contenders.

    Supply chain attacks are up 742% since 2020 according to Sonatype’s latest report. SLSA compliance requirements are real. The executive order on software supply chain security means your GitOps tool isn’t just a convenience — it’s part of your compliance story. Choosing between ArgoCD and Flux isn’t just a features checklist; it’s a security architecture decision that affects your audit posture.

    My ArgoCD Setup: Real Configuration from My Homelab

    Let me show you exactly what I run. My TrueNAS server hosts a k3s cluster with ArgoCD managing everything. Here’s the actual Application manifest I use to deploy my Gitea instance — not a sanitized tutorial version, but real config with the patterns I’ve settled on after months of iteration:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: gitea
      namespace: argocd
      labels:
        app.kubernetes.io/part-of: homelab
        environment: production
      finalizers:
        - resources-finalizer.argocd.argoproj.io
    spec:
      project: homelab-apps
      source:
        repoURL: https://gitea.192.168.0.62.nip.io/deployer/homelab-manifests.git
        targetRevision: main
        path: apps/gitea
        helm:
          releaseName: gitea
          valueFiles:
            - values.yaml
            - values-production.yaml
          parameters:
            - name: gitea.config.server.ROOT_URL
              value: "https://gitea.192.168.0.62.nip.io"
            - name: persistence.size
              value: "50Gi"
            - name: persistence.storageClass
              value: "truenas-iscsi"
      destination:
        server: https://kubernetes.default.svc
        namespace: gitea
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
          allowEmpty: false
        syncOptions:
          - CreateNamespace=true
          - PrunePropagationPolicy=foreground
          - PruneLast=true
          - ServerSideApply=true
        retry:
          limit: 3
          backoff:
            duration: 5s
            factor: 2
            maxDuration: 3m

    A few things to note about this config. The resources-finalizer ensures ArgoCD cleans up resources when you delete the Application — without it, you get orphaned pods and services cluttering your cluster. The selfHeal: true flag is critical: if someone manually kubectl edits a resource, ArgoCD reverts it to match Git. This is the real power of GitOps — Git is the single source of truth, not whatever someone typed at 3 AM during an incident.

    The ServerSideApply sync option is something I added after hitting CRD conflicts. Kubernetes server-side apply handles field ownership correctly, which matters when you have multiple controllers touching the same resources. If you’re running cert-manager, external-dns, or any other controller that modifies resources ArgoCD manages, enable this.
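    The Application above also references `project: homelab-apps`, which is where ArgoCD enforces what a synced app is even allowed to touch. A minimal AppProject along these lines constrains which repos apps can pull from and where they can deploy; the exact patterns and whitelist here are illustrative, so adapt them to your own layout:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: homelab-apps
  namespace: argocd
spec:
  description: Self-hosted services on the homelab cluster
  # Only allow manifests from my Gitea instance -- an Application in this
  # project cannot be pointed at an arbitrary external repo.
  sourceRepos:
    - https://gitea.192.168.0.62.nip.io/deployer/*
  # Only the local cluster as a target; no cross-cluster surprises.
  destinations:
    - server: https://kubernetes.default.svc
      namespace: '*'
  # Needed so CreateNamespace=true can manage the (cluster-scoped)
  # Namespace objects, and nothing else cluster-scoped.
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace
```

    Deny-by-default at the project level means a compromised Git repo or a fat-fingered Application can only do damage inside the boundaries you drew here.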

    Flux HelmRelease: The Equivalent Setup

    For comparison, here’s how the same Gitea deployment looks in Flux. I set this up for a client who wanted a lighter footprint — their single-cluster setup didn’t need ArgoCD’s overhead:

    ---
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: homelab-manifests
      namespace: flux-system
    spec:
      interval: 5m
      url: https://gitea.192.168.0.62.nip.io/deployer/homelab-manifests.git
      ref:
        branch: main
      secretRef:
        name: gitea-credentials
    ---
    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: gitea
      namespace: gitea
    spec:
      interval: 30m
      chart:
        spec:
          chart: ./apps/gitea
          sourceRef:
            kind: GitRepository
            name: homelab-manifests
            namespace: flux-system
      values:
        gitea:
          config:
            server:
              ROOT_URL: "https://gitea.192.168.0.62.nip.io"
        persistence:
          size: 50Gi
          storageClass: truenas-iscsi
      install:
        createNamespace: true
        remediation:
          retries: 3
      upgrade:
        remediation:
          retries: 3
          remediateLastFailure: true
        cleanupOnFail: true
      rollback:
        timeout: 5m
        cleanupOnFail: true

    Notice the difference immediately: Flux splits the concern into two resources — a GitRepository source and a HelmRelease that references it. ArgoCD bundles everything into one Application manifest. Flux’s approach is more composable (you can reuse the same GitRepository across multiple HelmReleases), but ArgoCD’s single-resource model is easier to reason about when you’re scanning through a directory of manifests.

    The remediation blocks in Flux are the equivalent of ArgoCD’s retry policy. Flux’s rollback configuration is more explicit — you define exactly what happens on failure at each lifecycle stage (install, upgrade, rollback). ArgoCD handles this more automatically with selfHeal, which is simpler but gives you less granular control.

    Side-by-Side Feature Comparison

    After running both tools extensively, here’s my honest feature-by-feature breakdown. This isn’t marketing copy — it’s what I’ve observed in production:

    | Feature | ArgoCD | Flux | My Verdict |
    | --- | --- | --- | --- |
    | Web UI | Built-in dashboard with real-time sync status, diff views, and log streaming | No native UI; Weave GitOps dashboard available as an add-on | ArgoCD wins decisively |
    | Multi-cluster | Single instance manages all clusters via ApplicationSet | Deploy controllers per cluster, manage via Git | ArgoCD for centralized control; Flux for resilience |
    | Helm support | Native Helm rendering, parameters in Application spec | HelmRelease CRD with full lifecycle management | Flux has better Helm lifecycle hooks |
    | Kustomize | Native support, automatic detection | Native support via Kustomization CRD | Tie; both excellent |
    | RBAC | Built-in RBAC with projects, roles, and SSO integration | Kubernetes-native RBAC only | ArgoCD for enterprise, Flux for simplicity |
    | Secrets | Native Vault, AWS SM, GCP SM integrations | SOPS, Sealed Secrets, external-secrets-operator | ArgoCD easier out of the box; Flux more flexible |
    | Notifications | argocd-notifications with Slack, Teams, webhook, email | Flux notification-controller with similar integrations | Tie; both work well |
    | Image automation | Requires ArgoCD Image Updater (separate project) | Built-in image-reflector and image-automation controllers | Flux wins; native and mature |
    | Resource footprint | ~500MB RAM for server + repo-server + controller | ~200MB RAM across all controllers | Flux is significantly lighter |
    | Learning curve | Lower; UI helps, single resource model | Steeper; multiple CRDs, CLI-first workflow | ArgoCD for onboarding new teams |
    | Drift detection | Real-time with visual diff in UI | Periodic reconciliation (configurable interval) | ArgoCD for immediate visibility |
    | OCI registry support | Supported since v2.8 | Native support for OCI artifacts as sources | Flux pioneered this; both solid now |

    Core Architecture: How They Differ

    Deployment Models

    ArgoCD runs as a standalone application inside your cluster. It watches Git repos and applies changes continuously. The declarative model makes debugging straightforward — you can see exactly what state ArgoCD thinks the cluster should be in versus what’s actually running.

    Flux takes a different approach. It’s a set of Kubernetes controllers that use native CRDs to manage deployments. Lighter footprint, tighter coupling with the cluster API. Less magic, more Kubernetes-native. If you’re the kind of engineer who thinks in terms of reconciliation loops and custom resources, Flux will feel natural.

    The UI gap is real and it’s the single biggest differentiator in practice. ArgoCD ships with a solid dashboard — application state, sync status, logs, diff views, and even a resource tree visualization that shows you the dependency graph of your entire deployment. Flux doesn’t have a native UI. You’re working with CLI tools or bolting on the Weave GitOps dashboard, which is functional but nowhere near as polished. For teams that need visual oversight — especially during incidents when multiple people are watching the same screen — this matters enormously.

    For multi-cluster setups, ArgoCD handles it from a single instance using its ApplicationSet controller. You define applications dynamically based on cluster labels or repo patterns. Flux requires deploying controllers in each cluster, which adds operational overhead but can be more resilient to control-plane failures — if your central ArgoCD instance goes down, every cluster is affected. With Flux’s distributed model, each cluster continues reconciling independently.
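    The centralized model looks like this in practice: one ApplicationSet with a cluster generator stamping out an Application per registered cluster. This is an illustrative sketch, not a manifest from my cluster; the label selector and paths are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: monitoring-fleet
  namespace: argocd
spec:
  generators:
    # One Application per registered cluster carrying the env=prod label.
    - clusters:
        selector:
          matchLabels:
            env: prod
  template:
    metadata:
      name: 'monitoring-{{name}}'   # {{name}}/{{server}} come from the generator
    spec:
      project: homelab-apps
      source:
        repoURL: https://gitea.192.168.0.62.nip.io/deployer/homelab-manifests.git
        targetRevision: main
        path: apps/monitoring
      destination:
        server: '{{server}}'
        namespace: monitoring
```

    Register a new cluster, label it `env: prod`, and the monitoring stack follows it automatically.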

    Integration and CI/CD Pipeline Hooks

    ArgoCD is easier to get started with. Polished interface, straightforward setup, out-of-the-box support for Helm charts, Kustomize, and plain YAML. Flux has more moving parts during initial setup, but its GitOps Toolkit gives you modular control — you only install what you need.

    For CI/CD pipeline integration, ArgoCD supports webhooks from GitHub, GitLab, and Bitbucket — changes sync automatically on push. Flux relies on periodic polling or external triggers, which can introduce slight deployment delays. In my homelab, I have a Gitea webhook hitting ArgoCD’s API, so deployments start within seconds of a push. With Flux, the default 5-minute polling interval felt sluggish for development workflows.
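    To be fair to Flux, its "external triggers" option is the notification-controller's Receiver, which exposes a webhook endpoint your forge can POST to so a push is reconciled immediately instead of on the next poll. A minimal sketch with illustrative names, using the generic receiver type:

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1
kind: Receiver
metadata:
  name: gitea-push
  namespace: flux-system
spec:
  type: generic            # Gitea just POSTs to the exposed path on push
  secretRef:
    name: webhook-token    # shared token used to build the receiver URL
  resources:
    # On every hit, force-reconcile this source ahead of schedule.
    - apiVersion: source.toolkit.fluxcd.io/v1
      kind: GitRepository
      name: homelab-manifests
```

    It works, but it's extra infrastructure (you still have to expose the receiver endpoint and wire the webhook), which is exactly the "more moving parts" tradeoff described above.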

    Security: How They Actually Stack Up

    Security isn’t a feature — it’s architecture. As someone who’s spent their career in security engineering, this is where I have the strongest opinions. Here’s where these tools diverge in ways that matter.

    Authentication and Authorization

    ArgoCD ships with its own RBAC system. You define granular permissions for users and service accounts directly in ArgoCD’s config. This is convenient but means you’re managing another RBAC layer on top of Kubernetes RBAC.

    Flux leans on Kubernetes-native RBAC entirely. No separate auth system — permissions flow through the same ServiceAccounts and Roles you already manage. Simpler in theory, but misconfigured Kubernetes RBAC is one of the most common production security gaps I see. I’ve audited dozens of clusters where the default service account had way too many permissions because someone copied a tutorial’s ClusterRoleBinding without understanding the implications.
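    One mitigation worth knowing: Flux's Kustomization and HelmRelease resources accept a `serviceAccountName`, so a reconciliation can be confined to a namespace-scoped identity instead of the controller's own, much broader, credentials. A sketch with illustrative names:

```yaml
# ServiceAccount + Role that confine the Gitea HelmRelease to its own
# namespace. Reference it from the HelmRelease via spec.serviceAccountName
# so helm-controller impersonates this identity when applying resources.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitea-deployer
  namespace: gitea
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitea-deployer
  namespace: gitea
rules:
  # Everything the chart installs lives in this namespace; no cluster-wide verbs.
  - apiGroups: ["", "apps", "networking.k8s.io"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitea-deployer
  namespace: gitea
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitea-deployer
subjects:
  - kind: ServiceAccount
    name: gitea-deployer
    namespace: gitea
```

    A bad chart or a compromised repo now blasts one namespace, not the cluster.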

    Secrets Management

    ArgoCD integrates directly with HashiCorp Vault, AWS Secrets Manager, and other external secret stores. Secrets stay encrypted at rest and in transit. For enterprise environments with existing secret management infrastructure, this is a natural fit.

    Flux uses Kubernetes Secrets by default but supports the Secrets Store CSI driver for external integrations. The setup requires more configuration, but it works. If you’re already running sealed-secrets or external-secrets-operator, Flux plugs in cleanly.

    Both handle secrets responsibly. ArgoCD’s built-in external manager support gives it an edge if you’re starting from scratch. On my homelab, I use external-secrets-operator with a simple file backend since I don’t need Vault’s complexity for a home setup — and that works equally well with both tools.
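    For concreteness, here's roughly what an external-secrets-operator resource looks like for the Gitea admin credentials. The store name and key paths are illustrative, not my actual values:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: gitea-admin
  namespace: gitea
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: homelab-store          # hypothetical SecretStore (file, Vault, etc.)
    kind: SecretStore
  target:
    name: gitea-admin-credentials  # the Kubernetes Secret ESO materializes
  data:
    - secretKey: password
      remoteRef:
        key: gitea/admin           # path in the backing store
        property: password
```

    The important property for GitOps: this manifest is safe to commit, because the secret material itself never touches the repo.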

    Security Hardening: What I Actually Configure

    Here’s the security hardening checklist I apply to every ArgoCD installation. These aren’t theoretical recommendations — they’re configurations running on my homelab and at client sites right now.

    RBAC: Principle of Least Privilege

    ArgoCD’s RBAC is defined in its ConfigMap. Here’s my production policy that restricts developers to their own projects while giving the platform team broader access:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-rbac-cm
      namespace: argocd
    data:
      policy.default: role:readonly
      policy.csv: |
        # Platform team - full access to all projects
        p, role:platform-admin, applications, *, */*, allow
        p, role:platform-admin, clusters, *, *, allow
        p, role:platform-admin, repositories, *, *, allow
        p, role:platform-admin, logs, get, */*, allow
        p, role:platform-admin, exec, create, */*, allow
    
        # Developers - can sync and view their project only
        p, role:developer, applications, get, dev/*, allow
        p, role:developer, applications, sync, dev/*, allow
        p, role:developer, applications, action/*, dev/*, allow
        p, role:developer, logs, get, dev/*, allow
    
        # Read-only for everyone else
        p, role:viewer, applications, get, */*, allow
        p, role:viewer, logs, get, */*, allow
    
        # Group bindings (map SSO groups to roles)
        g, platform-team, role:platform-admin
        g, developers, role:developer
        g, stakeholders, role:viewer
      scopes: '[groups, email]'

    The key here is policy.default: role:readonly. Anyone who authenticates but doesn’t match a group mapping gets read-only access. This is the principle of least privilege — deny by default, grant explicitly. I’ve seen too many ArgoCD installations where the default policy is role:admin because that’s what the quickstart guide uses.

    SSO Integration with OIDC

    Running ArgoCD with local accounts is a security antipattern. Here’s how I configure OIDC with Keycloak (which also runs on my TrueNAS homelab):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
    data:
      url: https://argocd.192.168.0.62.nip.io
      oidc.config: |
        name: Keycloak
        issuer: https://auth.192.168.0.62.nip.io/realms/homelab
        clientID: argocd
        clientSecret: $oidc.keycloak.clientSecret
        requestedScopes:
          - openid
          - profile
          - email
          - groups
        requestedIDTokenClaims:
          groups:
            essential: true
      # Disable local admin account after SSO is verified
      admin.enabled: "false"
      # Require accounts to use SSO
      accounts.deployer: apiKey

    The critical line is admin.enabled: "false". Once SSO is working, disable the local admin account. Every authentication should flow through your identity provider where you have MFA enforcement, session management, and audit logs. The only exception is the deployer service account that uses API keys for CI pipelines — and that account should have minimal permissions scoped to specific projects.

    Audit Logging and Monitoring

    ArgoCD emits audit events for every significant action — sync, rollback, app creation, RBAC changes. Here’s how I ship these to my monitoring stack:

    # argocd-notifications ConfigMap snippet
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-notifications-cm
      namespace: argocd
    data:
      trigger.on-sync-status-unknown: |
        - when: app.status.sync.status == 'Unknown'
          send: [slack-alert]
      trigger.on-health-degraded: |
        - when: app.status.health.status == 'Degraded'
          send: [slack-alert, webhook-pagerduty]
      trigger.on-sync-succeeded: |
        - when: app.status.operationState.phase in ['Succeeded']
          send: [slack-deploy-log]
      template.slack-alert: |
        message: |
          ⚠️ {{.app.metadata.name}} is {{.app.status.health.status}}
          Sync: {{.app.status.sync.status}}
          Revision: {{.app.status.sync.revision | trunc 8}}
          Cluster: {{.app.spec.destination.server}}
      template.slack-deploy-log: |
        message: |
          ✅ {{.app.metadata.name}} synced successfully
          Revision: {{.app.status.sync.revision | trunc 8}}
          Author: {{(call .repo.GetCommitMetadata .app.status.sync.revision).Author}}

    Every sync event gets logged to Slack with the commit author — so you always know who deployed what and when. The on-health-degraded trigger fires when something breaks post-deploy, which is often more useful than the sync notification itself. I also forward ArgoCD’s server logs to Loki for long-term retention and compliance auditing.

    For Flux, audit logging is handled differently. Since Flux uses Kubernetes events natively, you can capture everything through the Kubernetes audit log. This is architecturally cleaner — one audit system instead of two — but requires your cluster’s audit policy to be configured correctly, which is another thing most tutorials skip.
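    A minimal audit policy along these lines captures every write to Flux's CRDs with full request bodies while dropping read-only noise. The file path and wiring are illustrative (on k3s the policy file is passed through a kube-apiserver argument; check your distribution's docs):

```yaml
# audit-policy.yaml -- sketch; wire it in via your apiserver's
# audit-policy-file / audit-log-path flags.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log all mutations to Flux resources with request + response bodies.
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: source.toolkit.fluxcd.io
      - group: kustomize.toolkit.fluxcd.io
      - group: helm.toolkit.fluxcd.io
  # Drop the high-volume read traffic from the audit stream.
  - level: None
    verbs: ["get", "list", "watch"]
```

    With that in place, "who changed this HelmRelease and when" is answerable from one audit stream, the same one the rest of your cluster uses.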

    Why I Chose ArgoCD for My Homelab

    After running both tools extensively, I standardized on ArgoCD for my personal infrastructure. Here’s my reasoning, and I’ll be honest about the tradeoffs:

    The UI sealed it. When I’m debugging a failed deployment at 11 PM, I don’t want to be running kubectl get events --sort-by=.lastTimestamp and piecing together what happened. ArgoCD’s dashboard shows me the entire resource tree, the diff between desired and live state, and the logs from the failing pod — all in one view. For a homelab where I’m the only operator, this visual feedback loop saves me hours every month.

    Gitea webhook integration is seamless. I push to Gitea, ArgoCD’s webhook receiver picks it up, and the sync starts within 2 seconds. With Flux, I’d be waiting up to 5 minutes for the next reconciliation cycle (or configuring additional webhook infrastructure). For a homelab where I’m iterating rapidly on configurations, that latency is frustrating.

    ApplicationSet is a game-changer for homelab sprawl. I run 15+ services on my cluster. With ApplicationSet, I define a pattern once and new services get picked up automatically when I add a directory to my manifests repo. No manual Application creation per service.
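    The pattern looks roughly like this: a git directory generator that stamps out one Application per `apps/*` directory. Treat it as a sketch of the shape rather than my exact manifest:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: homelab
  namespace: argocd
spec:
  generators:
    # One Application per directory under apps/ -- committing apps/immich/
    # is all it takes to onboard a new service.
    - git:
        repoURL: https://gitea.192.168.0.62.nip.io/deployer/homelab-manifests.git
        revision: main
        directories:
          - path: apps/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: homelab-apps
      source:
        repoURL: https://gitea.192.168.0.62.nip.io/deployer/homelab-manifests.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```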

    The tradeoffs I accept:

    • Higher resource usage. ArgoCD uses ~500MB RAM on my cluster. Flux would use ~200MB. On a homelab with 32GB RAM, this doesn’t matter. On a resource-constrained edge device, it would.
    • Another RBAC system to manage. Since I’m the only user, ArgoCD’s RBAC is overkill. But the SSO integration means I can share dashboards with my study group without giving them kubectl access.
    • Single point of failure. If ArgoCD goes down, no deployments happen. Flux’s distributed model is more resilient. I mitigate this with ArgoCD HA mode (3 replicas) and a break-glass procedure for direct kubectl apply.
    • Image update automation is weaker. Flux’s image-reflector-controller is more mature than ArgoCD Image Updater. I work around this by triggering updates through CI commits to my manifests repo instead of automatic image tag detection.
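    The CI-commit workaround in the last bullet is just a tag rewrite plus a commit. A sketch of the rewrite step; the values-file layout and `tag:` key are assumptions about the chart, so adapt the pattern to yours:

```shell
# bump_image_tag: rewrite the pinned image tag in a Helm values file.
# Assumes the file pins the image with a `tag:` key, e.g. tag: "1.20.0".
bump_image_tag() {
  local file="$1" tag="$2"
  # Replace whatever tag is currently pinned with the new one.
  sed -i.bak -E "s|^([[:space:]]*tag:).*|\\1 \"${tag}\"|" "$file" && rm -f "${file}.bak"
}

# In CI this is followed by something like:
#   git commit -am "chore: bump gitea to ${TAG}" && git push
# and ArgoCD syncs the change like any other commit.
```

    It's cruder than Flux's image-reflector, but the upgrade is now a reviewable commit in Git history rather than an invisible controller action.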

    Vulnerability Scanning and Supply Chain Security

    Neither tool ships its own vulnerability scanner. ArgoCD can gate what reaches production through resource hooks and admission policies such as OPA/Gatekeeper, flagging insecure configurations before a sync completes. Flux likewise integrates with Trivy and Polaris to get the same results. In both cases, the actual scanning comes from external tooling.

    Honestly, you should be running scanning in your CI pipeline regardless of which tool you pick. Don’t rely on your GitOps tool as your only security gate. I run Trivy in my Gitea Actions pipeline before manifests even reach the GitOps repo, and then ArgoCD’s resource hooks run a second pass with OPA/Gatekeeper policies. Defense in depth — the same principle that applies to every other security domain.
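    For reference, a Trivy config-scan step in a Gitea Actions workflow can look roughly like this. Gitea Actions reuses GitHub Actions workflow syntax; the action reference and inputs below are the upstream trivy-action's, and you should pin a released version rather than `master` in real use:

```yaml
# .gitea/workflows/scan.yaml -- illustrative sketch
name: manifest-scan
on:
  push:
    branches: [main]
jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan manifests for misconfigurations
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: config        # checks Kubernetes YAML / Helm for insecure settings
          scan-ref: apps/
          exit-code: '1'           # fail the pipeline on findings
          severity: HIGH,CRITICAL
```

    A non-zero exit blocks the push from ever becoming a deployable commit, which is the whole point of scanning before the GitOps repo rather than after.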

    Production Reality: What I’ve Seen

    Enterprise Deployments

    At a Fortune 500 client managing hundreds of microservices, ArgoCD’s multi-cluster dashboard was the thing that sold the platform team. They could see deployment status across regions at a glance and drill into failures fast. The operations team loved it — they went from 45-minute deployment debugging sessions to 5-minute ones.

    On a smaller team running Flux, the Kubernetes-native approach meant less context-switching. Everything was just more CRDs and kubectl. Engineers who lived in the terminal preferred it. Their deployment pipeline was faster to set up and required less maintenance.

    Rollback and Disaster Recovery

    One common mistake: nobody tests rollback until they need it in production. ArgoCD’s rollback is more intuitive — click a button in the UI or run argocd app rollback <app-name>. Flux rollback requires more manual steps: you need to revert the Git commit, push, and wait for reconciliation. For complex scenarios involving multiple dependent services, I’ve scripted Flux rollbacks with a shell wrapper that handles the Git operations.
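    The wrapper amounts to a `git revert` plus an immediate reconcile. Here's a simplified sketch of the idea; resource names match the earlier examples, and `DRY_RUN` is an illustrative guard I've added so the flow can be rehearsed without touching the repo or cluster:

```shell
# run: execute a command, or just print it when DRY_RUN=1.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# flux_rollback: revert the last manifests commit and reconcile immediately.
# Assumes you're in a clone of the manifests repo with the flux CLI on PATH.
flux_rollback() {
  local name="${1:?usage: flux_rollback <helmrelease-name>}"
  # 1. Undo the bad deploy in Git first; Git stays the source of truth.
  run git revert --no-edit HEAD
  run git push origin HEAD
  # 2. Skip the polling interval and reconcile now.
  run flux reconcile source git homelab-manifests
  run flux reconcile helmrelease "$name" --with-source
}
```

    Reverting the commit rather than force-pushing keeps the bad deploy and its rollback both visible in history, which your auditors (and future you) will appreciate.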

    Test your rollback procedures in staging monthly. A failed rollback in production turns a bad deploy into extended downtime. I have a quarterly “chaos day” on my homelab where I intentionally break deployments and practice recovery — it’s caught configuration issues that would have been painful to discover during a real incident.

    Which One Should You Pick?

    Here’s my take after running both in production for years:

    Choose ArgoCD if: Your team is newer to GitOps, you need visual oversight, you’re managing multiple clusters from one control plane, you want built-in secret manager integrations, or you need to give non-kubectl stakeholders visibility into deployments.

    Choose Flux if: Your team is comfortable with Kubernetes internals, you want a lighter footprint, you prefer native CRDs over a separate UI layer, you need robust image automation, or you’re running resource-constrained clusters where every megabyte of RAM matters.

    Both tools are actively maintained, both have strong CNCF backing, and both will handle production workloads. The “wrong” choice is overthinking it — pick one and invest in your security posture around it. The security hardening practices I described above apply regardless of which tool you choose. GitOps is only as secure as the weakest link in your pipeline.

    If you want to see how I set up ArgoCD with Gitea for a self-hosted pipeline, I wrote a full walkthrough that covers the security configuration in detail. And if you’re hardening your Kubernetes cluster before deploying either tool, start with my Kubernetes security checklist — your GitOps tool inherits whatever security posture your cluster has.

