
GitOps Deployment with ArgoCD

How ArgoCD deploys FhirHub across environments using GitOps -- ApplicationSets for multi-environment management, sync waves for dependency ordering, and continuous reconciliation.

By David Le -- Part 17 of the FhirHub Series

Helm charts describe what should be running in Kubernetes. But who applies them? In a traditional setup, a CI pipeline runs helm upgrade at the end of a build. That works, but it means your pipeline has cluster credentials, deployments only happen when the pipeline runs, and there's no continuous reconciliation if someone manually edits a resource.

GitOps flips the model. Git is the single source of truth. An operator running inside the cluster watches the repository and syncs state continuously. This post covers how ArgoCD deploys FhirHub across dev, staging, and production environments.

Why ArgoCD?

ArgoCD vs. Flux vs. Manual kubectl

| Tool          | UI               | Multi-Cluster | App of Apps   | Sync Waves |
|---------------|------------------|---------------|---------------|------------|
| kubectl apply | None             | Manual        | No            | No         |
| Flux          | Optional (Weave) | Yes           | Kustomization | No         |
| ArgoCD        | Built-in         | Yes           | Yes           | Yes        |

ArgoCD won because of three features:

  1. ApplicationSet -- One manifest generates Applications for dev, staging, and prod. Adding an environment means adding four lines to a YAML list.
  2. Sync waves -- PostgreSQL deploys before HAPI FHIR, which deploys before the API. Order matters, and ArgoCD handles it with annotations.
  3. The UI -- Seeing the live state of every Kubernetes resource in a tree view is invaluable for debugging. Flux doesn't have this built-in.

Why Not Flux?

Flux is technically capable of everything ArgoCD does. It's lighter weight and follows a more "Kubernetes-native" approach with CRDs for each concern (GitRepository, Kustomization, HelmRelease). For FhirHub, ArgoCD's advantages were:

  • Visual debugging -- The ArgoCD UI shows the entire resource tree (Application → Deployment → ReplicaSet → Pod). When a pod fails, you can see the events and logs without leaving the browser. Flux requires Weave GitOps (separate install) or kubectl.
  • Sync waves -- ArgoCD supports argocd.argoproj.io/sync-wave annotations that control deployment order. Flux uses Kustomization dependencies, which work differently and require more configuration.
  • App of Apps pattern -- A root ArgoCD Application that manages child Applications. This makes multi-environment deployment declarative at the ArgoCD level.

Why Not Pipeline-Driven Deployment?

| Deployment Model        | Continuous Sync   | Drift Detection | Cluster Creds in CI  |
|-------------------------|-------------------|-----------------|----------------------|
| Pipeline (helm upgrade) | No (only on push) | No              | Yes (security risk)  |
| ArgoCD pull-based       | Yes (every 3 min) | Yes (self-heal) | No (runs in cluster) |

Pipeline-driven deployment means your CI system needs cluster credentials. If your GitHub Actions secrets are compromised, an attacker has direct access to your Kubernetes cluster. ArgoCD runs inside the cluster and pulls from Git -- no external system has cluster admin access.
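
For contrast, a pipeline-driven deployment might look roughly like this hypothetical GitHub Actions job. FhirHub doesn't use this approach; the secret name and step layout are made up for illustration:

# Hypothetical pipeline-driven deploy job (not how FhirHub deploys) --
# the CI runner holds cluster credentials in a repository secret.
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Helm upgrade against the cluster
      run: |
        echo "${{ secrets.KUBECONFIG_B64 }}" | base64 -d > kubeconfig
        KUBECONFIG=kubeconfig helm upgrade --install fhirhub ./helm/fhirhub \
          --namespace fhirhub-prod \
          -f helm/fhirhub/values-prod.yaml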

Multi-Environment with ApplicationSet

# argocd/applicationset.yaml
spec:
  generators:
    - list:
        elements:
          - env: dev
            namespace: fhirhub-dev
            valuesFile: values-dev.yaml
            targetRevision: main
          - env: staging
            namespace: fhirhub-staging
            valuesFile: values-staging.yaml
            targetRevision: main
          - env: prod
            namespace: fhirhub-prod
            valuesFile: values-prod.yaml
            targetRevision: release

Each environment gets its own namespace, values file, and git branch. Dev and staging track main. Production tracks the release branch. Promoting to production means merging main into release.
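
For reference, here's a sketch of the template half of the same ApplicationSet that consumes those list elements. The chart path (helm/fhirhub) and project name are assumptions for illustration, not copied from the repo:

# argocd/applicationset.yaml (continued) -- template sketch; chart path and project assumed
  template:
    metadata:
      name: "fhirhub-{{env}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/Le-Portfolio/davidle-portfolio-web/FhirHub.git
        targetRevision: "{{targetRevision}}"
        path: helm/fhirhub
        helm:
          valueFiles:
            - "{{valuesFile}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{namespace}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true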

Why List Generator vs. Git Generator?

| Generator     | Source of Truth     | Dynamic Discovery | Simplicity                    |
|---------------|---------------------|-------------------|-------------------------------|
| List          | Hardcoded in YAML   | No                | Simple and explicit           |
| Git directory | Directory structure | Yes               | Complex setup                 |
| Cluster       | Cluster labels      | Yes               | Requires pre-labeled clusters |
| Pull request  | Open PRs            | Yes               | Preview environments          |

The list generator is explicit -- you see every environment in one place. Git directory generators are useful when you have dozens of environments that follow a naming convention, but FhirHub has three. Explicitness is more valuable than dynamism here.

App of Apps Pattern

A root Application manages all child Applications:

# argocd/app-of-apps.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: fhirhub-root
  namespace: argocd
spec:
  source:
    repoURL: https://github.com/Le-Portfolio/davidle-portfolio-web/FhirHub.git
    path: argocd
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Applying this single Application causes ArgoCD to discover and manage the ApplicationSet, which generates Applications for dev, staging, and prod. One kubectl apply bootstraps the entire deployment.

Checkpoint: Verify ArgoCD Installation

Before continuing, verify ArgoCD is running in your cluster:

kubectl get pods -n argocd

Expected output:

  • Should show argocd-server, argocd-repo-server, argocd-application-controller, and argocd-redis all in Running state

kubectl get svc -n argocd

Expected output:

  • Should show the argocd-server service

Retrieve the admin password and access the UI:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d
kubectl port-forward svc/argocd-server -n argocd 8443:443

Expected output:

  • Open https://localhost:8443 in your browser -- you should see the ArgoCD login page (accept the self-signed certificate warning). Log in with username admin and the password from the previous command

If something went wrong:

  • If the argocd namespace doesn't exist, ArgoCD hasn't been installed. Run make k8s-create or install ArgoCD manually: kubectl create namespace argocd && kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
  • If the secret doesn't exist, ArgoCD may still be initializing -- wait 30 seconds and retry
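
You can also authenticate from the terminal with the argocd CLI (assuming it's installed) against the same port-forward; the --insecure flag is needed because the bundled certificate is self-signed:

argocd login localhost:8443 --username admin --password <password-from-previous-command> --insecure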

Sync Policy

syncPolicy:
  automated:
    prune: true
    selfHeal: true
  retry:
    limit: 5
    backoff:
      duration: 5s
      factor: 2
      maxDuration: 3m

  • prune: true -- Deleting a resource from Git deletes it from the cluster. Without this, removed Services or ConfigMaps would linger forever.
  • selfHeal: true -- Manual kubectl edits are reverted to match Git. This prevents configuration drift from emergency hotfixes that never make it back to Git.
  • Retry with exponential backoff -- Transient failures (e.g., CRD not yet installed, temporary API server timeout) resolve themselves. The backoff prevents hammering a failing resource.

Why Enable Self-Heal?

| Self-Heal Setting | Manual Edits                  | Drift          | Audit    |
|-------------------|-------------------------------|----------------|----------|
| Disabled          | Persist until next sync       | Can accumulate | Partial  |
| Enabled           | Reverted within sync interval | Eliminated     | Complete |

Self-heal can be controversial. Some teams want the ability to hotfix in production via kubectl. But that creates drift -- the cluster state no longer matches Git, and the next deployment might overwrite the fix. With self-heal enabled, all changes go through Git. The audit trail is complete, and there are no surprises during the next sync.

If you need to hotfix, commit to the release branch. ArgoCD syncs within 3 minutes.
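
As a sketch, the hotfix flow stays entirely in Git -- commit to release, then optionally trigger an immediate sync with the argocd CLI instead of waiting for the polling interval (the commit message is illustrative):

# Hotfix goes through Git, never kubectl edit
git checkout release
git commit -am "hotfix: bump API memory limit"
git push origin release

# Optional: sync immediately instead of waiting up to 3 minutes
argocd app sync fhirhub-prod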

Sync Waves

ArgoCD sync waves control deployment order via annotations:

metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"  # Deploy first

FhirHub uses these waves:

| Wave | Resources                    | Reason                                              |
|------|------------------------------|-----------------------------------------------------|
| 0    | Secrets, ConfigMaps          | Configuration must exist before pods reference them |
| 1    | PostgreSQL StatefulSets      | Databases must be ready before applications connect |
| 2    | HAPI FHIR, Keycloak          | Infrastructure services depend on databases         |
| 3    | FhirHub API                  | API depends on HAPI FHIR and Keycloak               |
| 4    | FhirHub Frontend             | Frontend depends on API and Keycloak                |
| 5    | Monitoring (ServiceMonitors) | Scraping starts after services are running          |

Without sync waves, ArgoCD applies everything in parallel. PostgreSQL and the API would start simultaneously, and the API would crash-loop until PostgreSQL is ready. Sync waves eliminate that startup race.
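
As a sketch, here's how a couple of the waves from the table map onto annotations in the rendered manifests (the resource names are assumptions; only the annotation values matter):

# Sketch: wave 1 (database) and wave 3 (API) -- resource names assumed
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: fhirhub-postgresql
  annotations:
    argocd.argoproj.io/sync-wave: "1"   # databases before applications
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fhirhub-api
  annotations:
    argocd.argoproj.io/sync-wave: "3"   # API after HAPI FHIR and Keycloak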

Checkpoint: Verify ApplicationSet and Sync

Before continuing, verify the ArgoCD applications are created and syncing:

kubectl apply -f argocd/app-of-apps.yaml
kubectl apply -f argocd/applicationset.yaml
kubectl get applications -n argocd

Expected output:

  • Should show fhirhub-dev, fhirhub-staging, and fhirhub-prod applications

kubectl get applications -n argocd fhirhub-dev -o jsonpath='{.status.sync.status}'

Expected output:

  • Should print Synced, meaning the cluster state matches Git

kubectl get applications -n argocd fhirhub-dev -o jsonpath='{.status.health.status}'

Expected output:

  • Should print Healthy, meaning all resources are running correctly

In the ArgoCD UI (https://localhost:8443), click fhirhub-dev -- the resource tree should show Deployments, StatefulSets, Services, and Ingress all with green status indicators.

If something went wrong:

  • If applications show OutOfSync, check for error details: kubectl get applications -n argocd fhirhub-dev -o jsonpath='{.status.conditions}'
  • If health is Degraded, one or more pods aren't ready -- check kubectl get pods -n fhirhub-dev and investigate failing pods
  • If applications don't appear, verify the argocd/applicationset.yaml references the correct Git repository URL and path

Environment Promotion Flow

Developer pushes to main
  │
  ├── GitHub Actions builds + pushes images
  │   └── Updates helm/fhirhub/values.yaml with new SHA tag
  │
  ├── ArgoCD syncs dev (watches main, auto-sync)
  │
  ├── ArgoCD syncs staging (watches main, auto-sync)
  │
  └── Team merges main → release
      └── ArgoCD syncs prod (watches release, auto-sync)

Dev and staging get every commit automatically. Production only advances when someone explicitly merges to the release branch. This gives the team a manual gate for production while keeping dev/staging fully automated.
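
Promotion itself is just a merge, something like:

# Promote what's already verified in dev and staging
git checkout release
git merge main
git push origin release   # ArgoCD detects the new commit and syncs fhirhub-prod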

What's Next

In Part 18, we'll add observability to the deployed cluster -- Prometheus metrics from the .NET API, Grafana dashboards for request rates and latency, Loki for centralized log aggregation, and alerting rules that fire when services go down.


Find the source code on GitHub · Connect on LinkedIn
