
Kubernetes Packaging with Helm Charts

How FhirHub is packaged for Kubernetes using Helm -- an umbrella chart with sub-charts for each service, a library chart to eliminate duplication, and per-environment values files.


By David Le -- Part 16 of the FhirHub Series

Docker Compose works for local development and single-server deployments, but Kubernetes is where healthcare applications need to run in production. Kubernetes provides auto-scaling, self-healing, rolling updates, and the operational guarantees that clinical systems require.

This post covers how I packaged FhirHub for Kubernetes using Helm charts -- an umbrella chart with sub-charts for each service, a library chart that eliminates template duplication, and per-environment values files that cleanly separate configuration concerns.

Why Helm?

Helm vs. Kustomize vs. Raw Manifests vs. CDK8s

| Tool | Templating | Packaging | Multi-Env | Ecosystem |
| --- | --- | --- | --- | --- |
| Raw YAML | None | None | Copy-paste | N/A |
| Kustomize | Patches/overlays | Built into kubectl | Good | Growing |
| Helm | Go templates | Charts + repos | Values files | Mature |
| CDK8s | Real programming languages | Synthesized YAML | Code-level | Newer |

Helm won for three reasons:

  1. Umbrella charts -- A single helm install deploys the entire stack. Sub-charts for each service keep things modular.
  2. Values files per environment -- values-dev.yaml, values-staging.yaml, values-prod.yaml cleanly separate environment concerns without patching.
  3. Library charts -- Shared templates avoid copy-pasting Deployment, Service, and Ingress boilerplate across five sub-charts.

Kustomize is simpler for basic overlays, but FhirHub has 6 services with different configurations. Values-based templating is more natural here than patch-based overlays. Raw YAML requires duplicating every manifest per environment. CDK8s is powerful but adds a compilation step and a new language to learn -- overkill when Go templates suffice.

Chart Structure

helm/
  fhirhub-lib/                    # Library chart (shared templates)
    templates/
      _deployment.tpl             # Reusable Deployment template
      _service.tpl                # Reusable Service template
      _ingress.tpl                # Reusable Ingress template
      _hpa.tpl                    # HorizontalPodAutoscaler
      _pdb.tpl                    # PodDisruptionBudget
      _servicemonitor.tpl         # Prometheus ServiceMonitor
      _helpers.tpl                # Labels, names, selectors
  fhirhub/                        # Umbrella chart
    Chart.yaml                    # Dependencies on all sub-charts
    values.yaml                   # Default values
    values-dev.yaml               # Dev overrides
    values-staging.yaml           # Staging overrides
    values-prod.yaml              # Production overrides
    charts/
      fhirhub-api/                # 12 templates
      fhirhub-frontend/           # 11 templates
      hapi-fhir/                  # 8 templates
      keycloak/                   # 8 templates (StatefulSet)
      postgresql/                 # 7 templates (StatefulSet, reused twice)

Why Umbrella Chart vs. Flat Charts?

| Pattern | Single Install | Shared Config | Dependency Order |
| --- | --- | --- | --- |
| Flat charts (one per service) | No (5 installs) | Manual | Manual |
| Umbrella chart | Yes (one install) | Via global values | Helm manages |
| Monolithic chart | Yes | All in one | Internal |

The umbrella chart gives a single helm install fhirhub ./helm/fhirhub while keeping each service's templates isolated in its own sub-chart. Adding a new service means adding a new sub-chart directory and a dependency line -- not refactoring a monolith.
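
As a sketch of what that looks like -- names come from the tree above, but treat the snippet as illustrative rather than the literal file:

# helm/fhirhub/Chart.yaml (sketch)
apiVersion: v2
name: fhirhub
version: 0.1.0
dependencies:
  - name: fhirhub-api
    version: "0.1.0"
    repository: "file://charts/fhirhub-api"
  - name: fhirhub-frontend
    version: "0.1.0"
    repository: "file://charts/fhirhub-frontend"
  - name: hapi-fhir
    version: "0.1.0"
    repository: "file://charts/hapi-fhir"
  - name: keycloak
    version: "0.1.0"
    repository: "file://charts/keycloak"

(The two postgresql dependencies appear in the aliasing section below.)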

The Library Chart

The library chart (fhirhub-lib) provides reusable named templates -- Go template define blocks -- that sub-charts call with their own values. This avoids duplicating Deployment, Service, and Ingress YAML across 5 sub-charts.

Why a Library Chart vs. Copying Templates?

| Approach | Duplication | Update Effort | Consistency |
| --- | --- | --- | --- |
| Copy templates per chart | 5x duplication | Change in 5 places | Drift risk |
| Shared _helpers.tpl | Less, but limited | Moderate | Better |
| Library chart | None | Change once | Guaranteed |

Available templates from fhirhub-lib:

  • fhirhub-lib.deployment -- Standard Deployment with configurable probes, resources, env vars, volumes
  • fhirhub-lib.service -- ClusterIP Service with configurable ports
  • fhirhub-lib.ingress -- Nginx Ingress with TLS and annotations
  • fhirhub-lib.hpa -- HorizontalPodAutoscaler with CPU/memory targets
  • fhirhub-lib.pdb -- PodDisruptionBudget with minAvailable
  • fhirhub-lib.servicemonitor -- Prometheus ServiceMonitor for metrics scraping
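
A minimal sketch of what one of these defines looks like -- helper and value names here are illustrative, not the actual fhirhub-lib source:

# fhirhub-lib/templates/_service.tpl (sketch)
{{- define "fhirhub-lib.service" -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "fhirhub-lib.fullname" . }}
  labels:
    {{- include "fhirhub-lib.labels" . | nindent 4 }}
spec:
  type: ClusterIP
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "fhirhub-lib.selectorLabels" . | nindent 4 }}
{{- end -}}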

Each sub-chart includes the library as a dependency:

# charts/fhirhub-api/Chart.yaml
dependencies:
  - name: fhirhub-lib
    version: "0.1.0"
    repository: "file://../../fhirhub-lib"

Checkpoint: Validate Chart Dependencies

Before continuing, verify all Helm chart dependencies resolve:

helm dependency list helm/fhirhub

Expected output:

  • Should list all sub-charts (fhirhub-api, fhirhub-frontend, hapi-fhir, keycloak, and both postgresql aliases) with status ok or unpacked

helm dependency update helm/fhirhub

Expected output:

  • Should download or link all dependencies without errors. Look for Saving X charts and Deleting outdated charts

helm lint helm/fhirhub

Expected output:

  • Should complete with 0 chart(s) failed. Warnings are acceptable (e.g., missing icon), but errors indicate broken templates

If something went wrong:

  • If dependencies show missing, run helm dependency build helm/fhirhub to rebuild the charts/ directory from the lock file
  • If lint fails with template errors, check that all sub-chart Chart.yaml files reference the correct library chart version and path

PostgreSQL Reuse via Aliases

The most interesting pattern is how PostgreSQL is deployed twice -- once for HAPI FHIR clinical data, once for Keycloak auth data -- using the same chart:

# helm/fhirhub/Chart.yaml
dependencies:
  - name: postgresql
    alias: hapi-postgresql
    version: "0.1.0"
    repository: "file://charts/postgresql"

  - name: postgresql
    alias: keycloak-postgresql
    version: "0.1.0"
    repository: "file://charts/postgresql"

Each alias gets its own values:

# values.yaml
hapi-postgresql:
  database:
    name: hapi
    username: hapi
    password: changeme-hapi-db

keycloak-postgresql:
  database:
    name: keycloak
    username: keycloak
    password: changeme-keycloak-db

Same chart, two independent PostgreSQL instances. This mirrors the Docker Compose architecture from Post 10 -- separate databases for clinical and auth data.
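
One Go-template wrinkle: hyphenated alias keys can't be reached with dot notation from the umbrella chart's own templates, so cross-references go through index. A hypothetical example:

# Hypothetical: reading an aliased sub-chart's values from an umbrella-chart template
database: {{ index .Values "hapi-postgresql" "database" "name" }}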

Why Separate Databases?

| Pattern | Isolation | Backup Granularity | Failure Blast Radius |
| --- | --- | --- | --- |
| Shared database, separate schemas | Low | Whole DB | Auth failure breaks clinical |
| Separate databases, one server | Medium | Per DB | Server failure breaks both |
| Separate database instances | High | Per instance | Independent failures |

FhirHub uses separate instances. An auth database corruption shouldn't take down clinical data, and vice versa. The overhead of running two PostgreSQL StatefulSets is minimal compared to the isolation benefit.

Standard Kubernetes Labels

Every resource uses the standard app.kubernetes.io label set:

labels:
  app.kubernetes.io/name: fhirhub-api
  app.kubernetes.io/instance: {{ .Release.Name }}
  app.kubernetes.io/version: {{ .Chart.AppVersion }}
  app.kubernetes.io/component: api
  app.kubernetes.io/part-of: fhirhub
  app.kubernetes.io/managed-by: Helm

These labels enable kubectl get pods -l app.kubernetes.io/part-of=fhirhub to find all FhirHub pods, and tools like Prometheus, ArgoCD, and Grafana use them for service discovery and grouping.
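
For example (the -L flag prints the component label as an extra column; the namespace matches the dev deployment later in this post):

kubectl get pods -l app.kubernetes.io/part-of=fhirhub \
  -L app.kubernetes.io/component -n fhirhub-dev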

Scaling Configuration

| Service | Min Pods | Max Pods | CPU Target | PDB MinAvailable |
| --- | --- | --- | --- | --- |
| FhirHub API | 2 | 10 | 70% | 1 |
| FhirHub Frontend | 2 | 10 | 70% | 1 |
| HAPI FHIR | 1 | 5 | 80% | -- |
| Keycloak | 1 (StatefulSet) | -- | -- | -- |
| PostgreSQL | 1 (StatefulSet) | -- | -- | -- |

The API and frontend scale horizontally. HAPI FHIR scales more conservatively because it's a Java application with higher memory overhead. Keycloak and PostgreSQL run as StatefulSets because they maintain state.

Why StatefulSet for Keycloak and PostgreSQL?

| Concern | Deployment | StatefulSet |
| --- | --- | --- |
| Stable network identity | No | Yes (pod-0, pod-1) |
| Persistent storage | PVC per replica is awkward | VolumeClaimTemplates |
| Ordered startup/shutdown | No guarantees | Sequential by ordinal |
| Scaling databases | Not suitable | Designed for this |

StatefulSets provide stable pod names and ordered operations. PostgreSQL needs postgresql-0 to always be the primary. Keycloak needs predictable DNS for clustering. With a Deployment, replicas have no stable identity, so keeping each pod paired with its PersistentVolume across rescheduling is awkward at best.
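
A minimal sketch of the stable-identity pieces -- image, storage size, and most field values here are illustrative assumptions, not FhirHub's actual chart:

# Sketch: stable DNS identity + per-replica storage (values illustrative)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql
spec:
  serviceName: postgresql        # headless Service -> stable DNS like postgresql-0.postgresql
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: postgresql
  template:
    metadata:
      labels:
        app.kubernetes.io/name: postgresql
    spec:
      containers:
        - name: postgresql
          image: postgres:16     # illustrative; the real chart pins its own image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PVC stamped out per ordinal: data-postgresql-0, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi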

PodDisruptionBudgets ensure at least one pod stays running during node drains and cluster upgrades. Without a PDB, a cluster upgrade could terminate all API pods simultaneously.
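
The PDB itself is tiny. A sketch of what the rendered manifest might look like for the API, using the minAvailable of 1 from the table above:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: fhirhub-api
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: fhirhub-api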

Per-Environment Values

Three values files customize the deployment for each stage:

values-dev.yaml -- Single replicas, HPA disabled, ingress with no TLS, debug logging:

fhirhub-api:
  replicaCount: 1
  autoscaling:
    enabled: false
  ingress:
    enabled: true
    tls: []

values-staging.yaml -- Two replicas, HPA enabled, TLS with cert-manager, info logging:

fhirhub-api:
  replicaCount: 2
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 5
  ingress:
    tls:
      - secretName: fhirhub-staging-tls
        hosts:
          - staging.fhirhub.example.com

values-prod.yaml -- Three replicas, larger resource limits, rate limiting, warn logging:

fhirhub-api:
  replicaCount: 3
  autoscaling:
    enabled: true
    minReplicas: 3
    maxReplicas: 10
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: "2"
      memory: 1Gi

The base values.yaml contains sane defaults. Environment files only override what's different. This keeps the diff between environments small and auditable.
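
Installing then differs only by the values flag. A sketch -- release and namespace names are illustrative (fhirhub-dev matches the Kind checkpoint below):

# Dev
helm upgrade --install fhirhub helm/fhirhub \
  -f helm/fhirhub/values-dev.yaml -n fhirhub-dev --create-namespace

# Production
helm upgrade --install fhirhub helm/fhirhub \
  -f helm/fhirhub/values-prod.yaml -n fhirhub-prod --create-namespace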

Checkpoint: Template Rendering

Before continuing, verify the Helm templates render correctly for each environment:

helm template fhirhub helm/fhirhub -f helm/fhirhub/values-dev.yaml 2>&1 | head -5

Expected output:

  • Should render valid YAML (starting with --- and apiVersion:). Any Error: messages indicate template issues

helm template fhirhub helm/fhirhub -f helm/fhirhub/values-dev.yaml | grep 'replicas:' | sort | uniq -c

Expected output:

  • Dev should show all replicas: 1 since single-replica is the dev default

helm template fhirhub helm/fhirhub -f helm/fhirhub/values-prod.yaml | grep 'replicas:' | sort | uniq -c

Expected output:

  • Prod should show higher replica counts (e.g., replicas: 2 and replicas: 3), confirming the per-environment values override correctly

helm template fhirhub helm/fhirhub -f helm/fhirhub/values-dev.yaml | grep 'kind: HorizontalPodAutoscaler' | wc -l

Expected output:

  • Should be 0 for dev (HPA disabled). If it's non-zero, check that autoscaling.enabled: false is set in values-dev.yaml

If something went wrong:

  • If replica counts don't match expectations, check that the values files use the correct sub-chart key names (e.g., fhirhub-api: not api:)

Ingress Configuration

All external traffic routes through nginx-ingress:

| Path | Service | Notes |
| --- | --- | --- |
| / | fhirhub-frontend | Default route |
| /api/ | fhirhub-api | Rewrite to strip /api/ prefix |
| auth.* | keycloak | Separate subdomain |

TLS is handled by cert-manager with Let's Encrypt in staging and production. Dev uses plain HTTP for simplicity.
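
The /api/ strip relies on nginx-ingress's rewrite-target annotation with a capture group. A sketch of the relevant rule -- host, backend port, and regex are illustrative assumptions:

# Sketch: rewrite /api/<rest> to /<rest> before it reaches the API pods
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: fhirhub.example.com
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: fhirhub-api
                port:
                  number: 8080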

Checkpoint: Deploy to Local Kind Cluster

Before continuing, verify the charts deploy to a local Kubernetes cluster:

make k8s-create

Expected output:

  • Creates a three-node Kind cluster. Alternatively, run ./scripts/setup-local-k8s.sh

kubectl get nodes

Expected output:

  • Should show 3 nodes (1 control-plane, 2 workers) all with status Ready

make k8s-deploy

Expected output:

  • Helm install completes without errors

kubectl get pods -n fhirhub-dev

Expected output:

  • All pods should reach Running status with Ready condition (e.g., 1/1). This may take a few minutes as images pull and services start

kubectl get svc -n fhirhub-dev

Expected output:

  • Should list services for api, frontend, hapi-fhir, keycloak, and both postgresql instances

kubectl get ingress -n fhirhub-dev

Expected output:

  • Should show ingress rules routing traffic to the frontend and API

If something went wrong:

  • If pods are stuck in Pending: check for resource issues with kubectl describe pod -n fhirhub-dev <pod-name> and look at the Events section
  • If pods are in CrashLoopBackOff: check logs with kubectl logs -n fhirhub-dev <pod-name> -- the most common cause is a database not being ready yet
  • If images aren't found: verify make k8s-deploy loaded the local images into the Kind cluster

What's Next

In Part 17, we'll deploy these Helm charts using ArgoCD -- a GitOps operator that watches your Git repository and automatically syncs cluster state. We'll cover ApplicationSets for multi-environment deployment, sync waves for ordered rollouts, and self-healing policies.


Find the source code on GitHub · Connect on LinkedIn
