Kubernetes Packaging with Helm Charts
By David Le -- Part 16 of the FhirHub Series
Docker Compose works for local development and single-server deployments, but Kubernetes is where healthcare applications need to run in production. Kubernetes provides auto-scaling, self-healing, rolling updates, and the operational guarantees that clinical systems require.
This post covers how I packaged FhirHub for Kubernetes using Helm charts -- an umbrella chart with sub-charts for each service, a library chart that eliminates template duplication, and per-environment values files that cleanly separate configuration concerns.
Why Helm?
Helm vs. Kustomize vs. Raw Manifests vs. CDK8s
| Tool | Templating | Packaging | Multi-Env | Ecosystem |
|---|---|---|---|---|
| Raw YAML | None | None | Copy-paste | N/A |
| Kustomize | Patches/overlays | Builtin to kubectl | Good | Growing |
| Helm | Go templates | Charts + repos | Values files | Mature |
| CDK8s | Real programming languages | Synthesized YAML | Code-level | Newer |
Helm won for three reasons:
- Umbrella charts -- A single helm install deploys the entire stack. Sub-charts for each service keep things modular.
- Values files per environment -- values-dev.yaml, values-staging.yaml, and values-prod.yaml cleanly separate environment concerns without patching.
- Library charts -- Shared templates avoid copy-pasting Deployment, Service, and Ingress boilerplate across five sub-charts.
Kustomize is simpler for basic overlays, but FhirHub has 6 services with different configurations. Values-based templating is more natural here than patch-based overlays. Raw YAML requires duplicating every manifest per environment. CDK8s is powerful but adds a compilation step and a new language to learn -- overkill when Go templates suffice.
Chart Structure
helm/
  fhirhub-lib/                # Library chart (shared templates)
    templates/
      _deployment.tpl         # Reusable Deployment template
      _service.tpl            # Reusable Service template
      _ingress.tpl            # Reusable Ingress template
      _hpa.tpl                # HorizontalPodAutoscaler
      _pdb.tpl                # PodDisruptionBudget
      _servicemonitor.tpl     # Prometheus ServiceMonitor
      _helpers.tpl            # Labels, names, selectors
  fhirhub/                    # Umbrella chart
    Chart.yaml                # Dependencies on all sub-charts
    values.yaml               # Default values
    values-dev.yaml           # Dev overrides
    values-staging.yaml       # Staging overrides
    values-prod.yaml          # Production overrides
    charts/
      fhirhub-api/            # 12 templates
      fhirhub-frontend/       # 11 templates
      hapi-fhir/              # 8 templates
      keycloak/               # 8 templates (StatefulSet)
      postgresql/             # 7 templates (StatefulSet, reused twice)
Why Umbrella Chart vs. Flat Charts?
| Pattern | Single Install | Shared Config | Dependency Order |
|---|---|---|---|
| Flat charts (one per service) | No (5 installs) | Manual | Manual |
| Umbrella chart | Yes (one install) | Via global values | Helm manages |
| Monolithic chart | Yes | All in one | Internal |
The umbrella chart gives a single helm install fhirhub ./helm/fhirhub while keeping each service's templates isolated in its own sub-chart. Adding a new service means adding a new sub-chart directory and a dependency line -- not refactoring a monolith.
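For concreteness, here's a trimmed sketch of the umbrella chart's dependency block; the version numbers are illustrative, and the two aliased PostgreSQL entries are shown in full later in this post:

# helm/fhirhub/Chart.yaml (sketch; versions are illustrative)
apiVersion: v2
name: fhirhub
version: 0.1.0
dependencies:
  - name: fhirhub-api
    version: "0.1.0"
    repository: "file://charts/fhirhub-api"
  - name: fhirhub-frontend
    version: "0.1.0"
    repository: "file://charts/fhirhub-frontend"
  - name: hapi-fhir
    version: "0.1.0"
    repository: "file://charts/hapi-fhir"
  - name: keycloak
    version: "0.1.0"
    repository: "file://charts/keycloak"
  # ...plus the two aliased postgresql dependencies covered below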
The Library Chart
The library chart (fhirhub-lib) provides reusable named templates (Go template define blocks) that sub-charts call with their specific values. This avoids duplicating Deployment, Service, and Ingress YAML across five sub-charts.
Why a Library Chart vs. Copying Templates?
| Approach | Duplication | Update Effort | Consistency |
|---|---|---|---|
| Copy templates per chart | 5x duplication | Change in 5 places | Drift risk |
| Shared _helpers.tpl | Less, but limited | Moderate | Better |
| Library chart | None | Change once | Guaranteed |
Available templates from fhirhub-lib:
- fhirhub-lib.deployment -- Standard Deployment with configurable probes, resources, env vars, volumes
- fhirhub-lib.service -- ClusterIP Service with configurable ports
- fhirhub-lib.ingress -- Nginx Ingress with TLS and annotations
- fhirhub-lib.hpa -- HorizontalPodAutoscaler with CPU/memory targets
- fhirhub-lib.pdb -- PodDisruptionBudget with minAvailable
- fhirhub-lib.servicemonitor -- Prometheus ServiceMonitor for metrics scraping
Each sub-chart includes the library as a dependency:
# charts/fhirhub-api/Chart.yaml
dependencies:
  - name: fhirhub-lib
    version: "0.1.0"
    repository: "file://../../fhirhub-lib"
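With the dependency declared, each sub-chart's own template files shrink to single include calls. A minimal sketch of what the API sub-chart's templates might look like, assuming the library templates take the sub-chart's root scope:

# charts/fhirhub-api/templates/deployment.yaml (sketch)
{{- include "fhirhub-lib.deployment" . -}}

# charts/fhirhub-api/templates/service.yaml (sketch)
{{- include "fhirhub-lib.service" . -}}

Adding a resource type to a service is then a one-line file that pulls in the corresponding library template.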
Checkpoint: Validate Chart Dependencies
Before continuing, verify all Helm chart dependencies resolve:
helm dependency list helm/fhirhub
Expected output:
- Should list all sub-charts (fhirhub-api, fhirhub-frontend, hapi-fhir, keycloak, and both postgresql aliases) with status ok or unpacked
helm dependency update helm/fhirhub
Expected output:
- Should download or link all dependencies without errors. Look for Saving X charts and Deleting outdated charts
helm lint helm/fhirhub
Expected output:
- Should complete with 0 chart(s) failed. Warnings are acceptable (e.g., a missing icon), but errors indicate broken templates
If something went wrong:
- If dependencies show missing, run helm dependency build helm/fhirhub to rebuild the charts/ directory from the lock file
- If lint fails with template errors, check that all sub-chart Chart.yaml files reference the correct library chart version and path
PostgreSQL Reuse via Aliases
The most interesting pattern is how PostgreSQL is deployed twice -- once for HAPI FHIR clinical data, once for Keycloak auth data -- using the same chart:
# helm/fhirhub/Chart.yaml
dependencies:
  - name: postgresql
    alias: hapi-postgresql
    version: "0.1.0"
    repository: "file://charts/postgresql"
  - name: postgresql
    alias: keycloak-postgresql
    version: "0.1.0"
    repository: "file://charts/postgresql"
Each alias gets its own values:
# values.yaml
hapi-postgresql:
  database:
    name: hapi
    username: hapi
    password: changeme-hapi-db

keycloak-postgresql:
  database:
    name: keycloak
    username: keycloak
    password: changeme-keycloak-db
Same chart, two independent PostgreSQL instances. This mirrors the Docker Compose architecture from Post 10 -- separate databases for clinical and auth data.
Why Separate Databases?
| Pattern | Isolation | Backup Granularity | Failure Blast Radius |
|---|---|---|---|
| Shared database, separate schemas | Low | Whole DB | Auth failure breaks clinical |
| Separate databases, one server | Medium | Per DB | Server failure breaks both |
| Separate database instances | High | Per instance | Independent failures |
FhirHub uses separate instances. An auth database corruption shouldn't take down clinical data, and vice versa. The overhead of running two PostgreSQL StatefulSets is minimal compared to the isolation benefit.
Standard Kubernetes Labels
Every resource uses the standard app.kubernetes.io label set:
labels:
  app.kubernetes.io/name: fhirhub-api
  app.kubernetes.io/instance: {{ .Release.Name }}
  app.kubernetes.io/version: {{ .Chart.AppVersion }}
  app.kubernetes.io/component: api
  app.kubernetes.io/part-of: fhirhub
  app.kubernetes.io/managed-by: Helm
These labels enable kubectl get pods -l app.kubernetes.io/part-of=fhirhub to find all FhirHub pods, and tools like Prometheus, ArgoCD, and Grafana use them for service discovery and grouping.
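A sketch of how the shared labels helper in _helpers.tpl can generate that set; the helper name and the .Values.component key are assumptions about the library's internals:

# fhirhub-lib/templates/_helpers.tpl (sketch; .Values.component is an assumed key)
{{- define "fhirhub-lib.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/component: {{ .Values.component }}
app.kubernetes.io/part-of: fhirhub
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}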
Scaling Configuration
| Service | Min Pods | Max Pods | CPU Target | PDB MinAvailable |
|---|---|---|---|---|
| FhirHub API | 2 | 10 | 70% | 1 |
| FhirHub Frontend | 2 | 10 | 70% | 1 |
| HAPI FHIR | 1 | 5 | 80% | -- |
| Keycloak | 1 (StatefulSet) | -- | -- | -- |
| PostgreSQL | 1 (StatefulSet) | -- | -- | -- |
The API and frontend scale horizontally. HAPI FHIR scales more conservatively because it's a Java application with higher memory overhead. Keycloak and PostgreSQL run as StatefulSets because they maintain state.
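As a values sketch, the API row of that table maps onto an autoscaling block like the one used in the per-environment files below; the targetCPUUtilizationPercentage and podDisruptionBudget key names are assumptions about this chart's values schema:

# values.yaml (sketch of the API scaling defaults; key names are assumptions)
fhirhub-api:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
  podDisruptionBudget:
    minAvailable: 1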
Why StatefulSet for Keycloak and PostgreSQL?
| Workload Type | Deployment | StatefulSet |
|---|---|---|
| Stable network identity | No | Yes (pod-0, pod-1) |
| Persistent storage | PVC per replica is awkward | VolumeClaimTemplates |
| Ordered startup/shutdown | No guarantees | Sequential by ordinal |
| Scaling databases | Not suitable | Designed for this |
StatefulSets provide stable pod names and ordered operations. PostgreSQL needs postgresql-0 to always be the primary. Keycloak needs predictable DNS for clustering. Deployments with PVCs would lose data affinity during rescheduling.
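The piece that matters most for PostgreSQL is volumeClaimTemplates: each ordinal pod gets its own PVC that follows it across reschedules. A minimal sketch, with the image tag and storage size left as illustrative placeholders:

# postgresql StatefulSet (sketch; image and storage values are illustrative)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hapi-postgresql
spec:
  serviceName: hapi-postgresql
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: postgresql
      app.kubernetes.io/instance: fhirhub
  template:
    metadata:
      labels:
        app.kubernetes.io/name: postgresql
        app.kubernetes.io/instance: fhirhub
    spec:
      containers:
        - name: postgresql
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi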
PodDisruptionBudgets ensure at least one pod stays running during node drains and cluster upgrades. Without a PDB, a cluster upgrade could terminate all API pods simultaneously.
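Rendered, the API's PodDisruptionBudget is a small object; a sketch of roughly what the library's _pdb.tpl produces, assuming a release named fhirhub:

# Rendered PodDisruptionBudget for the API (sketch; name assumes release "fhirhub")
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: fhirhub-fhirhub-api
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: fhirhub-api
      app.kubernetes.io/instance: fhirhub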
Per-Environment Values
Three values files customize the deployment for each stage:
values-dev.yaml -- Single replicas, HPA disabled, ingress with no TLS, debug logging:
fhirhub-api:
  replicaCount: 1
  autoscaling:
    enabled: false
  ingress:
    enabled: true
    tls: []
values-staging.yaml -- Two replicas, HPA enabled, TLS with cert-manager, info logging:
fhirhub-api:
  replicaCount: 2
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 5
  ingress:
    tls:
      - secretName: fhirhub-staging-tls
        hosts:
          - staging.fhirhub.example.com
values-prod.yaml -- Three replicas, larger resource limits, rate limiting, warn logging:
fhirhub-api:
  replicaCount: 3
  autoscaling:
    enabled: true
    minReplicas: 3
    maxReplicas: 10
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: "2"
      memory: 1Gi
The base values.yaml contains sane defaults. Environment files only override what's different. This keeps the diff between environments small and auditable.
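Deploying an environment is then just a matter of layering the right file over the defaults. A sketch of the commands; the namespace names are illustrative (fhirhub-dev matches the local checkpoint later in this post):

# Dev
helm upgrade --install fhirhub helm/fhirhub \
  -f helm/fhirhub/values-dev.yaml -n fhirhub-dev --create-namespace

# Production
helm upgrade --install fhirhub helm/fhirhub \
  -f helm/fhirhub/values-prod.yaml -n fhirhub-prod --create-namespace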
Checkpoint: Template Rendering
Before continuing, verify the Helm templates render correctly for each environment:
helm template fhirhub helm/fhirhub -f helm/fhirhub/values-dev.yaml 2>&1 | head -5
Expected output:
- Should render valid YAML (starting with --- and apiVersion:). Any Error: messages indicate template issues
helm template fhirhub helm/fhirhub -f helm/fhirhub/values-dev.yaml | grep 'replicas:' | sort | uniq -c
Expected output:
- Dev should show all replicas: 1, since single-replica is the dev default
helm template fhirhub helm/fhirhub -f helm/fhirhub/values-prod.yaml | grep 'replicas:' | sort | uniq -c
Expected output:
- Prod should show higher replica counts (e.g., replicas: 2 and replicas: 3), confirming the per-environment values override correctly
helm template fhirhub helm/fhirhub -f helm/fhirhub/values-dev.yaml | grep 'kind: HorizontalPodAutoscaler' | wc -l
Expected output:
- Should be 0 for dev (HPA disabled). If it's non-zero, check that autoscaling.enabled: false is set in values-dev.yaml
If something went wrong:
- If replica counts don't match expectations, check that the values files use the correct sub-chart key names (e.g., fhirhub-api:, not api:)
Ingress Configuration
All external traffic routes through nginx-ingress:
| Path | Service | Notes |
|---|---|---|
| / | fhirhub-frontend | Default route |
| /api/ | fhirhub-api | Rewrite to strip /api/ prefix |
| auth.* | keycloak | Separate subdomain |
TLS is handled by cert-manager with Let's Encrypt in staging and production. Dev uses plain HTTP for simplicity.
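A sketch of what the API's ingress values might look like in staging, combining the /api/ prefix rewrite with cert-manager; the issuer name, rewrite annotations, and the hosts/paths structure are assumptions about this chart's values schema, while the hostname and TLS secret come from values-staging.yaml above:

# values-staging.yaml (sketch; issuer name and values structure are assumptions)
fhirhub-api:
  ingress:
    enabled: true
    className: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-staging
      nginx.ingress.kubernetes.io/rewrite-target: /$2
    hosts:
      - host: staging.fhirhub.example.com
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
    tls:
      - secretName: fhirhub-staging-tls
        hosts:
          - staging.fhirhub.example.com

The regex path plus rewrite-target strips the /api/ prefix before requests reach the API pods.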
Checkpoint: Deploy to Local Kind Cluster
Before continuing, verify the charts deploy to a local Kubernetes cluster:
make k8s-create
Expected output:
- Creates a Kind cluster with nodes. Alternatively, run ./scripts/setup-local-k8s.sh
kubectl get nodes
Expected output:
- Should show 3 nodes (1 control-plane, 2 workers), all with status Ready
make k8s-deploy
Expected output:
- Helm install completes without errors
kubectl get pods -n fhirhub-dev
Expected output:
- All pods should reach Running status with the Ready condition met (e.g., 1/1). This may take a few minutes as images pull and services start
kubectl get svc -n fhirhub-dev
Expected output:
- Should list services for api, frontend, hapi-fhir, keycloak, and both postgresql instances
kubectl get ingress -n fhirhub-dev
Expected output:
- Should show ingress rules routing traffic to the frontend and API
If something went wrong:
- If pods are stuck in Pending: check for resource issues with kubectl describe pod -n fhirhub-dev <pod-name> and look at the Events section
- If pods are in CrashLoopBackOff: check logs with kubectl logs -n fhirhub-dev <pod-name> -- the most common cause is a database not being ready yet
- If images aren't found: verify that make k8s-deploy loaded the local images into the Kind cluster
What's Next
In Part 17, we'll deploy these Helm charts using ArgoCD -- a GitOps operator that watches your Git repository and automatically syncs cluster state. We'll cover ApplicationSets for multi-environment deployment, sync waves for ordered rollouts, and self-healing policies.