Single-Node k3s Deployment
By David Le -- Part 19 of the FhirHub Series
Production Kubernetes usually means multi-node clusters with cloud load balancers, managed databases, and auto-scaling groups. But not every deployment needs that. A clinic running FhirHub for 50 users, a developer wanting a persistent staging environment, or a demo server at a conference -- these all work on a single machine.
This post covers how to deploy the entire FhirHub stack -- application services, databases, and monitoring -- on a single machine using k3s.
Why k3s?
k3s is a lightweight Kubernetes distribution from Rancher (now SUSE). It packages the entire Kubernetes control plane into a single binary under 100MB. It ships with Traefik as the default ingress controller and a local-path storage provisioner, which means a bare Linux server can run production Kubernetes workloads with one install command.
k3s vs. MicroK8s vs. kubeadm
| Feature | k3s | MicroK8s | kubeadm |
|---|---|---|---|
| Binary size | ~70MB | ~200MB (snap) | Full K8s |
| Install method | `curl \| sh` | `snap install` | Multi-step |
| Default ingress | Traefik | nginx (addon) | None |
| Default storage | local-path | hostpath (addon) | None |
| Container runtime | containerd | containerd | Configurable |
| CRD support | Full | Full | Full |
| Memory overhead | ~512MB | ~800MB | ~1GB+ |
| ARM support | Yes | Yes | Yes |
| Package manager | None | snap | apt/yum |
| Best for | Edge, single-node, CI | Ubuntu environments | Full control |
k3s is the lightest option with the most batteries included. MicroK8s is comparable but requires snap. kubeadm gives full control but requires manual setup for ingress, storage, and networking.
Why Not Just Docker Compose?
Docker Compose works well for development (see Post 10), but k3s gives you:
- Helm charts -- the same deployment artifacts used in multi-node production
- Health checks and restart policies -- Kubernetes restarts crashed pods automatically
- Ingress routing -- Traefik handles path-based routing without nginx configuration
- Monitoring integration -- ServiceMonitor CRDs work identically to multi-node
- Upgrade path -- add nodes later without re-architecting
The single-node k3s deployment uses the same Helm charts as the multi-node deployment. The only differences are resource limits and replica counts.
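As a concrete sketch, the install command differs from multi-node only in the extra values file -- the chart path, release name, and file name here are illustrative:

```bash
# Hypothetical invocation -- chart path and values file name are illustrative
helm upgrade --install fhirhub ./charts/fhirhub \
  --namespace fhirhub --create-namespace \
  -f charts/fhirhub/values-single-node.yaml
```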
Machine Requirements
| Component | CPU | Memory | Disk |
|---|---|---|---|
| k3s overhead | 0.5 cores | 512MB | 2GB |
| FhirHub services (1 replica each) | ~0.85 cores | ~2.7GB | 7GB |
| Monitoring (Prometheus + Grafana + Loki) | ~0.5 cores | ~1.5GB | 80GB |
| Headroom | ~2.15 cores | ~3.3GB | -- |
| Recommended | 4 cores | 8GB | 100GB |
Any Linux server meeting these specs works -- a dedicated server, a VM, or a cloud instance (e.g., AWS t3.xlarge, GCP e2-standard-4, DigitalOcean 8GB droplet).
What Changes from Multi-Node
The single-node values file overrides the defaults to fit everything on one machine:
Replicas and Scaling
Multi-node deployments run 2+ replicas per service with Horizontal Pod Autoscalers (HPA) and Pod Disruption Budgets (PDB). On a single node, these are unnecessary:
| Setting | Multi-Node | Single-Node | Why |
|---|---|---|---|
| `replicaCount` | 2-3 | 1 | One node, one replica |
| `hpa.enabled` | true | false | No horizontal scaling |
| `pdb.enabled` | true | false | No disruption budget needed |
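Expressed as Helm values, the overrides above might look like this -- key names are assumptions to check against the chart's schema:

```yaml
# Hypothetical single-node overrides -- key names depend on the chart schema
replicaCount: 1
hpa:
  enabled: false
pdb:
  enabled: false
```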
Ingress Controller
| Setting | Multi-Node (Kind) | Single-Node (k3s) |
|---|---|---|
| `className` | nginx | traefik |
| Controller install | Manual (nginx-ingress) | Built-in |
| TLS termination | cert-manager | Traefik Let's Encrypt |
k3s bundles Traefik, so there's no separate ingress controller to install.
Traefik vs. nginx-ingress
| Feature | Traefik | nginx-ingress |
|---|---|---|
| k3s default | Yes | No |
| Config method | IngressRoute CRD or Ingress | Ingress + annotations |
| Auto TLS | Let's Encrypt built-in | Requires cert-manager |
| Dashboard | Built-in (port 9000) | None |
| Middleware | CRD-based | Annotation-based |
| Performance | Good | Good |
| Memory usage | ~50MB | ~100MB |
Traefik is the natural choice for k3s. It's already running and supports standard Kubernetes Ingress resources, so the existing Helm chart ingress templates work without modification.
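A minimal values sketch for the ingress switch, assuming the chart exposes a standard `ingress` block:

```yaml
# Hypothetical ingress override -- Traefik serves standard Ingress resources out of the box
ingress:
  enabled: true
  className: traefik
  hosts:
    - host: fhirhub.local
      paths:
        - path: /
          pathType: Prefix
```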
Storage
| Setting | Multi-Node | Single-Node |
|---|---|---|
| `storageClass` | Cluster default or Longhorn | local-path |
| HAPI PostgreSQL | 10Gi | 5Gi |
| Keycloak PostgreSQL | 5Gi | 2Gi |
local-path vs. Longhorn
| Feature | local-path | Longhorn |
|---|---|---|
| Install | Built into k3s | Helm chart |
| Replication | None | Configurable |
| Snapshots | None | Yes |
| Backup | Manual | S3/NFS |
| Performance | Native disk speed | Slight overhead |
| Best for | Single node | Multi-node HA |
local-path writes directly to the node's filesystem. No replication overhead, no additional components to install. For a single-node deployment, there's nothing to replicate to.
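In values terms, pointing the databases at local-path might look like the following -- the key paths are assumptions (Bitnami-style PostgreSQL charts, for instance, nest persistence under `primary.persistence`), and the sizes match the table above:

```yaml
# Hypothetical storage overrides -- verify key paths against your chart
hapi-postgresql:
  persistence:
    storageClass: local-path
    size: 5Gi
keycloak-postgresql:
  persistence:
    storageClass: local-path
    size: 2Gi
```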
Resource Limits
All services have reduced limits to fit within 8GB total:
| Service | Multi-Node Limits | Single-Node Limits |
|---|---|---|
| FhirHub API | 1 CPU / 2Gi | 500m / 512Mi |
| FhirHub Frontend | 500m / 512Mi | 250m / 256Mi |
| HAPI FHIR | 1 CPU / 2Gi | 1 CPU / 1.5Gi |
| Keycloak | 1 CPU / 2Gi | 500m / 768Mi |
| HAPI PostgreSQL | 500m / 1Gi | 500m / 512Mi |
| Keycloak PostgreSQL | 250m / 512Mi | 250m / 256Mi |
| Prometheus | 1 CPU / 2Gi | 500m / 1Gi |
| Grafana | 500m / 512Mi | 250m / 256Mi |
| Loki | 500m / 1Gi | 250m / 512Mi |
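As one example, the API service's block might be set like this -- only the limits come from the table; the `requests` values are assumptions:

```yaml
# Hypothetical per-service override -- repeat the pattern for each service
api:
  resources:
    requests:        # assumed values; keep requests below limits
      cpu: 250m
      memory: 256Mi
    limits:          # from the table above
      cpu: 500m
      memory: 512Mi
```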
Monitoring on Constrained Resources
The monitoring stack uses the same Prometheus + Grafana + Loki setup from Post 18, with adjustments:
| Setting | Multi-Node | Single-Node |
|---|---|---|
| Prometheus retention | 15 days | 7 days |
| Prometheus storage | 50Gi | 15Gi |
| Grafana storage | 10Gi | 2Gi |
| Loki retention | 168h (7 days) | 72h (3 days) |
| Loki storage | 20Gi | 10Gi |
| Scrape interval | 15s | 30s |
| AlertManager routing | Slack channels | Default webhook |
Doubling the scrape interval from 15s to 30s halves the metric volume. Seven days of retention is enough to catch trends and debug recent incidents. If you need longer retention, increase the disk size or ship metrics to a remote write endpoint.
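For a kube-prometheus-stack deployment (which the service names used later suggest), the retention and scrape-interval overrides might look roughly like this -- treat the exact key paths as assumptions to verify against the chart version in use:

```yaml
# Hypothetical kube-prometheus-stack overrides for single-node
prometheus:
  prometheusSpec:
    retention: 7d
    scrapeInterval: 30s
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: local-path
          resources:
            requests:
              storage: 15Gi
```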
Checkpoint: Verify Monitoring
Before continuing, verify the monitoring stack is running on the single node:

```bash
kubectl get pods -n monitoring
```

Expected output:

- All monitoring pods (prometheus, grafana, loki, promtail) should be `Running`

```bash
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
```

- Open http://localhost:3000 -- it should show the Grafana login page

```bash
kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 9090:9090
```

- Open http://localhost:9090/targets -- scrape targets should show the `UP` state

```bash
kubectl top nodes
```

Expected output:

- Should show CPU and memory usage. On a properly sized single node, both should be below 80%. If either exceeds 80%, consider increasing machine resources or reducing monitoring retention settings

If something went wrong:

- If monitoring pods are stuck in `Pending`, the node may not have enough resources. Check with `kubectl describe pod -n monitoring <pod-name>`
- If `kubectl top` isn't available, metrics-server may not be installed. k3s includes it by default, but verify with `kubectl get deployment -n kube-system metrics-server`
Setup
One-Command Install
```bash
./scripts/setup-single-node.sh
```

This script (a condensed sketch follows the list):

- Checks prerequisites (`curl`, sudo access)
- Installs k3s via `curl -sfL https://get.k3s.io | sh -`
- Copies the kubeconfig to `~/.kube/config`
- Waits for the k3s node to be Ready
- Installs Helm (if not present)
- Deploys FhirHub via Helm with single-node values
- Deploys the Prometheus + Grafana monitoring stack
- Deploys Loki + Promtail log aggregation
- Waits for all pods to reach the Running state
- Prints access URLs
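Condensed, the core of the script is roughly the following -- error handling and the monitoring deploys are omitted, and paths are illustrative:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Install k3s (single-server mode) and make the kubeconfig usable without sudo
curl -sfL https://get.k3s.io | sh -
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$USER" ~/.kube/config

# Wait for the node, then deploy FhirHub with the single-node values
kubectl wait --for=condition=Ready node --all --timeout=120s
helm upgrade --install fhirhub ./charts/fhirhub \
  --namespace fhirhub --create-namespace \
  -f charts/fhirhub/values-single-node.yaml
kubectl wait --for=condition=Ready pods --all -n fhirhub --timeout=600s
```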
Checkpoint: Verify k3s and FhirHub
Before continuing, verify k3s is running and FhirHub is deployed:

```bash
kubectl get nodes
```

Expected output:

- Should show 1 node with status `Ready`

```bash
kubectl get pods -n fhirhub
```

Expected output:

- All 6 pods should be `Running` and `Ready` (fhirhub-api, fhirhub-frontend, hapi-fhir, keycloak, hapi-postgresql, keycloak-postgresql)

```bash
kubectl get pvc -n fhirhub
```

Expected output:

- Should show `Bound` PVCs for both PostgreSQL instances, using the `local-path` storage class

```bash
kubectl get ingress -n fhirhub
```

Expected output:

- Should show ingress rules with `traefik` as the ingress class

After adding `/etc/hosts` entries (`127.0.0.1 fhirhub.local auth.fhirhub.local`):

```bash
curl -s -o /dev/null -w '%{http_code}' http://fhirhub.local
```

Expected output:

- Should return `200`

```bash
curl -s -o /dev/null -w '%{http_code}' http://fhirhub.local/api/dashboard/metrics
```

Expected output:

- Should return `200` (or `401` if authentication is required)

```bash
curl -s -o /dev/null -w '%{http_code}' http://auth.fhirhub.local
```

Expected output:

- Should return `200`, confirming Keycloak is accessible

If something went wrong:

- If pods are stuck in `Pending`, check for resource or storage issues: run `kubectl describe pod -n fhirhub <pod-name>` and look at the Events section
- If ingress returns connection refused, verify Traefik is running: `kubectl get pods -n kube-system -l app.kubernetes.io/name=traefik`
- If curl times out, check that the `/etc/hosts` entries are correct and the k3s node is listening on port 80
Or Step by Step with Make
```bash
make single-node-setup      # Install k3s + all dependencies
make single-node-deploy     # Deploy FhirHub + monitoring via Helm
make single-node-teardown   # Remove everything including k3s
```
DNS Configuration
After setup, add entries to `/etc/hosts`:

```
127.0.0.1 fhirhub.local auth.fhirhub.local
```
Or configure real DNS records pointing to your server's IP.
Accessing Services
| Service | URL |
|---|---|
| Frontend | http://fhirhub.local |
| API | http://fhirhub.local/api/ |
| Keycloak | http://auth.fhirhub.local |
| Grafana | `kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80` |
| Prometheus | `kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 9090:9090` |
Grafana and Prometheus are accessed via port-forward to avoid exposing monitoring externally by default. To expose them via Traefik, add ingress entries to the monitoring values.
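If you do want Grafana behind Traefik, the override is small -- a sketch, assuming the standard kube-prometheus-stack values layout and an illustrative hostname:

```yaml
# Hypothetical exposure of Grafana -- add TLS and auth before using in production
grafana:
  ingress:
    enabled: true
    ingressClassName: traefik
    hosts:
      - grafana.fhirhub.local
```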
Backup Strategy
Single-node means no replication. If the disk fails, data is lost. Regular backups are essential.
PostgreSQL Backup
Schedule a cron job to dump both databases:
```bash
# /etc/cron.d/fhirhub-backup -- each cron entry must stay on a single line
0 2 * * * root kubectl exec -n fhirhub deploy/fhirhub-hapi-postgresql -- pg_dump -U hapi hapi | gzip > /backups/hapi-$(date +\%Y\%m\%d).sql.gz
0 2 * * * root kubectl exec -n fhirhub deploy/fhirhub-keycloak-postgresql -- pg_dump -U keycloak keycloak | gzip > /backups/keycloak-$(date +\%Y\%m\%d).sql.gz
```
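A backup is only useful if it restores. A restore sketch, assuming the same database and user names as above -- the filename date is illustrative:

```bash
# Hypothetical restore -- test this against a scratch instance first
gunzip -c /backups/hapi-20250101.sql.gz | \
  kubectl exec -i -n fhirhub deploy/fhirhub-hapi-postgresql -- psql -U hapi hapi
```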
Persistent Volume Backup
For the monitoring PVCs (local-path volumes live under `/var/lib/rancher/k3s/storage/`):

```bash
# Snapshot the entire storage directory
tar czf /backups/k3s-storage-$(date +%Y%m%d).tar.gz /var/lib/rancher/k3s/storage/
```
Checkpoint: Test Backup
Before continuing, verify that database backups work:

```bash
kubectl exec -n fhirhub deploy/fhirhub-hapi-postgresql -- pg_dump -U hapi hapi | head -5
```

Expected output:

- Should output SQL statements (e.g., `-- PostgreSQL database dump`, `SET statement_timeout`), confirming `pg_dump` works and the database exists

```bash
kubectl exec -n fhirhub deploy/fhirhub-keycloak-postgresql -- pg_dump -U keycloak keycloak | head -5
```

Expected output:

- Same as above -- SQL output confirming the Keycloak database is accessible

```bash
ls /var/lib/rancher/k3s/storage/
```

Expected output:

- Should show PVC directories for the persistent volumes. These are the `local-path` volumes that need to be included in filesystem backups

If something went wrong:

- If `pg_dump` fails with "connection refused", check that PostgreSQL is running: `kubectl get pods -n fhirhub -l app.kubernetes.io/name=postgresql`
- If `pg_dump` fails with "authentication failed", verify the database username matches what's configured in the Helm values
- If the storage directory is empty, check that k3s is using the default `local-path` provisioner: `kubectl get storageclass`
Off-Site
Copy backups to an external location (S3, NFS, or another server) to protect against hardware failure.
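For example, a nightly S3 sync could follow the backup jobs -- the bucket name and timing are illustrative, and this assumes the AWS CLI and credentials are present on the host:

```bash
# /etc/cron.d/fhirhub-offsite -- hypothetical off-site copy an hour after the dumps
0 3 * * * root aws s3 sync /backups/ s3://fhirhub-backups/ --storage-class STANDARD_IA
```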
When to Scale to Multi-Node
Single-node works well for:
- Small clinics (< 100 concurrent users)
- Development and staging environments
- Demos and proof-of-concept deployments
- CI/CD test environments
Consider multi-node when:
- You need high availability (zero-downtime deployments)
- HAPI FHIR latency increases under load (scale replicas)
- Disk usage exceeds available space (add storage nodes)
- Compliance requires redundancy (HIPAA production environments)
Scaling from single to multi-node requires:
- Install the k3s agent on additional machines with `curl -sfL https://get.k3s.io | K3S_URL=... K3S_TOKEN=... sh -` -- see the sketch after this list
- Switch from `local-path` to Longhorn for replicated storage
- Re-enable HPA and PDB in the values
- Increase replica counts
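Joining an agent looks roughly like this -- the token path is the k3s default; the server hostname is illustrative:

```bash
# On the existing server node: read the cluster join token
sudo cat /var/lib/rancher/k3s/server/node-token

# On each new machine: install k3s in agent mode, pointed at the server
curl -sfL https://get.k3s.io | \
  K3S_URL=https://fhirhub-server:6443 K3S_TOKEN=<paste-token-here> sh -
```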
The Helm charts and monitoring configuration stay the same.
Teardown
To remove everything from the machine:
```bash
./scripts/teardown-single-node.sh
```
This uninstalls all Helm releases, deletes namespaces, and runs the k3s uninstall script.
What's Next
This concludes the FhirHub infrastructure series. Nineteen posts covering application development, authentication, clinical features, containerization, CI/CD, Kubernetes, GitOps, monitoring, and single-machine deployment. The entire codebase is open source -- clone it, run `make up` for development or `./scripts/setup-single-node.sh` for a production-ready single-machine deployment.