Tech · February 3, 2026 · 5 min read

Single-Node k3s Deployment

How to deploy the entire FhirHub stack on a single machine using k3s, a lightweight Kubernetes distribution ideal for small clinics, staging environments, and demo servers.


By David Le -- Part 19 of the FhirHub Series

Production Kubernetes usually means multi-node clusters with cloud load balancers, managed databases, and auto-scaling groups. But not every deployment needs that. A clinic running FhirHub for 50 users, a developer wanting a persistent staging environment, or a demo server at a conference -- these all work on a single machine.

This post covers how to deploy the entire FhirHub stack -- application services, databases, and monitoring -- on a single machine using k3s.

Why k3s?

k3s is a lightweight Kubernetes distribution from Rancher (now SUSE). It packages the entire Kubernetes control plane into a single binary under 100MB. It ships with Traefik as the default ingress controller and a local-path storage provisioner, which means a bare Linux server can run production Kubernetes workloads with one install command.

k3s vs. MicroK8s vs. kubeadm

| Feature | k3s | MicroK8s | kubeadm |
| --- | --- | --- | --- |
| Binary size | ~70MB | ~200MB (snap) | Full K8s |
| Install method | `curl \| sh` | snap install | Multi-step |
| Default ingress | Traefik | nginx (addon) | None |
| Default storage | local-path | hostpath (addon) | None |
| Container runtime | containerd | containerd | configurable |
| CRD support | Full | Full | Full |
| Memory overhead | ~512MB | ~800MB | ~1GB+ |
| ARM support | Yes | Yes | Yes |
| Package manager | None | snap | apt/yum |
| Best for | Edge, single-node, CI | Ubuntu environments | Full control |

k3s is the lightest option with the most batteries included. MicroK8s is comparable but requires snap. kubeadm gives full control but requires manual setup for ingress, storage, and networking.

Why Not Just Docker Compose?

Docker Compose works well for development (see Post 10), but k3s gives you:

  • Helm charts -- the same deployment artifacts used in multi-node production
  • Health checks and restart policies -- Kubernetes restarts crashed pods automatically
  • Ingress routing -- Traefik handles path-based routing without nginx configuration
  • Monitoring integration -- ServiceMonitor CRDs work identically to multi-node
  • Upgrade path -- add nodes later without re-architecting

The single-node k3s deployment uses the same Helm charts as the multi-node deployment. The only differences are resource limits and replica counts.
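
As a sketch of what that looks like in practice, the single-node install is just the usual Helm command with an extra override file. The chart path, release name, and values file name here are assumptions rather than the repository's actual layout:

helm upgrade --install fhirhub ./charts/fhirhub \
  --namespace fhirhub --create-namespace \
  -f values.yaml \
  -f values-single-node.yaml    # single-node overrides layered on top of the defaults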

Machine Requirements

| Component | CPU | Memory | Disk |
| --- | --- | --- | --- |
| k3s overhead | 0.5 cores | 512MB | 2GB |
| FhirHub services (1 replica each) | ~0.85 cores | ~2.7GB | 7GB |
| Monitoring (Prometheus + Grafana + Loki) | ~0.5 cores | ~1.5GB | 80GB |
| Headroom | ~1.15 cores | ~1.3GB | -- |
| Recommended | 4 cores | 8GB | 100GB |

Any Linux server meeting these specs works -- a dedicated server, a VM, or a cloud instance (e.g., AWS t3.xlarge, GCP e2-standard-4, DigitalOcean 8GB droplet).

What Changes from Multi-Node

The single-node values file overrides the defaults to fit everything on one machine:

Replicas and Scaling

Multi-node deployments run 2+ replicas per service with Horizontal Pod Autoscalers (HPA) and Pod Disruption Budgets (PDB). On a single node, these are unnecessary:

| Setting | Multi-Node | Single-Node | Why |
| --- | --- | --- | --- |
| replicaCount | 2-3 | 1 | One node, one replica |
| hpa.enabled | true | false | No horizontal scaling |
| pdb.enabled | true | false | No disruption budget needed |
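
In the override file, these settings could be expressed roughly as follows. The key names mirror the table, but the exact nesting in the FhirHub chart is an assumption:

# Append the scaling overrides to the (hypothetical) values-single-node.yaml
cat >> values-single-node.yaml <<'EOF'
replicaCount: 1
hpa:
  enabled: false
pdb:
  enabled: false
EOF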

Ingress Controller

| Setting | Multi-Node (Kind) | Single-Node (k3s) |
| --- | --- | --- |
| className | nginx | traefik |
| Controller install | Manual (nginx-ingress) | Built-in |
| TLS termination | cert-manager | Traefik Let's Encrypt |

k3s bundles Traefik, so there's no separate ingress controller to install.

Traefik vs. nginx-ingress

| Feature | Traefik | nginx-ingress |
| --- | --- | --- |
| k3s default | Yes | No |
| Config method | IngressRoute CRD or Ingress | Ingress + annotations |
| Auto TLS | Let's Encrypt built-in | Requires cert-manager |
| Dashboard | Built-in (port 9000) | None |
| Middleware | CRD-based | Annotation-based |
| Performance | Good | Good |
| Memory usage | ~50MB | ~100MB |

Traefik is the natural choice for k3s. It's already running and supports standard Kubernetes Ingress resources, so the existing Helm chart ingress templates work without modification.
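
For reference, a standard Ingress that the bundled Traefik picks up looks like the sketch below; the service name and port are assumptions about the chart's rendered resources:

# A minimal Ingress routed by the bundled Traefik controller (names assumed)
cat <<'EOF' | kubectl apply -n fhirhub -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fhirhub-frontend
spec:
  ingressClassName: traefik
  rules:
    - host: fhirhub.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fhirhub-frontend   # assumed service name
                port:
                  number: 80             # assumed service port
EOF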

Storage

| Setting | Multi-Node | Single-Node |
| --- | --- | --- |
| storageClass | Cluster default or Longhorn | local-path |
| HAPI PostgreSQL | 10Gi | 5Gi |
| Keycloak PostgreSQL | 5Gi | 2Gi |

local-path vs. Longhorn

| Feature | local-path | Longhorn |
| --- | --- | --- |
| Install | Built into k3s | Helm chart |
| Replication | None | Configurable |
| Snapshots | None | Yes |
| Backup | Manual | S3/NFS |
| Performance | Native disk speed | Slight overhead |
| Best for | Single node | Multi-node HA |

local-path writes directly to the node's filesystem. No replication overhead, no additional components to install. For a single-node deployment, there's nothing to replicate to.
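
You can confirm the provisioner is present before deploying; on a stock k3s install the default storage class is local-path:

kubectl get storageclass
# Expect a "local-path" class backed by the rancher.io/local-path provisioner,
# marked (default) on a stock k3s install.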

Resource Limits

All services have reduced limits to fit within 8GB total:

| Service | Multi-Node Limits | Single-Node Limits |
| --- | --- | --- |
| FhirHub API | 1 CPU / 2Gi | 500m / 512Mi |
| FhirHub Frontend | 500m / 512Mi | 250m / 256Mi |
| HAPI FHIR | 1 CPU / 2Gi | 1 CPU / 1.5Gi |
| Keycloak | 1 CPU / 2Gi | 500m / 768Mi |
| HAPI PostgreSQL | 500m / 1Gi | 500m / 512Mi |
| Keycloak PostgreSQL | 250m / 512Mi | 250m / 256Mi |
| Prometheus | 1 CPU / 2Gi | 500m / 1Gi |
| Grafana | 500m / 512Mi | 250m / 256Mi |
| Loki | 500m / 1Gi | 250m / 512Mi |
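
As one example of how these land in the override file, the API service's limits might be written like this; the top-level key and the request values are assumptions, not the chart's actual structure:

# Hypothetical resource override for the API service in values-single-node.yaml
cat >> values-single-node.yaml <<'EOF'
api:                     # key name assumed; match the actual chart structure
  resources:
    requests:
      cpu: 250m          # illustrative request values
      memory: 256Mi
    limits:
      cpu: 500m          # single-node limits from the table above
      memory: 512Mi
EOF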

Monitoring on Constrained Resources

The monitoring stack uses the same Prometheus + Grafana + Loki setup from Post 18, with adjustments:

| Setting | Multi-Node | Single-Node |
| --- | --- | --- |
| Prometheus retention | 15 days | 7 days |
| Prometheus storage | 50Gi | 15Gi |
| Grafana storage | 10Gi | 2Gi |
| Loki retention | 168h (7 days) | 72h (3 days) |
| Loki storage | 20Gi | 10Gi |
| Scrape interval | 15s | 30s |
| AlertManager routing | Slack channels | Default webhook |

Doubling the scrape interval from 15s to 30s halves the metric volume. Seven days of retention is enough to catch trends and debug recent incidents. If you need longer retention, increase the disk size or ship metrics to a remote write endpoint.
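
In kube-prometheus-stack values (the chart behind the prometheus-grafana and prometheus-kube-prometheus-prometheus services), those adjustments could look roughly like the sketch below; the file name and exact key set are assumptions:

# Hypothetical single-node overrides for the kube-prometheus-stack chart
cat > monitoring-single-node.yaml <<'EOF'
prometheus:
  prometheusSpec:
    retention: 7d
    scrapeInterval: 30s
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: local-path
          resources:
            requests:
              storage: 15Gi
grafana:
  persistence:
    enabled: true
    size: 2Gi
EOF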

Checkpoint: Verify Monitoring

Before continuing, verify the monitoring stack is running on the single node:

kubectl get pods -n monitoring

Expected output:

  • All monitoring pods (prometheus, grafana, loki, promtail) should be Running

kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80

  • Open http://localhost:3000 -- should show the Grafana login page

kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 9090:9090

  • Open http://localhost:9090/targets -- scrape targets should show UP state

kubectl top nodes

Expected output:

  • Should show CPU and memory usage. On a properly sized single node, both should be below 80%. If either exceeds 80%, consider increasing machine resources or reducing monitoring retention settings

If something went wrong:

  • If monitoring pods are in Pending, the node may not have enough resources. Check with kubectl describe pod -n monitoring <pod-name>
  • If kubectl top isn't available, metrics-server may not be installed. k3s includes it by default, but verify with kubectl get deployment -n kube-system metrics-server

Setup

One-Command Install

./scripts/setup-single-node.sh

This script:

  1. Checks prerequisites (curl, sudo access)
  2. Installs k3s via curl -sfL https://get.k3s.io | sh -
  3. Copies kubeconfig to ~/.kube/config
  4. Waits for the k3s node to be Ready
  5. Installs Helm (if not present)
  6. Deploys FhirHub via Helm with single-node values
  7. Deploys Prometheus + Grafana monitoring stack
  8. Deploys Loki + Promtail log aggregation
  9. Waits for all pods to reach Running state
  10. Prints access URLs
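
Condensed, the heart of the script amounts to something like the following sketch; the chart path and values file name are assumptions, and the real script adds error handling and longer waits:

# Install k3s as a single server node
curl -sfL https://get.k3s.io | sh -

# Make kubectl usable for the current user
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$USER" ~/.kube/config

# Wait until the node reports Ready
kubectl wait --for=condition=Ready node --all --timeout=120s

# Deploy FhirHub with the single-node values (chart path assumed)
helm upgrade --install fhirhub ./charts/fhirhub \
  -n fhirhub --create-namespace -f values-single-node.yaml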

Checkpoint: Verify k3s and FhirHub

Before continuing, verify k3s is running and FhirHub is deployed:

kubectl get nodes

Expected output:

  • Should show 1 node with status Ready

kubectl get pods -n fhirhub

Expected output:

  • All 6 pods should be Running and Ready (fhirhub-api, fhirhub-frontend, hapi-fhir, keycloak, hapi-postgresql, keycloak-postgresql)

kubectl get pvc -n fhirhub

Expected output:

  • Should show Bound PVCs for both PostgreSQL instances, using the local-path storage class

kubectl get ingress -n fhirhub

Expected output:

  • Should show ingress rules with traefik as the ingress class

After adding /etc/hosts entries (127.0.0.1 fhirhub.local auth.fhirhub.local):

curl -s -o /dev/null -w '%{http_code}' http://fhirhub.local

Expected output:

  • Should return 200

curl -s -o /dev/null -w '%{http_code}' http://fhirhub.local/api/dashboard/metrics

Expected output:

  • Should return 200 (or 401 if authentication is required)

curl -s -o /dev/null -w '%{http_code}' http://auth.fhirhub.local

Expected output:

  • Should return 200, confirming Keycloak is accessible

If something went wrong:

  • If pods are stuck in Pending, check for resource or storage issues: kubectl describe pod -n fhirhub <pod-name> and look at the Events section
  • If ingress returns connection refused, verify Traefik is running: kubectl get pods -n kube-system -l app.kubernetes.io/name=traefik
  • If curl times out, check that /etc/hosts entries are correct and the k3s node is listening on port 80

Or Step by Step with Make

make single-node-setup     # Install k3s + all dependencies
make single-node-deploy    # Deploy FhirHub + monitoring via Helm
make single-node-teardown  # Remove everything including k3s

DNS Configuration

After setup, add entries to /etc/hosts:

127.0.0.1 fhirhub.local auth.fhirhub.local

Or configure real DNS records pointing to your server's IP.

Accessing Services

| Service | Access |
| --- | --- |
| Frontend | http://fhirhub.local |
| API | http://fhirhub.local/api/ |
| Keycloak | http://auth.fhirhub.local |
| Grafana | `kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80` |
| Prometheus | `kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 9090:9090` |

Grafana and Prometheus are accessed via port-forward to avoid exposing monitoring externally by default. To expose them via Traefik, add ingress entries to the monitoring values.
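
If you do want them reachable through Traefik, one way (a sketch assuming the release is named prometheus and was installed from the prometheus-community repo) is to enable the Grafana subchart's ingress:

helm upgrade prometheus prometheus-community/kube-prometheus-stack -n monitoring \
  --reuse-values \
  --set grafana.ingress.enabled=true \
  --set grafana.ingress.ingressClassName=traefik \
  --set 'grafana.ingress.hosts[0]=grafana.fhirhub.local'   # hostname is an assumption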

Backup Strategy

Single-node means no replication. If the disk fails, data is lost. Regular backups are essential.

PostgreSQL Backup

Schedule a cron job to dump both databases:

# /etc/cron.d/fhirhub-backup -- each entry must stay on one line; cron does not support backslash continuation
0 2 * * * root kubectl exec -n fhirhub deploy/fhirhub-hapi-postgresql -- pg_dump -U hapi hapi | gzip > /backups/hapi-$(date +\%Y\%m\%d).sql.gz
0 2 * * * root kubectl exec -n fhirhub deploy/fhirhub-keycloak-postgresql -- pg_dump -U keycloak keycloak | gzip > /backups/keycloak-$(date +\%Y\%m\%d).sql.gz
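
A backup only counts once it restores. A quick restore sketch for the HAPI database (the dump filename is an example) looks like this:

# Restore a plain-SQL dump into the running HAPI PostgreSQL pod
gunzip -c /backups/hapi-20260203.sql.gz | \
  kubectl exec -i -n fhirhub deploy/fhirhub-hapi-postgresql -- psql -U hapi hapi
# For a clean restore, drop and recreate the database first so existing objects don't conflict.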

Persistent Volume Backup

For the monitoring PVCs (local-path volumes live under /var/lib/rancher/k3s/storage/):

# Snapshot the entire storage directory
tar czf /backups/k3s-storage-$(date +%Y%m%d).tar.gz /var/lib/rancher/k3s/storage/

Checkpoint: Test Backup

Before continuing, verify that database backups work:

kubectl exec -n fhirhub deploy/fhirhub-hapi-postgresql -- pg_dump -U hapi hapi | head -5

Expected output:

  • Should output SQL statements (e.g., -- PostgreSQL database dump, SET statement_timeout), confirming pg_dump works and the database exists

kubectl exec -n fhirhub deploy/fhirhub-keycloak-postgresql -- pg_dump -U keycloak keycloak | head -5

Expected output:

  • Same as above -- SQL output confirming the Keycloak database is accessible

ls /var/lib/rancher/k3s/storage/

Expected output:

  • Should show PVC directories for the persistent volumes. These are the local-path volumes that need to be included in filesystem backups

If something went wrong:

  • If pg_dump fails with "connection refused", check that PostgreSQL is running: kubectl get pods -n fhirhub -l app.kubernetes.io/name=postgresql
  • If pg_dump fails with "authentication failed", verify the database username matches what's configured in the Helm values
  • If the storage directory is empty, check k3s is using the default local-path provisioner: kubectl get storageclass

Off-Site

Copy backups to an external location (S3, NFS, or another server) to protect against hardware failure.
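
For example, a nightly sync to object storage can be a single command; the bucket name is a placeholder, and rclone or restic work just as well:

# Copy local dumps to S3 after the nightly pg_dump jobs finish
aws s3 sync /backups/ s3://fhirhub-backups/$(hostname)/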

When to Scale to Multi-Node

Single-node works well for:

  • Small clinics (< 100 concurrent users)
  • Development and staging environments
  • Demos and proof-of-concept deployments
  • CI/CD test environments

Consider multi-node when:

  • You need high availability (zero-downtime deployments)
  • HAPI FHIR latency increases under load (scale replicas)
  • Disk usage exceeds available space (add storage nodes)
  • Compliance requires redundancy (HIPAA production environments)

Scaling from single to multi-node requires:

  1. Install k3s agent on additional machines (curl -sfL https://get.k3s.io | K3S_URL=... K3S_TOKEN=... sh -)
  2. Switch from local-path to Longhorn for replicated storage
  3. Re-enable HPA and PDB in values
  4. Increase replica counts

The Helm charts and monitoring configuration stay the same.
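
When that time comes, joining a second machine is short. As a sketch, read the join token on the existing server and run the agent install on the new machine:

# On the existing server: print the cluster join token
sudo cat /var/lib/rancher/k3s/server/node-token

# On the new machine: install k3s in agent mode, pointing at the server
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token-from-above> sh -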

Teardown

To remove everything from the machine:

./scripts/teardown-single-node.sh

This uninstalls all Helm releases, deletes namespaces, and runs the k3s uninstall script.

What's Next

This concludes the FhirHub infrastructure series. Nineteen posts covering application development, authentication, clinical features, containerization, CI/CD, Kubernetes, GitOps, monitoring, and single-machine deployment. The entire codebase is open source -- clone it, run make up for development or ./scripts/setup-single-node.sh for a production-ready single-machine deployment.

