Infrastructure Overview
InteropNimbus runs as a Docker container behind a Traefik reverse proxy, with Nginx serving the static bundle inside the container. The full stack is defined in a single docker-compose.yml alongside other services (Keycloak, Mirth Connect, HAPI FHIR).
Multi-Stage Docker Build
The Dockerfile uses a two-stage build to minimize the final image size:
# Stage 1: Build
FROM node:22-alpine AS build
WORKDIR /app
RUN corepack enable && corepack prepare pnpm@latest --activate
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
COPY . .
ARG VITE_KEYCLOAK_URL
ARG VITE_KEYCLOAK_REALM
ARG VITE_KEYCLOAK_CLIENT_ID
ARG VITE_API_URL
RUN pnpm run build
# Stage 2: Serve
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
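Because the VITE_* values are baked into the bundle at build time, they have to be supplied when the image is built. A minimal sketch of how that could look in docker-compose.yml — the URLs, realm, and client ID here are placeholder values, not the project's actual configuration:

```yaml
interopnimbus:
  build:
    context: .
    args:
      VITE_KEYCLOAK_URL: https://auth.example.com   # placeholder
      VITE_KEYCLOAK_REALM: interop                  # placeholder
      VITE_KEYCLOAK_CLIENT_ID: interopnimbus-web    # placeholder
      VITE_API_URL: https://api.example.com         # placeholder
```

Changing any of these values means rebuilding the image, since they are compiled into the static JavaScript rather than read at runtime.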
Key Decisions
- node:22-alpine — smallest Node.js image for the build stage
- pnpm with frozen lockfile — deterministic installs, no surprise updates
- Build args for Vite — Keycloak and API URLs are injected at build time, baked into the static bundle
- nginx:alpine — the final image contains only static files and Nginx, no Node.js runtime
The result is a production image under 30MB that serves static files, with Nginx as the only runtime component.
Nginx SPA Configuration
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /assets/ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    gzip on;
    gzip_types text/plain text/css application/json application/javascript
               text/xml application/xml text/javascript image/svg+xml;
    gzip_min_length 256;
}
The critical line is try_files $uri $uri/ /index.html — this enables client-side routing by falling back to index.html for any path that doesn't match a static file. Without this, refreshing the browser on /channels/123 would return a 404.
Asset caching is set to 1 year with immutable — Vite's content-hashed filenames ensure cache busting on deploys.
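Both behaviors are easy to spot-check with curl once the stack is up. The hashed filename below is illustrative — Vite generates its own hash on each build:

```shell
# SPA fallback: an app route should answer 200 with index.html, not 404
curl -sI https://interopnimbus.davidle.dev/channels/123 | head -n 1

# Hashed asset (example filename): should carry the long-lived cache header
curl -sI https://interopnimbus.davidle.dev/assets/index-Dk3xYz9a.js | grep -i cache-control
```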
Traefik Reverse Proxy
Traefik v3.4 handles TLS termination, automatic Let's Encrypt certificates, and routing:
interopnimbus:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.interopnimbus.rule=Host(`interopnimbus.davidle.dev`)"
    - "traefik.http.routers.interopnimbus.entrypoints=websecure"
    - "traefik.http.routers.interopnimbus.tls.certresolver=letsencrypt"
    - "traefik.http.services.interopnimbus.loadbalancer.server.port=80"
Traefik discovers services via Docker labels — no config files to maintain. When the container starts, Traefik automatically provisions a Let's Encrypt certificate for interopnimbus.davidle.dev and routes HTTPS traffic to the container's port 80.
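For the labels above to work, the Traefik service itself must define the websecure entrypoint and the letsencrypt certificate resolver they reference. A sketch of that side of the compose file, assuming the HTTP-01 challenge — the ACME email is a placeholder:

```yaml
traefik:
  image: traefik:v3.4
  command:
    - "--providers.docker=true"
    - "--providers.docker.exposedbydefault=false"
    - "--entrypoints.web.address=:80"
    - "--entrypoints.websecure.address=:443"
    - "--certificatesresolvers.letsencrypt.acme.email=admin@example.com"  # placeholder
    - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
    - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock:ro"
    - "letsencrypt:/letsencrypt"
```

Setting exposedbydefault=false is why each routed service needs the explicit traefik.enable=true label.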
Network Isolation
Services are segmented into Docker networks:
- proxy — Traefik and all public-facing services
- Internal networks — database connections are isolated from the proxy network
InteropNimbus only needs the proxy network since it's a static frontend that communicates with APIs via the browser (not server-side).
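The segmentation can be sketched in compose as follows — the backing service and network names besides interopnimbus and proxy are illustrative:

```yaml
services:
  interopnimbus:
    networks:
      - proxy            # static frontend: only needs to be reachable by Traefik
  hapi-fhir:             # example backing service
    networks:
      - proxy
      - internal         # reaches its database on the internal network
  hapi-db:               # example database
    networks:
      - internal         # never attached to the proxy network
networks:
  proxy:
    external: true
  internal:
    internal: true       # no outbound routing; containers on it only see each other
```

With this layout, a compromise of the proxy-facing tier still cannot reach the database directly, since the database container is not on any shared network with Traefik.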