CI/CD Pipelines with GitHub Actions
By David Le -- Part 15 of the FhirHub Series
Docker images are only useful if they're built, tested, and deployed automatically. Manual builds introduce human error. Manual deployments skip tests. In healthcare, both are unacceptable.
This post covers the GitHub Actions CI/CD pipelines I built for FhirHub -- reusable workflows that eliminate duplication, a CI pipeline that validates every pull request, a release pipeline that pushes images to Docker Hub, and security scanning that catches vulnerabilities before they reach production.
Why GitHub Actions?
GitHub Actions vs. Jenkins vs. GitLab CI vs. CircleCI
| Platform | Self-Hosted | YAML Config | Marketplace | Free Tier |
|---|---|---|---|---|
| GitHub Actions | Optional | Yes | 16,000+ actions | 2,000 min/month |
| Jenkins | Required | Jenkinsfile | Plugins (fragmented) | Free (self-host) |
| GitLab CI | Optional | Yes | Templates | 400 min/month |
| CircleCI | No | Yes | Orbs | 6,000 min/month |
GitHub Actions won because FhirHub's source is already on GitHub, the marketplace has first-party actions for Docker, .NET, and Node.js, and the reusable workflow feature avoids duplicating pipeline logic.
Jenkins is the most flexible option, but it requires maintaining a build server -- patching, securing, scaling. For a project hosted on GitHub, that's overhead without benefit. GitLab CI is strong but would mean moving the repository or mirroring it. CircleCI offers more free minutes but lacks the deep GitHub integration (status checks, code scanning, SARIF uploads).
Reusable Workflows
Three reusable workflows eliminate duplication across pipelines. Each uses `workflow_call` so they can be invoked from other workflows like functions.
reusable-docker-build.yml
Builds, tags, pushes, and scans a Docker image:
```yaml
on:
  workflow_call:
    inputs:
      context:
        required: true
        type: string
      dockerfile:
        required: true
        type: string
      image-name:
        required: true
        type: string
    secrets:
      DOCKERHUB_USERNAME:
        required: true
      DOCKERHUB_TOKEN:
        required: true
```
Called from `release.yml` twice -- once for the API, once for the frontend. Same build logic, different inputs.
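As a sketch, a caller job in `release.yml` might look like this (the input values are illustrative -- match them to your repository layout):

```yaml
jobs:
  build-push-api:
    uses: ./.github/workflows/reusable-docker-build.yml
    with:
      context: ./FhirHubServer
      dockerfile: FhirHubServer/src/FhirHubServer.Api/Dockerfile
      image-name: fhirhub-api
    secrets:
      DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
      DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
```

Note that `uses:` replaces `runs-on:` and `steps:` entirely -- the called workflow supplies the whole job.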
The workflow uses:
- Docker Buildx for multi-platform support and advanced caching
- `docker/metadata-action` for automated tag generation (SHA, semver, latest)
- `docker/build-push-action` with GitHub Actions cache (`type=gha`) so layers persist across runs
- Trivy for CVE scanning after push
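The steps above can be sketched roughly as follows (action versions and the image registry path are assumptions, not taken from the actual workflow):

```yaml
- uses: docker/setup-buildx-action@v3
- id: meta
  uses: docker/metadata-action@v5
  with:
    images: docker.io/${{ secrets.DOCKERHUB_USERNAME }}/${{ inputs.image-name }}
    tags: |
      type=sha,prefix=sha-
      type=semver,pattern={{version}}
- uses: docker/build-push-action@v6
  with:
    context: ${{ inputs.context }}
    file: ${{ inputs.dockerfile }}
    push: true
    tags: ${{ steps.meta.outputs.tags }}
    cache-from: type=gha
    cache-to: type=gha,mode=max
```

`cache-to: type=gha,mode=max` caches all intermediate layers, not just the final stage, which matters for multi-stage Dockerfiles.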
Why Reusable Workflows vs. Composite Actions vs. Copy-Paste?
| Approach | Secrets Access | Full Job Control | Maintainability |
|---|---|---|---|
| Copy-paste YAML | Yes | Yes | Poor -- changes in N places |
| Composite actions | No (workarounds) | No (single step) | Good |
| Reusable workflows | Yes (native) | Yes (full jobs) | Best |
Reusable workflows are the only option that supports secrets natively and gives full job control (multiple steps, services, matrices). Composite actions can't access secrets directly and run as a single step, which limits what you can do.
Checkpoint: Verify Workflow Files Exist
Before continuing, verify the workflow files are in place:
ls .github/workflows/
Expected output:
- Should show `ci.yml`, `release.yml`, `security-scan.yml`, `reusable-docker-build.yml`, `reusable-dotnet.yml`, `reusable-node.yml`
grep 'workflow_call' .github/workflows/reusable-docker-build.yml
Expected output:
- Should output a line containing `workflow_call`, confirming the workflow is callable from other workflows
If something went wrong:
- If files are missing, check that you're on the correct branch and the `.github/workflows/` directory exists at the repository root
- If `workflow_call` isn't found, the workflow won't be reusable -- it needs `on: workflow_call:` in its trigger section
reusable-dotnet.yml
Restores, builds, tests, and uploads coverage for .NET projects:
- Uses NuGet caching via `actions/setup-dotnet` to avoid re-downloading packages
- Builds in `Release` configuration to catch Release-only issues
- Uploads test coverage as a build artifact for downstream analysis
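A minimal sketch of those job steps, assuming a .NET 8 SDK and lock files for NuGet caching (the SDK version, lock-file path, and coverage path are assumptions):

```yaml
- uses: actions/checkout@v4
- uses: actions/setup-dotnet@v4
  with:
    dotnet-version: '8.0.x'
    cache: true
    cache-dependency-path: '**/packages.lock.json'
- run: dotnet restore
- run: dotnet build --configuration Release --no-restore
- run: dotnet test --configuration Release --no-build --collect:"XPlat Code Coverage"
- uses: actions/upload-artifact@v4
  with:
    name: coverage
    path: '**/TestResults/**/coverage.cobertura.xml'
```

`--no-restore` and `--no-build` keep each step from silently redoing the previous one, so a failure points at the right stage.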
reusable-node.yml
Installs, lints, typechecks, tests, and builds the frontend:
- Uses npm caching via `actions/setup-node`
- Runs `npm run lint`, `npx tsc --noEmit`, and `npm run test:coverage` as separate steps
- Uploads the build artifact so downstream Docker builds can verify it succeeded
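A sketch of the equivalent job steps (the Node version and the `frontend/dist` output path are assumptions):

```yaml
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: npm
    cache-dependency-path: frontend/package-lock.json
- run: npm ci
  working-directory: frontend
- run: npm run lint
  working-directory: frontend
- run: npx tsc --noEmit
  working-directory: frontend
- run: npm run test:coverage
  working-directory: frontend
- run: npm run build
  working-directory: frontend
- uses: actions/upload-artifact@v4
  with:
    name: frontend-dist
    path: frontend/dist
```

Separate steps mean the Actions UI shows exactly which check failed, instead of one opaque "build" step.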
CI Pipeline (Pull Requests)
Every pull request triggers four parallel jobs:
PR opened/updated
├── api-build-test (reusable-dotnet)
├── frontend-build-test (reusable-node)
├── docker-build-api (build only, no push)
└── docker-build-frontend (build only, no push)
Docker builds run after tests pass. They verify the Dockerfile works without pushing anything -- catching build failures before merge.
`cancel-in-progress: true` kills running CI jobs when a new push arrives on the same branch. No wasted compute on outdated commits.
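The concurrency setting is a small top-level block in the workflow file (the group name here is illustrative):

```yaml
concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true
```

Keying the group on `github.ref` means pushes to different branches never cancel each other -- only a newer push to the same branch does.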
Why Build Docker Images in CI Without Pushing?
A Dockerfile can break independently from the application code. A dependency version change, a missing COPY path, or a new build arg can all cause failures. Building in CI catches these problems before the merge. Pushing would waste registry space on images from unmerged code.
Checkpoint: Test CI Locally
Before continuing, verify the same checks the CI pipeline runs pass on your machine:
cd FhirHubServer && dotnet test --verbosity normal
Expected output:
- All tests pass. Look for `Passed!` at the end
cd frontend && npm run lint && npx tsc --noEmit && npm run test:run
Expected output:
- Lint passes with no errors, TypeScript compiles with no type errors, and all tests pass
docker build -t fhirhub-api:ci-test -f FhirHubServer/src/FhirHubServer.Api/Dockerfile FhirHubServer/
docker build -t fhirhub-frontend:ci-test -f frontend/Dockerfile frontend/
Expected output:
- Both images build successfully. These are the same Docker builds the CI pipeline runs on every PR
If something went wrong:
- If `dotnet test` fails, check that you have the correct .NET SDK version (`dotnet --version`)
- If `npm run lint` fails, run `npm run lint -- --fix` to auto-fix formatting issues
- If Docker builds fail locally but code tests pass, the issue is likely in the Dockerfile (missing files, wrong paths)
Release Pipeline (Main Branch)
Merging to main triggers the release workflow:
Push to main
├── test-api ──> build-push-api ──┐
├── test-frontend ──> build-push-frontend ──┤
└────────────────────────────────────────> update-manifests
The `update-manifests` job uses `yq` to write the new image tag (`sha-<commit>`) into `helm/fhirhub/values.yaml` and commits it back. This is the bridge between CI/CD and GitOps -- ArgoCD watches that file for changes.
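A sketch of that job, assuming the Helm values keys are `api.image.tag` and `frontend.image.tag` (the key names and commit message are assumptions):

```yaml
update-manifests:
  needs: [build-push-api, build-push-frontend]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Bump image tags in Helm values
      run: |
        yq -i '.api.image.tag = "sha-${{ github.sha }}"' helm/fhirhub/values.yaml
        yq -i '.frontend.image.tag = "sha-${{ github.sha }}"' helm/fhirhub/values.yaml
    - name: Commit back
      run: |
        git config user.name "github-actions[bot]"
        git config user.email "github-actions[bot]@users.noreply.github.com"
        git commit -am "chore: deploy sha-${GITHUB_SHA::7}" || echo "no changes"
        git push
```

The `needs:` list makes this job wait for both image pushes, so the manifest never references a tag that doesn't exist in the registry yet.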
Why Update Manifests in the Pipeline?
| Deployment Trigger | GitOps Compatible | Auditable | Rollback |
|---|---|---|---|
| `kubectl apply` in pipeline | No | Pipeline logs only | Re-run old pipeline |
| Webhook to ArgoCD | Partial | ArgoCD logs | ArgoCD revert |
| Update values.yaml in Git | Yes | Full git history | git revert |
Writing the image tag back to Git means every deployment is a commit. You can `git log` to see what's deployed, `git revert` to roll back, and `git blame` to see who triggered it. ArgoCD picks up the change and syncs the cluster.
Image Tagging Strategy
| Trigger | Tags Applied |
|---|---|
| Push to `main` | `latest`, `sha-abc1234` |
| Tag `v1.2.3` | `1.2.3`, `1.2`, `1`, `latest` |
The SHA tag is immutable -- it always points to the exact code that built it. The semver tags follow Docker conventions for version pinning. Users who want stability pin to a major version (`1`). Users who want the latest pin to `latest` or track `main`.
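The tag rules in the table above can be expressed as `docker/metadata-action` configuration (a sketch; `is_default_branch` gates `latest` to the default branch):

```yaml
tags: |
  type=raw,value=latest,enable={{is_default_branch}}
  type=sha,prefix=sha-
  type=semver,pattern={{version}}
  type=semver,pattern={{major}}.{{minor}}
  type=semver,pattern={{major}}
```

The `semver` rules only fire on `v*` tag pushes, so a push to `main` produces just the `latest` and `sha-` tags.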
Why SHA Tags Over Build Numbers?
| Tag Strategy | Immutable | Traceable to Code | Unique |
|---|---|---|---|
| Build number (`build-42`) | Yes | No (need lookup) | Yes |
| Timestamp (`20240115`) | Yes | No | Usually |
| Git SHA (`sha-abc1234`) | Yes | Yes (direct) | Yes |
| Branch (`main`) | No (mutable) | No | No |
SHA tags are the only strategy that's both immutable and directly traceable. Given a running container, you can find the exact commit without consulting any external system.
Security Scanning
The security workflow runs on every push, every PR, and weekly on a schedule:
- Trivy scans both Docker images for CVEs (CRITICAL and HIGH severity)
- Hadolint lints both Dockerfiles for best practices
- CodeQL performs SAST (Static Application Security Testing) on C# and TypeScript
- `dependency-review-action` flags new vulnerable dependencies introduced in PRs
- `dotnet list package --vulnerable` checks NuGet packages
- `npm audit` checks npm packages
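As an illustration, the Trivy scan and its SARIF upload can be sketched as two steps (the image reference is an assumption):

```yaml
- uses: aquasecurity/trivy-action@master
  with:
    image-ref: fhirhub-api:${{ github.sha }}
    format: sarif
    output: trivy-results.sarif
    severity: CRITICAL,HIGH
- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: trivy-results.sarif
```

The `upload-sarif` action is what lands the findings in the Security tab; without it, results live only in the job log.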
Results upload to GitHub's Security tab as SARIF reports. You see vulnerabilities in the same UI where you review code.
Why Multiple Scanners?
| Scanner | What It Finds | Layer |
|---|---|---|
| Trivy | OS package CVEs, library CVEs | Container image |
| Hadolint | Dockerfile anti-patterns | Build definition |
| CodeQL | Code-level vulnerabilities (SQLi, XSS) | Source code |
| dependency-review | Newly introduced vulnerable deps | Pull request |
| npm audit / dotnet vulnerable | Known package vulnerabilities | Package manifest |
No single scanner covers everything. Trivy finds CVEs in Alpine packages but not in your TypeScript logic. CodeQL finds code vulnerabilities but not outdated base images. Layering them provides defense in depth.
Checkpoint: Run Security Scans Locally
Before continuing, verify you can run the same security scans locally:
docker run --rm aquasec/trivy image fhirhub-api:ci-test --severity CRITICAL,HIGH
Expected output:
- A table of CVEs (if any) at CRITICAL or HIGH severity. Zero results is ideal. The scan itself should complete without errors
docker run --rm -i hadolint/hadolint < FhirHubServer/src/FhirHubServer.Api/Dockerfile
Expected output:
- Dockerfile best-practice warnings (if any). Common ones include pinning package versions and combining `RUN` commands. No output means the Dockerfile follows all best practices
cd FhirHubServer && dotnet list package --vulnerable
Expected output:
- Lists any NuGet packages with known vulnerabilities. Ideally it reports that the project has no vulnerable packages
cd frontend && npm audit
Expected output:
- Lists any npm packages with known vulnerabilities. `found 0 vulnerabilities` is the goal
If something went wrong:
- If Trivy can't pull, ensure Docker is running and you have internet access
- If `dotnet list package --vulnerable` isn't recognized, update to .NET SDK 7.0+ which includes this command
- Any CRITICAL findings should be addressed before merging to main
What's Next
In Part 16, we'll package FhirHub for Kubernetes using Helm -- an umbrella chart with sub-charts for each service, a library chart for shared templates, and values files that cleanly separate dev, staging, and production configurations.