Tech · February 3, 2026 · 5 min read

CI/CD Pipelines with GitHub Actions

FhirHub's GitHub Actions CI/CD pipelines: reusable workflows, pull request validation, Docker image publishing, and security scanning to catch vulnerabilities before production.




By David Le -- Part 15 of the FhirHub Series

Docker images are only useful if they're built, tested, and deployed automatically. Manual builds introduce human error. Manual deployments skip tests. In healthcare, both are unacceptable.

This post covers the GitHub Actions CI/CD pipelines I built for FhirHub -- reusable workflows that eliminate duplication, a CI pipeline that validates every pull request, a release pipeline that pushes images to Docker Hub, and security scanning that catches vulnerabilities before they reach production.

Why GitHub Actions?

GitHub Actions vs. Jenkins vs. GitLab CI vs. CircleCI

| Platform | Self-Hosted | YAML Config | Marketplace | Free Tier |
|---|---|---|---|---|
| GitHub Actions | Optional | Yes | 16,000+ actions | 2,000 min/month |
| Jenkins | Required | Jenkinsfile | Plugins (fragmented) | Free (self-host) |
| GitLab CI | Optional | Yes | Templates | 400 min/month |
| CircleCI | No | Yes | Orbs | 6,000 min/month |

GitHub Actions won because FhirHub's source is already on GitHub, the marketplace has first-party actions for Docker, .NET, and Node.js, and the reusable workflow feature avoids duplicating pipeline logic.

Jenkins is the most flexible option, but it requires maintaining a build server -- patching, securing, scaling. For a project hosted on GitHub, that's overhead without benefit. GitLab CI is strong but would mean moving the repository or mirroring it. CircleCI offers more free minutes but lacks the deep GitHub integration (status checks, code scanning, SARIF uploads).

Reusable Workflows

Three reusable workflows eliminate duplication across pipelines. Each uses workflow_call so they can be invoked from other workflows like functions.

reusable-docker-build.yml

Builds, tags, pushes, and scans a Docker image:

on:
  workflow_call:
    inputs:
      context:
        required: true
        type: string
      dockerfile:
        required: true
        type: string
      image-name:
        required: true
        type: string
    secrets:
      DOCKERHUB_USERNAME:
        required: true
      DOCKERHUB_TOKEN:
        required: true

Called from release.yml twice -- once for the API, once for the frontend. Same build logic, different inputs.
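A call site in release.yml might look like the following sketch. The context and Dockerfile paths mirror the local docker build commands used in the checkpoints below; the job name and exact input values are assumptions, not the actual file:

```yaml
jobs:
  build-push-api:
    uses: ./.github/workflows/reusable-docker-build.yml
    with:
      context: FhirHubServer/
      dockerfile: FhirHubServer/src/FhirHubServer.Api/Dockerfile
      image-name: fhirhub-api
    secrets:
      DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
      DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
```

The frontend job is identical except for its `context`, `dockerfile`, and `image-name` inputs -- which is the whole point of extracting the workflow.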

The workflow uses:

  • Docker Buildx for multi-platform support and advanced caching
  • docker/metadata-action for automated tag generation (SHA, semver, latest)
  • docker/build-push-action with GitHub Actions cache (type=gha) so layers persist across runs
  • Trivy for CVE scanning after push
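Stitched together, the build job inside the reusable workflow looks roughly like this. This is a best-guess reconstruction from the pieces listed above -- action versions, the tag patterns, and the Docker Hub image naming are assumptions:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ secrets.DOCKERHUB_USERNAME }}/${{ inputs.image-name }}
          tags: |
            type=sha,prefix=sha-
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=raw,value=latest,enable={{is_default_branch}}
      - uses: docker/build-push-action@v6
        with:
          context: ${{ inputs.context }}
          file: ${{ inputs.dockerfile }}
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          # layer cache persists across runs in the GitHub Actions cache backend
          cache-from: type=gha
          cache-to: type=gha,mode=max
      # a Trivy scan step runs after the push (covered in the security section)
```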

Why Reusable Workflows vs. Composite Actions vs. Copy-Paste?

| Approach | Secrets Access | Full Job Control | Maintainability |
|---|---|---|---|
| Copy-paste YAML | Yes | Yes | Poor -- changes in N places |
| Composite actions | No (workarounds) | No (single step) | Good |
| Reusable workflows | Yes (native) | Yes (full jobs) | Best |

Reusable workflows are the only option that supports secrets natively and gives full job control (multiple steps, services, matrices). Composite actions can't access secrets directly and run as a single step, which limits what you can do.

Checkpoint: Verify Workflow Files Exist

Before continuing, verify the workflow files are in place:

ls .github/workflows/

Expected output:

  • Should show ci.yml, release.yml, security-scan.yml, reusable-docker-build.yml, reusable-dotnet.yml, reusable-node.yml

grep 'workflow_call' .github/workflows/reusable-docker-build.yml

Expected output:

  • Should output a line containing workflow_call, confirming the workflow is callable from other workflows

If something went wrong:

  • If files are missing, check that you're on the correct branch and the .github/workflows/ directory exists at the repository root
  • If workflow_call isn't found, the workflow won't be reusable -- it needs on: workflow_call: in its trigger section

reusable-dotnet.yml

Restores, builds, tests, and uploads coverage for .NET projects:

  • Uses NuGet caching via actions/setup-dotnet to avoid re-downloading packages
  • Builds in Release configuration to catch Release-only issues
  • Uploads test coverage as a build artifact for downstream analysis
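A sketch of what that job could look like, assuming the layout implied by the checkpoint commands in this post (the SDK version and artifact paths are assumptions; `cache: true` also presumes NuGet lock files exist):

```yaml
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'                    # assumption; match global.json
          cache: true                                # NuGet package caching
          cache-dependency-path: '**/packages.lock.json'
      - run: dotnet restore
        working-directory: FhirHubServer
      - run: dotnet build --configuration Release --no-restore
        working-directory: FhirHubServer
      - run: dotnet test --configuration Release --no-build --collect:"XPlat Code Coverage"
        working-directory: FhirHubServer
      - uses: actions/upload-artifact@v4
        with:
          name: coverage
          path: '**/coverage.cobertura.xml'
```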

reusable-node.yml

Installs, lints, typechecks, tests, and builds the frontend:

  • Uses npm caching via actions/setup-node
  • Runs npm run lint, npx tsc --noEmit, and npm run test:coverage as separate steps
  • Uploads the build artifact so downstream Docker builds can verify it succeeded
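Roughly, assuming the frontend lives in a `frontend/` directory as the checkpoints suggest (the Node version and build-output path are assumptions):

```yaml
jobs:
  build-test:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: frontend
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '22'                          # assumption; match .nvmrc
          cache: npm                                  # npm package caching
          cache-dependency-path: frontend/package-lock.json
      - run: npm ci
      - run: npm run lint
      - run: npx tsc --noEmit
      - run: npm run test:coverage
      - run: npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: frontend-build
          path: frontend/.next                        # assumption for a Next.js build
```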

CI Pipeline (Pull Requests)

Every pull request triggers four jobs -- two test jobs that run in parallel, each gating a Docker build:

PR opened/updated
  ├── api-build-test (reusable-dotnet) ──> docker-build-api (build only, no push)
  └── frontend-build-test (reusable-node) ──> docker-build-frontend (build only, no push)

Docker builds run after tests pass. They verify the Dockerfile works without pushing anything -- catching build failures before merge.

cancel-in-progress: true kills running CI jobs when a new push arrives on the same branch. No wasted compute on outdated commits.
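In GitHub Actions that's a top-level concurrency block; the group key shown here is one common convention, not necessarily the exact one in ci.yml:

```yaml
concurrency:
  group: ci-${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true   # a new push to the branch cancels the previous run
```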

Why Build Docker Images in CI Without Pushing?

A Dockerfile can break independently from the application code. A dependency version change, a missing COPY path, or a new build arg can all cause failures. Building in CI catches these problems before the merge. Pushing would waste registry space on images from unmerged code.
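In ci.yml this is the same build step as the release pipeline with `push: false`. A sketch, with job names assumed:

```yaml
docker-build-api:
  needs: api-build-test          # only build if tests passed
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: docker/setup-buildx-action@v3
    - uses: docker/build-push-action@v6
      with:
        context: FhirHubServer/
        file: FhirHubServer/src/FhirHubServer.Api/Dockerfile
        push: false              # verify the image builds; never publish PR images
        cache-from: type=gha
        cache-to: type=gha,mode=max
```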

Checkpoint: Test CI Locally

Before continuing, verify the same checks the CI pipeline runs pass on your machine:

cd FhirHubServer && dotnet test --verbosity normal

Expected output:

  • All tests pass. Look for Passed! at the end

cd frontend && npm run lint && npx tsc --noEmit && npm run test:run

Expected output:

  • Lint passes with no errors, TypeScript compiles with no type errors, and all tests pass

docker build -t fhirhub-api:ci-test -f FhirHubServer/src/FhirHubServer.Api/Dockerfile FhirHubServer/
docker build -t fhirhub-frontend:ci-test -f frontend/Dockerfile frontend/

Expected output:

  • Both images build successfully. These are the same Docker builds the CI pipeline runs on every PR

If something went wrong:

  • If dotnet test fails, check that you have the correct .NET SDK version (dotnet --version)
  • If npm run lint fails, run npm run lint -- --fix to auto-fix formatting issues
  • If Docker builds fail locally but code tests pass, the issue is likely in the Dockerfile (missing files, wrong paths)

Release Pipeline (Main Branch)

Merging to main triggers the release workflow:

Push to main
  ├── test-api ──> build-push-api ───────────┐
  └── test-frontend ──> build-push-frontend ──┴──> update-manifests

The update-manifests job uses yq to write the new image tag (sha-<commit>) into helm/fhirhub/values.yaml and commits it back. This is the bridge between CI/CD and GitOps -- ArgoCD watches that file for changes.
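The job could look like the sketch below. The yq key paths into values.yaml, the commit message, and the use of short SHAs are assumptions about the repository layout, not the actual file:

```yaml
update-manifests:
  needs: [build-push-api, build-push-frontend]
  runs-on: ubuntu-latest
  permissions:
    contents: write              # allow the job to push the manifest commit
  steps:
    - uses: actions/checkout@v4
    - name: Bump image tag in Helm values
      run: |
        export TAG="sha-${GITHUB_SHA::7}"
        # key paths below are assumptions about the values.yaml layout
        yq -i '.api.image.tag = strenv(TAG)' helm/fhirhub/values.yaml
        yq -i '.frontend.image.tag = strenv(TAG)' helm/fhirhub/values.yaml
        git config user.name "github-actions[bot]"
        git config user.email "github-actions[bot]@users.noreply.github.com"
        git commit -am "chore: deploy ${TAG}"
        git push
```

ArgoCD then sees the new commit and rolls the cluster forward -- no kubectl in the pipeline.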

Why Update Manifests in the Pipeline?

| Deployment Trigger | GitOps Compatible | Auditable | Rollback |
|---|---|---|---|
| kubectl apply in pipeline | No | Pipeline logs only | Re-run old pipeline |
| Webhook to ArgoCD | Partial | ArgoCD logs | ArgoCD revert |
| Update values.yaml in Git | Yes | Full git history | git revert |

Writing the image tag back to Git means every deployment is a commit. You can git log to see what's deployed, git revert to roll back, and git blame to see who triggered it. ArgoCD picks up the change and syncs the cluster.

Image Tagging Strategy

| Trigger | Tags Applied |
|---|---|
| Push to main | latest, sha-abc1234 |
| Tag v1.2.3 | 1.2.3, 1.2, 1, latest |

The SHA tag is immutable -- it always points to the exact code that built it. The semver tags follow Docker conventions for version pinning. Users who want stability pin to a major version (1). Users who want the latest pin to latest or track main.

Why SHA Tags Over Build Numbers?

| Tag Strategy | Immutable | Traceable to Code | Unique |
|---|---|---|---|
| Build number (build-42) | Yes | No (need lookup) | Yes |
| Timestamp (20240115) | Yes | No | Usually |
| Git SHA (sha-abc1234) | Yes | Yes (direct) | Yes |
| Branch (main) | No (mutable) | No | No |

SHA tags are the only strategy that's both immutable and directly traceable. Given a running container, you can find the exact commit without consulting any external system.

Security Scanning

The security workflow runs on every push, every PR, and weekly on a schedule:

  • Trivy scans both Docker images for CVEs (CRITICAL and HIGH severity)
  • Hadolint lints both Dockerfiles for best practices
  • CodeQL performs SAST (Static Application Security Testing) on C# and TypeScript
  • dependency-review-action flags new vulnerable dependencies introduced in PRs
  • dotnet list package --vulnerable checks NuGet packages
  • npm audit checks npm packages
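The trigger block for such a workflow is short; the exact cron slot here is an assumption -- any weekly schedule works:

```yaml
on:
  push:
  pull_request:
  schedule:
    - cron: '0 6 * * 1'   # weekly: Mondays at 06:00 UTC
```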

Results upload to GitHub's Security tab as SARIF reports. You see vulnerabilities in the same UI where you review code.
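The Trivy-to-Security-tab handoff might look like this sketch (the image reference is illustrative):

```yaml
- uses: aquasecurity/trivy-action@master
  with:
    image-ref: ${{ secrets.DOCKERHUB_USERNAME }}/fhirhub-api:latest  # illustrative
    format: sarif
    output: trivy-results.sarif
    severity: CRITICAL,HIGH
- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: trivy-results.sarif   # findings appear in the repo's Security tab
```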

Why Multiple Scanners?

| Scanner | What It Finds | Layer |
|---|---|---|
| Trivy | OS package CVEs, library CVEs | Container image |
| Hadolint | Dockerfile anti-patterns | Build definition |
| CodeQL | Code-level vulnerabilities (SQLi, XSS) | Source code |
| dependency-review | Newly introduced vulnerable deps | Pull request |
| npm audit / dotnet vulnerable | Known package vulnerabilities | Package manifest |

No single scanner covers everything. Trivy finds CVEs in Alpine packages but not in your TypeScript logic. CodeQL finds code vulnerabilities but not outdated base images. Layering them provides defense in depth.

Checkpoint: Run Security Scans Locally

Before continuing, verify you can run the same security scans locally:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image --severity CRITICAL,HIGH fhirhub-api:ci-test

Expected output:

  • A table of CVEs (if any) at CRITICAL or HIGH severity. Zero results is ideal. The scan itself should complete without errors

docker run --rm -i hadolint/hadolint < FhirHubServer/src/FhirHubServer.Api/Dockerfile

Expected output:

  • Dockerfile best-practice warnings (if any). Common ones include pinning package versions and combining RUN commands. No output means the Dockerfile follows all best practices

cd FhirHubServer && dotnet list package --vulnerable

Expected output:

  • Lists any NuGet packages with known vulnerabilities. Ideally shows No vulnerable packages found

cd frontend && npm audit

Expected output:

  • Lists any npm packages with known vulnerabilities. found 0 vulnerabilities is the goal

If something went wrong:

  • If Trivy can't pull, ensure Docker is running and you have internet access
  • If dotnet list package --vulnerable isn't recognized, update your .NET SDK -- the command was introduced in SDK 5.0.200, so any recent SDK includes it
  • Any CRITICAL findings should be addressed before merging to main

What's Next

In Part 16, we'll package FhirHub for Kubernetes using Helm -- an umbrella chart with sub-charts for each service, a library chart for shared templates, and values files that cleanly separate dev, staging, and production configurations.


