Platform Architecture
51+ microservices. 4 gateways. 29 databases. One coherent platform for healthcare quality measurement and care gap detection.
HDIM is a healthcare interoperability platform built on Java 21 and Spring Boot 3.x. It evaluates clinical quality measures (CQL/HEDIS), identifies care gaps, performs risk stratification, and generates quality reports for value-based care contracts. This page provides a technical overview of the platform's architecture across four layers: service organization, data flow, infrastructure, and security.
Service Layer Overview
Services are organized into four categories, each with distinct responsibilities and deployment characteristics.
API Gateways (4 services)
Four specialized gateways handle traffic routing, JWT validation, and trusted header injection. The public gateway serves unauthenticated endpoints (health checks, FHIR metadata). The clinical gateway routes authenticated clinical operations. The admin gateway handles tenant management and system configuration. The data ingestion gateway manages bulk FHIR resource imports with rate limiting and backpressure. All four share a gateway-core module for common authentication and routing logic.
Event Services (4 services)
Dedicated event services implement CQRS and event sourcing patterns. Patient Event Service captures patient lifecycle events. Care Gap Event Service tracks gap status changes. Evaluation Event Service records measure evaluation results. Quality Measure Event Service handles measure definition updates. Each service publishes to Kafka topics and maintains materialized views for read-optimized queries.
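The projection side of this pattern can be sketched in a few lines: events are applied in order to an in-memory read model that answers status queries without replaying the log. The class and event names below are illustrative, not the platform's actual types.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Minimal sketch of a materialized view maintained from an event stream.
// A real service would consume these events from Kafka; here they are applied directly.
public class CareGapProjection {
    public record GapEvent(String gapId, String status) {}

    private final Map<String, String> latestStatus = new HashMap<>();

    // Fold each event into the read-optimized view (last-write-wins per gap).
    public void apply(GapEvent e) {
        latestStatus.put(e.gapId(), e.status());
    }

    public Optional<String> statusOf(String gapId) {
        return Optional.ofNullable(latestStatus.get(gapId));
    }
}
```

A consumer that replays the topic from offset zero rebuilds this view deterministically, which is what makes the read model disposable.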
Domain Services (30+ services)
Core business logic resides in domain services organized by bounded context. Key services include: Patient Service (demographics, health records), Care Gap Service (gap detection, closure tracking), Quality Measure Service (HEDIS measure definitions), CQL Engine (Clinical Quality Language evaluation), FHIR Service (R4 resource management), Risk Stratification Service (population health scoring), and Reporting Service (quality report generation).
Shared Modules (6+ modules)
Cross-cutting concerns are encapsulated in shared Gradle modules. gateway-core provides authentication filters and routing utilities. domain-common defines shared value objects and tenant context. contract-testing provides Pact test infrastructure. openapi-validation validates API compliance. event-common standardizes Kafka event schemas. test-common provides shared test utilities including TestEventWaiter.
Data Flow
Clinical data flows through four stages, from ingestion to actionable care gap detection.
Clinical data enters the platform as FHIR R4 resources through the data ingestion gateway. Supported formats include individual FHIR resources, FHIR Bundles, and NDJSON bulk imports. HAPI FHIR validates resource structure and profiles. Valid resources are persisted and published to Kafka for downstream processing.
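For NDJSON bulk imports, the ingestion step begins by splitting the payload into one JSON document per line before validation. The helper below is a simplified sketch of that splitting step only (the name and behavior are assumptions; real validation is delegated to HAPI FHIR).

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: split an NDJSON bulk import into per-resource JSON strings.
// Each non-blank line of an NDJSON payload is one FHIR resource.
public class NdjsonSplitter {
    public static List<String> split(String ndjson) {
        return Arrays.stream(ndjson.split("\n"))
                .map(String::trim)               // tolerate trailing \r on Windows line endings
                .filter(line -> !line.isEmpty()) // skip blank lines between resources
                .collect(Collectors.toList());
    }
}
```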
Event services consume ingestion events and maintain materialized views. Patient demographics are denormalized for fast lookup. Clinical resource indexes are updated. Event projections create read-optimized data structures for the evaluation engine.
The CQL Engine evaluates clinical quality measures against patient data. HEDIS measure definitions specify the clinical logic. The engine processes patient populations and produces measure-level and patient-level results. Evaluation events are published for downstream consumption.
Evaluation results feed the Care Gap Service, which identifies patients with unmet quality measure criteria. Gaps are tracked through their lifecycle: open, addressed, closed, and excluded. Risk stratification scores prioritize outreach. Reports are generated for quality submission and operational dashboards.
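The gap lifecycle above is naturally a small state machine. The sketch below encodes one plausible set of allowed transitions; the text names only the four states, so the transition rules themselves are assumptions for illustration.

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Illustrative state machine for the care gap lifecycle: open, addressed, closed, excluded.
public class GapLifecycle {
    public enum Status { OPEN, ADDRESSED, CLOSED, EXCLUDED }

    private static final Map<Status, Set<Status>> ALLOWED = new EnumMap<>(Status.class);
    static {
        ALLOWED.put(Status.OPEN, EnumSet.of(Status.ADDRESSED, Status.EXCLUDED));
        ALLOWED.put(Status.ADDRESSED, EnumSet.of(Status.CLOSED, Status.OPEN)); // can reopen
        ALLOWED.put(Status.CLOSED, EnumSet.noneOf(Status.class));   // terminal
        ALLOWED.put(Status.EXCLUDED, EnumSet.noneOf(Status.class)); // terminal
    }

    public static boolean canTransition(Status from, Status to) {
        return ALLOWED.getOrDefault(from, Set.of()).contains(to);
    }
}
```

Centralizing the transition table keeps illegal jumps (e.g. closed back to open) out of every service that touches gap status.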
Infrastructure
The platform runs on four core infrastructure components, each selected for healthcare workload characteristics.
PostgreSQL 16
Primary data store with 29 independent databases. Each service owns its schema lifecycle via Liquibase: 199 changesets with 100% rollback coverage. Tenant isolation is enforced at the query level; every query includes WHERE tenant_id = :tenantId.
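One way to make the tenant filter unavoidable is to append it centrally rather than relying on each query author. The helper below is a sketch of that idea, not the platform's actual data access code.

```java
// Sketch: append the tenant predicate unconditionally so no query path can omit it.
public class TenantScopedSql {
    public static String scope(String baseQuery, String tenantParam) {
        // Use AND if the query already has a WHERE clause, otherwise start one.
        String joiner = baseQuery.toLowerCase().contains(" where ") ? " AND " : " WHERE ";
        return baseQuery + joiner + "tenant_id = :" + tenantParam;
    }
}
```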
Redis 7
Caching layer with HIPAA-compliant TTL enforcement. PHI-containing cache entries expire within 5 minutes. Session state, rate-limiting counters, and OAuth state parameters are stored with TTL policies matched to their sensitivity.
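The 5-minute PHI cap can be enforced as a single policy check applied before any cache write. This is a sketch of that policy (the class name and enforcement point are assumptions; only the 5-minute threshold comes from the text).

```java
import java.time.Duration;

// Sketch: cap the TTL of PHI-bearing cache entries at five minutes,
// regardless of what TTL the caller requested.
public class CacheTtlPolicy {
    private static final Duration PHI_MAX_TTL = Duration.ofMinutes(5);

    public static Duration effectiveTtl(Duration requested, boolean containsPhi) {
        if (containsPhi && requested.compareTo(PHI_MAX_TTL) > 0) {
            return PHI_MAX_TTL;
        }
        return requested;
    }
}
```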
Apache Kafka 3.x
Event streaming backbone for CQRS and event sourcing. Tenant-scoped topic partitioning ensures data isolation. Dead letter queues capture failed events for retry and audit. OpenTelemetry trace context propagated through message headers.
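Tenant-scoped partitioning typically works by keying every record with the tenant ID, so all of a tenant's events hash to the same partition and are processed in order. The hashing scheme below is illustrative; Kafka's default partitioner uses murmur2 rather than String.hashCode.

```java
// Sketch: derive a stable partition from the tenant ID so one tenant's
// events always land on the same partition (and thus stay ordered).
public class TenantPartitioner {
    public static int partitionFor(String tenantId, int numPartitions) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(tenantId.hashCode(), numPartitions);
    }
}
```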
Kong API Gateway
Production API gateway providing rate limiting, request transformation, and upstream load balancing. Configured with declarative YAML for reproducible deployments. Health check probes verify upstream service availability.
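A declarative Kong configuration for one upstream might look like the following sketch. Service names, paths, and limits are illustrative, not the platform's actual configuration.

```yaml
_format_version: "3.0"
services:
  - name: clinical-gateway          # illustrative upstream name
    url: http://clinical-gateway:8080
    routes:
      - name: clinical-route
        paths:
          - /api/clinical
    plugins:
      - name: rate-limiting
        config:
          minute: 600               # example limit
          policy: redis             # shared counters across gateway nodes
```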
Security Architecture
Security is enforced at three layers: gateway, service, and data.
Gateway Trust
Each gateway validates JWT tokens and injects trusted headers: X-Auth-User, X-Auth-Roles, and X-Tenant-ID. Downstream services trust these headers via TrustedHeaderAuthFilter, which eliminates redundant token validation and reduces request latency.
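The header injection step amounts to mapping validated JWT claims onto the three trusted headers. The sketch below shows that mapping only; the method and claim names are assumptions (the header names come from the text).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: headers a gateway injects after successful JWT validation.
public class TrustedHeaders {
    public static Map<String, String> fromClaims(String subject, String roles, String tenantId) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("X-Auth-User", subject);
        headers.put("X-Auth-Roles", roles);    // e.g. comma-separated role list
        headers.put("X-Tenant-ID", tenantId);
        return headers;
    }
}
```

Because only the gateways are reachable from outside, downstream services can treat these headers as authenticated facts rather than re-verifying the token.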
Role-Based Access Control
Five role tiers provide granular access control. SUPER_ADMIN has full system access. ADMIN manages tenant-level configuration. EVALUATOR runs quality evaluations. ANALYST views reports. VIEWER has read-only access. Every API endpoint is annotated with @PreAuthorize specifying required roles.
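If the five tiers form a strict hierarchy, a check can compare positions in the ordering. That hierarchy is an assumption for illustration; the text names the tiers but does not state that each one subsumes the ones below it.

```java
// Sketch: the five role tiers as an ordered hierarchy, lowest privilege first.
public class Rbac {
    public enum Role { VIEWER, ANALYST, EVALUATOR, ADMIN, SUPER_ADMIN }

    // true if the held role grants at least the privileges of the required role
    public static boolean satisfies(Role held, Role required) {
        return held.ordinal() >= required.ordinal();
    }
}
```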
Tenant Isolation
Multi-tenancy is enforced at the database query level. Every repository method filters by tenantId. Integration tests explicitly verify that Tenant A cannot access Tenant B's data. Kafka event partitioning ensures tenant-scoped event processing. Cache keys are prefixed with tenant identifiers.
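The cache-key prefixing mentioned above is simple but worth making explicit: building the key through one helper guarantees entries from different tenants can never collide. The helper name and separator are assumptions.

```java
// Sketch: tenant-prefixed cache keys, so tenant-a and tenant-b
// can never read each other's cached entries.
public class TenantCacheKey {
    public static String of(String tenantId, String key) {
        return tenantId + ":" + key;
    }
}
```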
Architecture Evolution
This architecture was not designed in a single session. It evolved through seven deliberate phases, each addressing problems exposed by the previous phase. Read the Architecture Evolution Timeline for the full story.
Explore the Platform
Technical deep-dives, methodology analysis, and build evidence for healthcare technology leaders.