Real-Time Hospital Capacity Dashboard: FHIR + Streaming Data Architecture

Jordan Hale
2026-05-24
20 min read

A reference architecture for a real-time hospital capacity dashboard using Kafka, FHIR, ADT events, bed sensors, OR schedules, and telehealth queues.

Hospital capacity is no longer a periodic reporting problem; it is a live operations problem. When bed occupancy changes by the minute, when an OR case slips, when an ED boarder arrives, or when telehealth demand spikes, leaders need a dashboard that reflects the truth now—not at midnight. This guide presents a reference architecture for a real-time hospital capacity dashboard built on middleware observability for healthcare, streaming infrastructure, and FHIR resources, with specific attention to ADT events, bed sensors, OR schedules, and telehealth queues. For teams evaluating modern interoperability approaches, it also borrows from the implementation patterns described in Veeva CRM and Epic EHR integration and the market trend toward cloud-based capacity tooling highlighted in the hospital capacity management market outlook.

The central idea is simple: ingest operational events continuously, normalize them into interoperable resources, and expose a decision-grade view of current and near-future capacity. In practice, that means combining HL7 ADT messages, FHIR Patient/Encounter/Location/Bed data, sensor feeds from beds and environmental systems, OR schedule sources, and telehealth waiting lists into a streaming backbone such as Kafka. If you are building a broader interoperability stack, you may also find value in our guides on cloud-native vs hybrid for regulated workloads and environment, access control, and observability for teams, because real-time capacity programs live or die by governance and operational discipline.

Why Real-Time Capacity Dashboards Matter Now

Capacity management is an operations system, not a report

Most hospital teams still rely on a stack of spreadsheets, manual huddles, and delayed reports to answer the same questions: How many staffed beds are available? Which units are full? Where are the discharge delays? What is the OR backlog? The problem is not just inefficiency; it is that decisions made against stale data compound risk across the entire patient journey. A real-time dashboard reduces the lag between physical reality and operational response, which is exactly why the market for hospital capacity management solutions is expanding rapidly. Reed Intelligence estimates the market at USD 3.8 billion in 2025 and projects growth to around USD 10.5 billion by 2034, driven by real-time visibility, cloud adoption, and predictive analytics.

Executives want throughput; clinicians want less friction

Capacity visibility is not a vanity metric. It shapes ambulance diversion decisions, discharge prioritization, elective surgery timing, staffing calls, and transfer coordination. A charge nurse needs to know whether an inpatient bed is truly ready; an OR manager needs to know whether a downstream unit can absorb post-op admissions; and a telehealth operations lead needs to know whether virtual queues can safely absorb overflow. When all of those groups see the same live operational picture, the institution can move from reactive firefighting to managed flow. For teams designing the interface layer, the lessons from developer kits and adoption apply surprisingly well: clarity, fast onboarding, and strong defaults matter as much in healthcare dashboards as they do in technical platforms.

Streaming is the right fit for this problem

Traditional ETL is excellent for nightly reporting, but hospital capacity decisions happen in seconds and minutes. Kafka-style event streaming gives you durable ingestion, fan-out to multiple consumers, replayability for debugging, and the ability to build both operational dashboards and downstream analytics from the same event log. That architecture also helps with auditability, which is essential in regulated environments. If you want a mental model for event-driven monitoring under pressure, see how sub-second automated defenses are designed around rapid detection and response loops; the pattern is similar even if the domain is healthcare instead of cybersecurity.

Reference Architecture: From ADT Events to Capacity Intelligence

Layer 1: source systems and signal types

The dashboard should not depend on a single source of truth, because no single source exists in real hospital operations. ADT events arrive from the EHR and describe admissions, discharges, transfers, cancellations, and patient movements. Bed sensors can contribute occupancy, call-light, room turnover, or environmental readiness signals. OR scheduling systems publish planned cases, delays, room assignments, and cancellations. Telehealth queues represent virtual demand, triage status, clinician availability, and expected response time. Together these signals define current capacity more accurately than any one system can on its own.

Layer 2: streaming backbone and normalization

Kafka acts as the event backbone, with topics organized by domain and lifecycle stage: raw inbound events, normalized clinical/operational events, capacity projections, and dashboard-ready aggregates. A practical design is to preserve the raw payloads in immutable topics while also transforming them into canonical FHIR-aligned events. For example, an ADT admit can become an Encounter lifecycle update, a bed assignment can map to a Location or Bed state change, and an OR case schedule can update a ServiceRequest or Encounter-adjacent planning record. This normalization layer is where integration quality is won or lost, and it is where strong observability is essential; see middleware observability for healthcare for patterns on tracing cross-system journeys.
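As a minimal sketch of that normalization step, the snippet below maps a parsed ADT admit into a canonical, FHIR-aligned encounter event. The `EncounterEvent` shape and every field name are assumptions for illustration; a production pipeline would parse HL7 with a proper library and validate against your own event contract.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class EncounterEvent:
    """Canonical, FHIR-aligned encounter lifecycle event (illustrative shape)."""
    event_id: str          # stable ID for idempotent consumers
    encounter_id: str      # maps to Encounter.id
    patient_id: str        # maps to Encounter.subject
    status: str            # maps to Encounter.status: in-progress | finished
    location_id: str       # maps to Encounter.location (unit/room/bed)
    source_system: str
    event_time: datetime

# Illustrative mapping from ADT trigger events to Encounter.status.
ADT_STATUS_MAP = {"A01": "in-progress", "A02": "in-progress", "A03": "finished"}

def normalize_adt(raw: dict) -> EncounterEvent:
    """Turn a parsed raw ADT message (dict keys are assumptions) into a canonical event."""
    return EncounterEvent(
        event_id=f"{raw['source']}:{raw['message_control_id']}",
        encounter_id=raw["visit_number"],
        patient_id=raw["patient_id"],
        status=ADT_STATUS_MAP[raw["trigger_event"]],
        location_id=raw.get("assigned_location", "unknown"),
        source_system=raw["source"],
        event_time=datetime.fromisoformat(raw["event_time"]),
    )
```

Keeping the raw message immutable on its own topic and emitting this canonical event downstream is what lets you re-run the mapping later when the contract evolves.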

Layer 3: operational data model

A robust model should include three timing concepts: current state, committed near-future state, and forecast state. Current state reflects what is happening now, such as beds occupied and ORs running. Committed near-future state reflects scheduled discharges, planned surgeries, and telehealth appointments. Forecast state uses rules or predictive models to estimate congestion, staffing gaps, and bottlenecks. The strongest systems keep these separate so users do not confuse known facts with modeled probabilities. If your platform includes predictive components, the framing in predictive analytics for future-proofing visual identity is a useful analogy: predictions are valuable, but only when clearly distinguished from live measurements.

FHIR Mapping Strategy for Hospital Capacity

Core FHIR resources to anchor the model

FHIR is not just a transport standard here; it is the interoperability contract that lets capacity data become reusable across applications. At minimum, you should map Patient, Encounter, Location, Organization, Practitioner, and where applicable Appointment, Schedule, Slot, and ServiceRequest. For bed management, Location often carries the unit/room/bed hierarchy, while Encounter links a patient to a physical space and care context. For OR scheduling, Appointment and Schedule/Slot can represent planned utilization, while ServiceRequest or Procedure helps distinguish booked work from completed work. The practical lesson from the Veeva/Epic integration guide is that interoperability works best when you preserve source semantics and map carefully instead of forcing everything into one generic schema.

How to represent bed state and turnover

Bed state is more nuanced than occupied versus empty. Operationally, you need states like available, occupied, cleaning, isolation hold, blocked, out of service, and pending admit. In FHIR terms, many teams model the physical bed as a Location and then attach state via extension fields or a linked operational resource, because the base standard does not fully capture every bed lifecycle nuance out of the box. The dashboard should compute derived metrics such as staffed availability, physical availability, and discharge-ready availability. Those distinctions matter because a bed can be physically empty but not operationally usable, which is why direct sensor input should be combined with staffing context and housekeeping status.
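The state vocabulary above can be made explicit in code. This sketch (the enum values and the three-way readiness check are illustrative, not part of any standard) shows why "physically empty" and "operationally usable" must be computed separately:

```python
from enum import Enum

class BedState(Enum):
    """Operational bed lifecycle states beyond occupied/empty (illustrative set)."""
    AVAILABLE = "available"
    OCCUPIED = "occupied"
    CLEANING = "cleaning"
    ISOLATION_HOLD = "isolation-hold"
    BLOCKED = "blocked"
    OUT_OF_SERVICE = "out-of-service"
    PENDING_ADMIT = "pending-admit"

def usable_now(physical: BedState, staffed: bool, housekeeping_clear: bool) -> bool:
    """A bed counts toward staffed availability only when every dimension agrees."""
    return physical is BedState.AVAILABLE and staffed and housekeeping_clear
```

Deriving staffed availability, physical availability, and discharge-ready availability as separate computed metrics keeps the dashboard honest about which number an operator is looking at.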

ADT to FHIR: event translation examples

An ADT A01 admit can create or update an Encounter, an A02 transfer can update the patient’s Location, and an A03 discharge can close the encounter and release downstream capacity estimates. For example:

<ADT A01> --> Encounter.status = in-progress
         --> Encounter.location = MedSurg Unit 4A Bed 12
         --> Location occupancy aggregate +1

The key is to avoid treating ADT as a flat feed. Each event should be normalized with identifiers, timestamps, source system metadata, and idempotency controls so duplicate or delayed messages do not corrupt the dashboard. If you need a broader architecture lens, our guidance on cloud-native vs hybrid can help you decide where FHIR servers, streaming brokers, and downstream analytics should live in a regulated environment.

Kafka Event Design: Topics, Keys, and Guarantees

Topic strategy and partitioning

Good topic design keeps the system understandable and performant. A practical split is raw.adt, raw.bed-sensor, raw.or-schedule, raw.telehealth, norm.encounter, norm.location-state, norm.capacity-fact, and agg.capacity-by-unit. Partition keys should be chosen to preserve ordering where it matters, typically by patient, location, or unit depending on the stream. For ADT, patient or encounter ID is often appropriate; for bed sensors, the bed ID or room ID is better; for OR schedules, the surgical suite or case ID may be more useful. Partitioning by the wrong key can cause apparent inconsistency in the dashboard even when the underlying system is healthy.
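A key-selection rule can be centralized so every producer agrees on ordering semantics. The topic names below follow the split described above; the event field names are assumptions for illustration:

```python
def partition_key(topic: str, event: dict) -> str:
    """Choose the key that preserves ordering where it matters for each stream."""
    if topic == "raw.adt":
        return event["encounter_id"]      # per-encounter ordering
    if topic == "raw.bed-sensor":
        return event["bed_id"]            # per-bed ordering
    if topic == "raw.or-schedule":
        return event["case_id"]           # per-case ordering
    if topic == "raw.telehealth":
        return event["queue_id"]          # per-queue ordering
    raise ValueError(f"no key rule for topic {topic!r}")
```

Routing this function through a single shared module (rather than letting each producer pick keys ad hoc) is what prevents the "apparent inconsistency" failure mode described above.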

Delivery semantics and idempotency

Hospitals need operational correctness more than theoretical elegance. At-least-once delivery is usually the practical default, so consumers must be idempotent. That means each event needs a stable event ID, source timestamp, and version or sequence number where possible. If a transfer message arrives twice or out of order, the consumer should be able to reconcile the latest known state without double-counting occupancy. A good pattern is to store a per-entity state snapshot in a compacted topic and let stream processors recompute aggregates from that state. For event integrity and response speed under load, the discipline described in shipping, scaling, and SaaS is relevant: resilience comes from predictable operational rules, not heroics.

Streaming computations that matter to operations

Not every calculation belongs in the dashboard UI. Compute unit occupancy, staffed bed availability, OR utilization, average turnaround time, discharge lag, and telehealth queue depth in the stream layer so the UI only renders ready-made operational facts. Windowed aggregations can calculate the last 5 minutes, 15 minutes, and 24-hour trends. Event-time processing is especially important in healthcare because source systems may be delayed. Late-arriving events should update history without rewriting the core live state in a confusing way. If you are designing live operational views at scale, the thinking in scaling live events without sacrificing quality maps closely to the capacity dashboard challenge.
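As a toy illustration of event-time windowing, the counter below keeps a sliding window over event timestamps. It assumes events arrive roughly in event-time order; genuinely late arrivals would need watermarking or a richer structure, which stream processors such as Kafka Streams or Flink provide out of the box:

```python
from collections import deque
from datetime import datetime, timedelta

class WindowedCounter:
    """Count events within a sliding event-time window, e.g. admits in the last 15 minutes."""

    def __init__(self, window: timedelta) -> None:
        self.window = window
        self._events: deque[datetime] = deque()

    def record(self, event_time: datetime) -> None:
        self._events.append(event_time)

    def count(self, now: datetime) -> int:
        # Evict events that fell out of the window, then report what remains.
        while self._events and self._events[0] < now - self.window:
            self._events.popleft()
        return len(self._events)
```

Running one counter per unit and per window size (5 min, 15 min, 24 h) in the stream layer is what lets the UI render ready-made trends instead of recomputing them per page load.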

Implementation Checklist by Domain

ADT and patient flow

Start by establishing a master event contract for admissions, transfers, discharges, and status changes. Define which system owns the canonical encounter ID, how merges and corrections are handled, and what latency you can tolerate for each event type. Then build validation rules for demographics, timestamps, and location references. The dashboard should explicitly surface data freshness so users can see whether a unit is live, delayed, or degraded. A reliable implementation also includes dead-letter handling and replay tooling so interface failures do not silently erase capacity changes.
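Dead-letter routing can start as simply as a validation gate in front of the normalized topics. The required-field contract here is illustrative, and the Python lists stand in for the output and dead-letter topics:

```python
REQUIRED_FIELDS = ("patient_id", "event_time", "location_id")  # illustrative contract

def route(event: dict, valid_out: list, dead_letter: list) -> None:
    """Send well-formed events downstream and malformed ones to a dead-letter
    queue with their validation errors attached, so nothing is silently dropped."""
    missing = [f for f in REQUIRED_FIELDS if not event.get(f)]
    if missing:
        dead_letter.append({"event": event, "errors": [f"missing {f}" for f in missing]})
    else:
        valid_out.append(event)
```

Pairing this with replay tooling means an interface outage becomes a backlog to reprocess rather than a silent hole in the capacity picture.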

Bed sensors and environmental readiness

Bed sensors are only useful if they are contextualized. Raw sensor occupancy does not tell you whether the room is clean, whether the bed is staffed, or whether an isolation protocol is active. Use sensor events as corroborating signals, not as the sole truth source. Combine sensor data with housekeeping workflows and nursing status to compute bed readiness. This is where observability principles matter: if a bed sensor says the bed is empty but the EHR still shows an active encounter, the system should flag the discrepancy and route it for review. For teams building trustworthy operator workflows, our guide on access control flags and auditability provides a useful model for hiding complexity without hiding evidence.

OR schedules and procedural flow

Operating room scheduling is a capacity domain of its own. The dashboard should show scheduled cases, actual start times, expected duration, delays, cancellations, and post-anesthesia care unit impact. Near-future bed demand often depends on procedure volume, so the OR feed should influence inpatient forecast lanes. Include a mechanism to mark a case as soft-booked versus hard-committed, because elective schedules change frequently. If your institution already uses appointment-driven workflows elsewhere, the thinking in resource allocation and scheduling optimization is a reminder that timing and priority rules drive most perceived value.

Telehealth queues and virtual overflow

Telehealth is increasingly part of the capacity conversation because it absorbs low-acuity demand, follow-up visits, and triage traffic that might otherwise crowd physical sites. The dashboard should show virtual queue depth, response SLA, clinician availability, and conversion rates to in-person care. This matters because telehealth is not just a convenience layer; it is a load-balancing mechanism for the system. Treat the queue as a first-class capacity source and you can create a more honest picture of total demand. For inspiration on designing high-conversion, low-capacity digital experiences, the principles in limited-capacity live experience design offer a useful analogue.

Dashboard UX: What Operators Need to See First

From data explosion to decision hierarchy

A hospital capacity dashboard fails when it looks like a data warehouse. Operators need an answer hierarchy: what is full now, what is about to become full, what is blocked, and what action is recommended. The top level should use simple state indicators and trend arrows, while drill-down views expose the raw event trail. This “summary first, evidence second” pattern improves trust because users can reconcile the dashboard with their own local knowledge. Similar clarity principles show up in developer tools; see pro work setup decisions for how professionals choose tools that reduce cognitive overhead.

Make exceptions visible, not hidden

Users do not need perfect consistency; they need explainable exceptions. If a bed is marked occupied but the sensor says empty, show both. If the OR schedule says a case is planned but the room is idle and the team is waiting on consent, show the blocking reason. If telehealth is overloaded, show the queue age distribution rather than a single number. The interface should be built around operational trust, not polished ambiguity. This is also why the dashboards should include freshness labels, source badges, and last-reconciled timestamps.

Design for role-based views

An executive view, a nurse manager view, an OR coordinator view, and an IT integration view should not be identical. Executives need trend summaries, throughput KPIs, and risk alerts. Frontline managers need actionable lists, per-unit drilldowns, and discrepancy resolution tools. Integration teams need topic health, event lag, dead-letter counts, and FHIR mapping errors. This separation is a practical way to avoid overwhelming users while still keeping the architecture transparent. If you want a strong example of how audience-specific framing improves adoption, developer experience design offers the same lesson in a different domain.

Security, Compliance, and Governance

Least privilege and auditability are non-negotiable

Capacity dashboards often expose PHI-adjacent context even when they do not show full clinical detail. That means access control must be deliberate. Implement role-based access, field-level masking where necessary, and comprehensive audit logs for every query and export. Production systems should also preserve event provenance: who sent the event, which system version generated it, and when it was processed. Security-sensitive engineering teams may appreciate the mindset in passkey-based access control, because it emphasizes reducing authentication friction while strengthening assurance.

HIPAA, interoperability, and data minimization

Do not send more data than the dashboard needs. The capacity use case usually does not require exhaustive clinical narratives, only the identifiers and operational status needed to determine where resources stand. Use FHIR carefully, especially when building cross-system consumers, and prefer tokenized or pseudonymized identifiers where feasible in non-clinical layers. Maintain clear retention policies for raw event streams versus aggregated operational snapshots. The broader policy lesson from healthcare interoperability is that compliance and usability are not opposites; they are design constraints that can reinforce each other when the data model is disciplined.
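Where pseudonymized identifiers are feasible, a keyed hash yields stable tokens without exposing the raw ID in non-clinical layers. This sketch uses HMAC-SHA256; key management, rotation, and whether truncating to 16 hex characters is acceptable for your collision and re-identification risk are decisions for your security review, not defaults to copy:

```python
import hashlib
import hmac

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Derive a stable, non-reversible token for non-clinical layers.
    Keyed hashing prevents simple dictionary reversal of known identifiers."""
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()[:16]
```

The same patient always maps to the same token under one key, so aggregates still join correctly, while rotating the key severs old linkages by design.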

Cloud, hybrid, and resilience choices

Many hospitals will land on hybrid architecture: streaming and dashboard layers in the cloud, with connectors at the edge or on-prem to interface with EHR, sensors, and scheduling systems. That approach balances regulatory concerns with operational scalability. The important thing is to keep the event contract stable across deployment boundaries. For regulated systems, a clear decision framework helps avoid endless platform debates, which is why cloud-native vs hybrid decision making belongs in the architecture review. If your organization is planning broader data platform modernization, you may also find value in comparing your implementation governance with lifecycle observability and access control patterns for complex systems.

Data Model Example and Comparison Table

Reference entities and fields

The table below shows a simplified operational model you can adapt. It is intentionally pragmatic rather than academically pure, because the best capacity systems are shaped by what operators can maintain under pressure. The goal is to align FHIR semantics with event streaming realities so that the dashboard remains useful during normal operations and surge conditions alike.

| Domain | Primary Source | FHIR Anchor | Key Fields | Operational Use |
| --- | --- | --- | --- | --- |
| Admissions/Transfers/Discharges | ADT interface | Encounter | status, subject, location, period | Live patient flow and occupancy |
| Bed availability | Bed sensor + housekeeping | Location | physical state, staffed state, readiness | Available vs blocked bed counts |
| OR schedule | Surgical scheduling system | Appointment / Schedule / Slot | planned start, delay, suite, status | Elective and procedural capacity planning |
| Telehealth queue | Virtual care platform | Appointment / ServiceRequest | queue depth, wait time, SLA, triage | Overflow management and virtual demand |
| Aggregated capacity | Stream processor | Observation or custom summary | unit occupancy, turnover, forecast | Dashboard KPIs and alerting |

That table can be implemented as canonical event classes and materialized views. The point is not to overload FHIR with every operational nuance, but to use FHIR where it adds interoperability and supplement it where the operational domain needs extra precision. This philosophy is consistent with real-world integration work discussed in the Epic integration technical guide, where mapping decisions must preserve meaning across systems.

Implementation Checklist: What to Build in Order

Phase 1: define the canonical model

Start by inventorying all source systems and agreeing on identifiers, timestamps, ownership, and latency expectations. Define the minimal set of entities you need for live operations: patient, encounter, location, bed, room, unit, OR suite, schedule slot, and telehealth queue item. Establish source-of-truth rules for conflicts, such as when the EHR and bed sensor disagree. Then write test fixtures using realistic event samples so engineers and clinical stakeholders can validate semantics together.

Phase 2: build the streaming pipeline

Implement raw ingestion, schema validation, normalization, and output topics. Add monitoring for consumer lag, event failure rate, and source freshness. Build replay tools before the dashboard ships, because debugging capacity issues without replay is slow and expensive. Also plan for dual paths: one for real-time operator views and one for historical analytics. If you are formalizing delivery practices, the operational lessons in scaling shipping and SaaS systems are surprisingly applicable here.

Phase 3: design the dashboard and alerting workflow

Build the top-level view first, then add drill-downs and exception views. Alert only on actionable states, such as unit saturation, prolonged discharge delay, OR block overruns, or telehealth queue breaches. Avoid alert fatigue by ranking issues by impact and confidence. Finally, integrate human workflow actions directly into the dashboard so teams can annotate, acknowledge, or escalate without leaving the view. The best operational dashboards do not just describe capacity; they help teams change it.
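Ranking by impact and confidence can start as a one-line scoring rule. The 0.25 threshold and the field names below are arbitrary illustrations of the idea, not recommended values:

```python
def rank_alerts(alerts: list[dict]) -> list[dict]:
    """Order alerts by impact x confidence so operators see the few that matter first.
    Scores are assumed to be in [0, 1]; low-scoring alerts are suppressed entirely."""
    scored = [a for a in alerts if a["impact"] * a["confidence"] >= 0.25]
    return sorted(scored, key=lambda a: a["impact"] * a["confidence"], reverse=True)
```

Suppressing low-score alerts outright, rather than showing them dimmed, is the simplest defense against alert fatigue; tune the threshold against real incident data.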

Common Failure Modes and How to Avoid Them

Stale data disguised as real time

The biggest mistake is showing a polished dashboard that is quietly several minutes behind reality. If freshness is unclear, users will stop trusting the system and return to hallway conversations. Show per-source lag, last event timestamp, and last successful reconciliation time. Where possible, create “degraded mode” indicators rather than silently continuing with partial data. This is one of the clearest lessons from cross-system observability: visibility is a product feature, not just an engineering metric.

Over-modeling before proving value

Another trap is building a highly sophisticated predictive engine before the core live view is stable. The first value comes from accuracy, timeliness, and usability, not from complex machine learning. Start with transparent rules and only add forecasting where it clearly improves decisions. If a rule-based occupancy projection can already prevent bottlenecks, that is often better than a black-box model that no one trusts. Predictive features can come later, once the operational baseline is strong.

Ignoring local workflow variation

Hospitals are not identical, and neither are units. One facility’s “available bed” may differ from another’s because of staffing, isolation requirements, or discharge workflow. Build the architecture to support local configuration without breaking the canonical model. Also account for time-zone, shift-change, and weekend effects. A dashboard that works beautifully in one service line but fails in another is not a platform; it is a pilot.

FAQ

How is a real-time hospital capacity dashboard different from a BI report?

A BI report summarizes historical data after the fact, while a real-time capacity dashboard is operational infrastructure. It must ingest live events, reconcile discrepancies, and support immediate decisions about beds, ORs, and queues. The dashboard should also expose freshness, source confidence, and exception states so teams can trust what they see.

Why use Kafka instead of direct API polling?

Kafka is better suited to high-frequency, multi-consumer operational data because it provides durable event storage, replay, fan-out, and decoupling between producers and consumers. Polling can work for smaller integrations, but it becomes fragile when multiple workflows depend on the same live event stream. For capacity management, the ability to replay and reconcile events is a major advantage.

What FHIR resources are most important for capacity?

Encounter, Location, Patient, Organization, Practitioner, Appointment, Schedule, Slot, ServiceRequest, and sometimes Procedure are the most useful anchors. Encounter and Location are especially important for admission flow and bed assignment. Appointment and Schedule become critical for OR and telehealth capacity forecasting.

How do you handle bad or duplicate ADT messages?

Use idempotent event processing, stable event identifiers, and source timestamps. Keep raw events immutable, then derive normalized states in downstream processors that can resolve duplicates and out-of-order arrivals. When reconciliation is uncertain, the dashboard should flag the discrepancy instead of guessing silently.

Can this architecture support predictive capacity alerts?

Yes, but prediction should sit on top of a reliable live data foundation. Start with deterministic calculations for occupancy, backlog, and utilization, then add forecast models for discharge probability, surge risk, or OR slippage. Predictions are most useful when the underlying live state is already accurate and transparent.

Should bed sensors be treated as the source of truth?

No. Bed sensors are valuable corroborating signals, but they should not replace clinical and operational systems of record. Combine sensor data with housekeeping status, staffing, and EHR encounter state so the dashboard can distinguish between physically empty and operationally available.

Conclusion: Build for trust, speed, and interoperability

A real-time hospital capacity dashboard is only valuable if it turns fragmented operational signals into shared, timely truth. The winning architecture pairs Kafka-style streaming with FHIR-based interoperability, then adds observability, governance, and role-aware design so the system can survive real-world ambiguity. In practice, that means treating ADT, bed sensors, OR schedules, and telehealth queues as complementary inputs to one capacity model, not as isolated integrations. If your organization is ready to evaluate platform options, the market momentum toward real-time, cloud-based tools is clear, but the technical moat comes from how well you normalize events, preserve provenance, and make the data operationally usable.

For a broader planning lens, revisit our guides on cloud-native vs hybrid regulated workloads, middleware observability in healthcare, and access control and auditability. Those architectural choices will shape whether your hospital capacity dashboard becomes a trusted command center or just another screen people stop using.

Jordan Hale

Senior Healthcare Integration Strategist
