Version 1.1

Effective date: Feb 18, 2026

Hardware and software

Gateway server specifications


Supported Operating Systems
  • Windows Server 2019 

  • Windows Server 2022 

  • Windows Server 2025 


Deployment Architecture

The Gateway supports two deployment models: a single standalone gateway and a High Availability (HA) cluster. The deployment model determines network architecture, hardware redundancy, and failover behavior. All deployments use a centralized gateway model where remote imaging sites route DICOM traffic to the central gateway over site-to-site VPN tunnels. 

Single Gateway

A single gateway deployment consists of one server running all gateway components: the gateway DICOM router, PostgreSQL database, Mirth Connect HL7 engine, and cloud connectivity services. This model is suitable for lower-volume facilities or non-critical workflows where downtime during maintenance or hardware failure is acceptable. 


  • Single server running all services.

  • No automatic failover: downtime occurs during maintenance, updates, or hardware failure.

  • Studies queue locally on the data drive and resume processing after the server returns to service. 

  • Hardware requirements in Section 3 apply directly to the single server. 

  • Network requirements are simplified: VLAN A (Cluster Management) is not required.

High Availability (HA) Gateway Cluster

The HA deployment provides continuous service availability through automatic failover across a multi-node cluster. This is the recommended deployment model for production environments supporting clinical workflows. 

Minimum Cluster Size: 3 Nodes.  

The HA architecture requires a minimum of three (3) physical or virtual server nodes. This is a hard requirement driven by the cluster consensus protocol (etcd/Raft), which requires a majority quorum to elect a leader and commit transactions. With 3 nodes, the cluster tolerates the loss of 1 node while maintaining quorum. A 2-node cluster cannot maintain quorum after a single node failure and risks a split-brain condition.
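For reference, the quorum arithmetic behind this requirement can be expressed in a few lines of Python (a minimal sketch for illustration only; the quorum() helper is not part of the product):

    # Raft-style consensus: a majority of nodes must be reachable to elect a
    # leader and commit writes.
    def quorum(nodes: int) -> int:
        return nodes // 2 + 1

    for n in (2, 3, 5):
        print(f"{n} nodes: quorum = {quorum(n)}, tolerated failures = {n - quorum(n)}")

    # 2 nodes: quorum = 2, tolerated failures = 0  (any single failure stops the cluster)
    # 3 nodes: quorum = 2, tolerated failures = 1
    # 5 nodes: quorum = 3, tolerated failures = 2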


Cluster Components (per node) 

  • DICOM Router: DICOM ingest, processing, and cloud transmission

  • PostgreSQL with Patroni: streaming replication with automatic leader election

  • etcd: distributed key-value store for cluster consensus

  • Load balancer: Virtual IP (VIP) management and connection routing

  • Mirth Connect: HL7v2 message processing


Failover Behavior 

When a node fails or is taken offline for maintenance, the load balancer redirects traffic to the remaining healthy nodes via the floating Virtual IP (VIP). Patroni automatically promotes a replica to primary for the PostgreSQL database. Clinical endpoints (modalities, RIS/EHR systems) connect to the VIP and require no reconfiguration during failover. The cluster continues to process studies at reduced capacity until the failed node is restored.


Important: Hardware specifications in Section 3 are per-node requirements sized for normal distributed operation (total workload divided across 3 nodes). During N - 1 failover, the two remaining nodes absorb the full cluster workload at elevated utilization. This degraded capacity is expected and accepted; see Section 3.6 for sizing methodology and failover impact. 

Deployment Comparison

Criterion | Single Gateway | HA Cluster
Nodes | 1 | 3 (minimum)
Automatic Failover | No | Yes
Maintenance Downtime | Required | Rolling (zero downtime)
Database Redundancy | None (backup/restore) | Streaming replication
VIP / Floating IP | Not applicable | 2 VIPs (VLAN A + VLAN B)
Network Segments | VLAN B + VLAN C | VLAN A + VLAN B + VLAN C
Recommended For | Non-critical, low-volume | Production clinical workflows

Remote Site Connectivity 

All deployments use a centralized gateway architecture. Remote imaging centers send DICOM studies to the central gateway over site-to-site VPN tunnels terminating on VLAN B (Modality Ingest). The gateway provides a single stable endpoint (VIP in HA deployments, static IP in single gateway deployments) for all remote sources. 

 

VPN tunnel configuration directly impacts DICOM transfer performance from remote sites. Key considerations include MTU sizing (1360-1400 bytes to avoid fragmentation), QoS marking preservation (DSCP EF) across the tunnel, and WAN circuit sizing to account for 10-15% VPN encryption overhead. Latency above 15 ms on the VPN link will reduce single-stream DICOM throughput due to TCP windowing effects.
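As a rough illustration of the TCP windowing effect, single-stream throughput is bounded by window size divided by round-trip time. The sketch below assumes a 64 KB effective window, an illustrative value; actual window scaling depends on OS tuning:

    # Single-stream TCP throughput ceiling: window_size / round_trip_time.
    def single_stream_mbps(window_bytes: int, rtt_ms: float) -> float:
        return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

    for rtt in (5, 15, 40):
        print(f"RTT {rtt:>2} ms: ~{single_stream_mbps(64 * 1024, rtt):.0f} Mbps per DICOM association")

    # RTT  5 ms: ~105 Mbps per DICOM association
    # RTT 15 ms: ~35 Mbps per DICOM association
    # RTT 40 ms: ~13 Mbps per DICOM association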


For complete VPN requirements including MTU validation procedures, bandwidth sizing, and QoS preservation, refer to Section 3.2.1 of the Gateway Network Requirements specification. 


Hardware Requirements 

Hardware requirements are determined by the number of studies the gateway must process concurrently during peak periods. This section provides a quick-reference table by annual volume, followed by detailed per-metric specifications sized by concurrent studies.


For HA clusters: All specifications in this section are per-node requirements. Each of the three cluster nodes must independently meet or exceed these values. 

Quick Reference by Annual Volume

Simplified sizing based on estimated annual study volume, assuming each study has 5 relevant prior studies transferred. For precise sizing, use the concurrent-study tables in Sections 3.2 to 3.5.


Flash storage (SSD or NVMe) is required for all production deployments. HDD does not provide adequate IOPS for clinical workloads. 

Annual Study Volume | CPU Cores | RAM | Storage | Comments
< 10,000 | 8 | 16 GB | 250 GB (SSD) | Not for critical workflows (ER, stroke)
10,000 - 50,000 | 8 - 12 | 16 - 24 GB | 500 GB (SSD) | ~1-2 concurrent studies
50,000 - 100,000 | 14 - 18 | 24 - 32 GB | 500 GB - 1 TB (SSD) | ~3-4 concurrent studies
100,000 - 200,000 | 18 - 26 | 48 - 64 GB | 2 - 4 TB (SSD) | ~5-7 concurrent studies
> 200,000 | 28 - 36 | 48 - 64 GB | 2 - 4 TB (SSD) | ~8-10 concurrent studies
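For scripted capacity planning, the quick-reference mapping above can be encoded as a simple lookup. The sketch below mirrors the table rows and is illustrative only, not an official sizing tool:

    # Quick-reference sizing by annual study volume (mirrors the table above).
    def quick_reference(annual_studies: int) -> tuple[str, str, str, str]:
        if annual_studies < 10_000:
            return "8 cores", "16 GB", "250 GB SSD", "not for critical workflows"
        if annual_studies < 50_000:
            return "8-12 cores", "16-24 GB", "500 GB SSD", "~1-2 concurrent studies"
        if annual_studies < 100_000:
            return "14-18 cores", "24-32 GB", "500 GB-1 TB SSD", "~3-4 concurrent studies"
        if annual_studies < 200_000:
            return "18-26 cores", "48-64 GB", "2-4 TB SSD", "~5-7 concurrent studies"
        return "28-36 cores", "48-64 GB", "2-4 TB SSD", "~8-10 concurrent studies"

    print(quick_reference(75_000))   # ('14-18 cores', '24-32 GB', '500 GB-1 TB SSD', '~3-4 concurrent studies')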

CPU Core Requirements

CPU capacity directly determines the gateway’s ability to process DICOM studies concurrently. Image compression, decompression, pixel manipulation, metadata parsing, and data transformation are CPU-intensive operations that scale linearly with study volume. 


Performance Baseline: Each concurrent study requires approximately 2.5 CPU cores (1.7 cores for DICOM file processing at 75 MB/s, plus 0.8 cores for storage I/O). An additional 4 - 6 cores are required for system services (OS, PostgreSQL, Mirth Connect, load balancer, Patroni, etcd). All core counts assume a minimum base clock speed of 2.1 GHz. Hyperthreading/SMT is beneficial but should not be counted as equivalent to physical cores for sustained workloads. 


For workloads exceeding 10 concurrent studies: (concurrent studies × 2.5) + 5 cores. 
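The core-count formula can be applied directly, as in this short sketch (the minimum_cores() helper is illustrative, not a product tool):

    import math

    # Baseline above: ~2.5 cores per concurrent study plus ~5 cores of system
    # overhead (midpoint of the stated 4-6 core range).
    def minimum_cores(concurrent_studies: int, system_cores: int = 5) -> int:
        return math.ceil(concurrent_studies * 2.5 + system_cores)

    print(minimum_cores(12))   # 35 -> provision at least 35 physical cores for 12 concurrent studies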

Concurrent Studies | Minimum Cores | Recommended Cores | Status
< 1 | < 8 | < 8 | Not recommended
1 - 2 | 8 - 10 | 10 - 12 | Minimum
3 - 4 | 14 - 16 | 16 - 18 | Acceptable
5 - 7 | 18 - 22 | 22 - 26 | Recommended
8 - 10 | 28 - 32 | 32 - 36 | Excellent

RAM Requirements 

RAM is consumed by two categories: a fixed ~8 GB allocation for system services, plus ~4 GB per concurrent study for image buffering, decompression, and queue management. The table below shows the total physical RAM required per node. 


For workloads exceeding 10 concurrent studies: 8 + (concurrent studies × 4) GB minimum. Recommended adds ~25% headroom. 
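Applied as code (an illustrative sketch; the ram_gb() helper is hypothetical):

    import math

    # Baseline above: fixed ~8 GB for system services plus ~4 GB per concurrent
    # study; the recommended figure adds ~25% headroom.
    def ram_gb(concurrent_studies: int) -> tuple[int, int]:
        minimum = 8 + concurrent_studies * 4
        return minimum, math.ceil(minimum * 1.25)

    print(ram_gb(12))   # (56, 70) -> provision roughly 64-72 GB per node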

Concurrent Studies | Minimum RAM | Recommended RAM | Status
< 1 | < 12 GB | < 16 GB | Not recommended
1 - 2 | 12 - 16 GB | 16 - 24 GB | Minimum
3 - 4 | 20 - 24 GB | 28 - 32 GB | Acceptable
5 - 7 | 28 - 36 GB | 40 - 48 GB | Recommended
8 - 10 | 40 - 48 GB | 56 - 64 GB | Excellent

Fixed System Allocation Breakdown 

The ~8 GB fixed allocation is consumed by the following components. This is included in the totals above.


These are physical RAM requirements. Virtual memory and swap should not be relied upon for production workloads. Larger studies (CT, Mammography with Tomosynthesis) may temporarily exceed the 4 GB per-study estimate during processing. 

Component | Memory | Notes
Windows Server OS | 2 - 3 GB | Base OS with services
PostgreSQL (shared_buffers) | 1 - 2 GB | Database caching
Mirth Connect (Java heap) | 1 - 2 GB | HL7 message processing
RadFlow DICOM Router | 1 GB | DICOM routing engine
Load Balancer | 256 MB | Load balancer / VIP
Patroni + etcd | 512 MB | Cluster orchestration
System Buffer | ~1 GB | OS file cache, misc


Storage Capacity

The gateway uses the file system as a transmission queue: studies are stored temporarily during processing and removed after successful cloud delivery. Storage is split across two drives.


System Drive (C:\): Hosts the OS, applications, database, and logs. Minimum 200 GB, recommended 250 GB. 


Data Drive (D:\): Stores DICOM studies in the transmission queue. Sized by queue depth—how many studies must be held during peak ingest or network outages. 

Reference Study Sizes

Modality | Typical Image Count | Average Study Size
CT¹ | 600 images | ~180 MB
MR² | 380 images | ~76 MB
MG with Tomosynthesis³ | Varies | ~1,286 MB

Queue Capacity by Modality Mix 

Queue Depth | MR-Heavy Mix | CT-Heavy Mix | Mammo/Tomo
100 studies | ~8 GB | ~18 GB | ~129 GB
250 studies | ~19 GB | ~45 GB | ~322 GB
500 studies | ~38 GB | ~90 GB | ~643 GB
1,000 studies | ~76 GB | ~180 GB | ~1.3 TB
2,500 studies | ~190 GB | ~450 GB | ~3.2 TB

Recommended Data Drive Sizing 

Recommended sizes provide 24–48 hours of queue accumulation during network outages. Facilities with Mammography/Tomosynthesis should size toward the higher end. 

Minimum | Recommended | Queue Capacity
250 GB | 500 GB | ~500 - 2,500 studies
500 GB | 1 TB | ~1,000 - 5,000 studies
1 TB | 2 TB | ~2,500 - 10,000 studies
2 TB | 4 TB | ~5,000 - 20,000 studies
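To translate an expected outage window into a data drive size, multiply the ingest rate by the outage duration and the average study size for the local modality mix. The sketch below uses the reference study sizes from the table above; the ingest rate and modality mix are illustrative assumptions, not measured values:

    # Average study sizes (MB) from the Reference Study Sizes table above.
    STUDY_MB = {"MR": 76, "CT": 180, "MG_TOMO": 1286}

    def queue_gb(studies_per_hour: float, outage_hours: float, mix: dict[str, float]) -> float:
        """Estimated data-drive consumption for a given outage window and modality mix."""
        avg_mb = sum(STUDY_MB[m] * share for m, share in mix.items())
        return studies_per_hour * outage_hours * avg_mb / 1024

    # Example: 25 studies/hour, 36-hour outage, 60% MR / 30% CT / 10% Tomosynthesis.
    print(round(queue_gb(25, 36, {"MR": 0.6, "CT": 0.3, "MG_TOMO": 0.1})))   # ~201 GB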


Storage Performance

Storage performance is critical during high-volume periods when multiple DICOM studies are being queued, processed, and transmitted simultaneously. Flash storage (SSD or NVMe) is required; traditional HDD does not provide adequate IOPS or latency characteristics for production workloads. 


Baseline: Each concurrent study requires 75 MB/s sequential throughput and 600 random IOPS. Both metrics must be satisfied for optimal performance. 

Concurrent Studies | Sequential Throughput | Random IOPS | Status
< 1 | < 75 MB/s | < 600 | Not recommended
1 - 2 | 75 - 150 MB/s | 600 - 1,200 | Minimum
3 - 4 | 225 - 300 MB/s | 1,800 - 2,400 | Acceptable
5 - 7 | 375 - 525 MB/s | 3,000 - 4,200 | Recommended
8 - 10 | 600 - 750 MB/s | 4,800 - 6,000 | Excellent

For workloads exceeding 10 concurrent studies, multiply the per-study baselines (75 MB/s, 600 IOPS) by the number of concurrent studies. 
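For example, a 12-study peak works out as follows (a trivial sketch of the multiplication above; storage_targets() is illustrative only):

    # Baselines above: 75 MB/s sequential and 600 random IOPS per concurrent study.
    def storage_targets(concurrent_studies: int) -> tuple[int, int]:
        return concurrent_studies * 75, concurrent_studies * 600

    print(storage_targets(12))   # (900, 7200) -> 900 MB/s sequential, 7,200 random IOPS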

Disk Latency: Low and consistent read/write latency is critical for database operations, metadata access, and transactional workloads. Elevated or highly variable latency will cause queue buildup, increased processing times, and delayed DICOM ingestion even when throughput and IOPS appear sufficient. Monitor disk latency during peak periods and investigate any sustained spikes. 


Testing: Verify storage performance using CrystalDiskMark (Windows) or FIO (Linux) before deploying the gateway. 


Example: Sizing an HA Cluster

Consider a healthcare system with three imaging facilities routing DICOM studies to a centralized HA Gateway cluster over site-to-site VPN tunnels: 


  • Facility A: 120,000 exams/year 

  • Facility B: 75,000 exams/year 

  • Facility C: 25,000 exams/year 


The combined annual volume is 220,000 exams. Using the quick reference table (Section 3.1), this corresponds to approximately 5-7 concurrent studies at peak. 

Step 1: Determine Cluster-Wide Requirements 


Start by sizing for the total peak concurrent workload as if it were a single system. Using the tables in Sections 3.2-3.5 for 5-7 concurrent studies: 

Resource | Minimum (total) | Recommended (total)
CPU Cores | 18 - 22 | 22 - 26
RAM | 28 - 36 GB | 40 - 48 GB
System Drive (C:\) | 250 GB (SSD) | 250 GB (SSD)
Data Drive (D:\) | 1 TB (SSD) | 2 TB (SSD)
Sequential Throughput | 375 - 525 MB/s | 375 - 525 MB/s
Random IOPS | 3,000 - 4,200 | 3,000 - 4,200

Step 2: Divide Processing Load Across 3 Nodes

Gateway processing scales linearly across cluster nodes. Under normal operation, the workload is distributed evenly across all three nodes. Each node handles approximately one-third of the total concurrent studies. 

Per-node concurrent studies (normal): 5-7 total ÷ 3 nodes ≈ 2 concurrent studies per node 

Using the per-metric tables from Sections 3.2-3.5, size each node for ~2 concurrent studies. CPU and RAM calculations include the fixed system overhead on every node (this overhead is not divided; each node runs its own OS, database, and services).

Resource | Minimum (per node) | Recommended (per node) | Notes
CPU Cores | 8 - 10 | 10 - 12 | Incl. 5 cores system overhead
RAM | 12 - 16 GB | 16 - 24 GB | Incl. ~8 GB system/services
System Drive (C:\) | 200 GB | 250 GB | Fixed per node
Data Drive (D:\) | 350 GB | 700 GB | Queue split across nodes
Throughput | 125 - 175 MB/s | 125 - 175 MB/s | 75 MB/s × ~2 studies
IOPS | 1,000 - 1,400 | 1,000 - 1,400 | 600 × ~2 studies

Step 3: Understand Failover Impact (N-1)

When one node is lost (hardware failure or maintenance), the two remaining nodes absorb the full cluster workload. Each surviving node handles approximately half of the total load instead of one-third. 

Per-node concurrent studies (N - 1): 5 - 7 total ÷ 2 remaining nodes ≈ 3 - 4 concurrent studies per node 

Resource | Normal (3 nodes) | Failover (2 nodes) | Status
CPU per node | ~2 concurrent | ~3-4 concurrent | Elevated utilization
RAM per node | ~16 GB used | ~20-24 GB used | Within recommended
Throughput per node | ~150 MB/s | ~225-300 MB/s | Elevated
IOPS per node | ~1,200 | ~1,800-2,400 | Elevated

Degraded capacity is expected and accepted during N-1 operation.  


The cluster remains fully functional, but each node operates at higher utilization. Study processing times may increase, and queue depth will grow faster than normal. If the per-node recommended specifications from Step 2 are met, the nodes have sufficient headroom to absorb the N-1 load without failure, though sustained N-1 operation at peak volume may push CPU utilization above 80%. 
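The per-node figures in Steps 2 and 3 can be reproduced by combining the per-study baselines from Sections 3.2-3.5. This is a sketch only; the per_node() helper and the choice of 6 studies as the midpoint of the 5-7 peak range are illustrative:

    import math

    def per_node(total_concurrent: float, active_nodes: int) -> dict:
        """Per-node requirements from the per-study baselines (2.5 cores, 4 GB RAM,
        75 MB/s, 600 IOPS per concurrent study, plus fixed system overhead)."""
        studies = total_concurrent / active_nodes
        return {
            "concurrent_studies": round(studies, 1),
            "cores": math.ceil(studies * 2.5 + 5),        # 2.5 cores/study + ~5 system cores
            "ram_gb": math.ceil(8 + studies * 4),         # 8 GB fixed + 4 GB/study
            "throughput_mb_s": math.ceil(studies * 75),   # 75 MB/s per study
            "iops": math.ceil(studies * 600),             # 600 IOPS per study
        }

    peak = 6   # midpoint of the 5-7 concurrent studies in this example
    print("Normal (3 nodes):", per_node(peak, 3))
    print("N-1    (2 nodes):", per_node(peak, 2))

    # Normal (3 nodes): {'concurrent_studies': 2.0, 'cores': 10, 'ram_gb': 16, 'throughput_mb_s': 150, 'iops': 1200}
    # N-1    (2 nodes): {'concurrent_studies': 3.0, 'cores': 13, 'ram_gb': 20, 'throughput_mb_s': 225, 'iops': 1800}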

Summary: Per-Node Specification for This Example 


WAN bandwidth for each remote facility’s VPN tunnel should deliver the required throughput after accounting for 10–15% VPN encryption overhead. Refer to the Gateway Network Requirements specification for VPN bandwidth sizing, MTU configuration, and QoS preservation. 

Resource | Per Node (recommended) | Cluster Total (×3) | Handles N-1?
CPU Cores | 10 - 12 | 30 - 36 | Yes (elevated util.)
RAM | 16 - 24 GB | 48 - 72 GB | Yes
System Drive | 250 GB (SSD) | 750 GB | Yes
Data Drive | 500 GB - 1 TB (SSD) | 1.5 - 3 TB | Yes
Throughput | 150 - 200 MB/s | 450 - 600 MB/s | Yes (elevated util.)
IOPS | 1,200 - 1,500 | 3,600 - 4,500 | Yes (elevated util.)


Deployment Notes

  • CPU, RAM, and storage requirements assume all subsystems are adequately provisioned. An undersized component in any area creates bottlenecks regardless of capacity elsewhere. 

  • In HA configurations, each node is sized for its share of the distributed workload (total concurrent studies ÷ 3 nodes). During N - 1 failover, the two surviving nodes each handle approximately half the total load at elevated utilization. This is expected; see Section 3.6 for the sizing methodology and failover impact analysis. 

  • For single gateway deployments, these specifications apply directly to the single server. 

  • Monitor CPU utilization, available memory, and storage capacity during peak periods. Sustained CPU above 80%, available memory below 2 GB, or data drive utilization above 80% indicate additional capacity is needed. 


Network Requirements 

Network throughput is typically the bottleneck for processing data through the gateway. This section provides transfer time estimates and a summary of network segment requirements. For complete network architecture details including VLAN design, QoS configuration, VPN considerations for remote sites, and interface requirements, refer to the Gateway Network Requirements specification. 

Estimated Transfer Times

The table below gives estimated transmission times by network speed. These represent only the time data is in transit and do not include queuing or processing. It is the responsibility of the gateway administrator to determine safe transmission windows for critical clinical workflows.


The DICOM network protocol is sensitive to latency. The table assumes a single study in transit; parallel transmissions divide bandwidth proportionally. For remote sites connected via VPN, actual throughput will be further reduced by encryption overhead and WAN latency. 

The 1 MB through 1 GB columns are fixed transfer sizes; the CT¹, MR², and MG³ columns are average study sizes.

Mbps | 1 MB | 10 MB | 100 MB | 1 GB | CT¹ | MR² | MG³
10 | < 1 sec | 8 sec | 1 min | 14 min | 2.5 min | 1 min | 17 min
20 | < 1 sec | 4 sec | 40 sec | 7 min | 1.2 min | 30 sec | 8.5 min
50 | < 1 sec | 1 sec | 16 sec | 3 min | 30 sec | 12 sec | 3.5 min
100 | < 1 sec | < 1 sec | 8 sec | 1.5 min | 15 sec | 6 sec | 1.7 min
200 | < 1 sec | < 1 sec | 4 sec | 40 sec | 7 sec | 3 sec | 50 sec
500 | < 1 sec | < 1 sec | 1 sec | 16 sec | 3 sec | 1.2 sec | 20 sec
1000 | < 1 sec | < 1 sec | < 1 sec | 7 sec | 1.5 sec | 0.6 sec | 10 sec
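The in-transit times above follow from dividing the payload by the line rate. A minimal sketch (no protocol, encryption, or VPN overhead is modeled, which is why real-world transfers run somewhat slower):

    # In-transit time for a payload (MB) over a link (Mbps), assuming the full
    # line rate is available to a single study.
    def transfer_seconds(size_mb: float, link_mbps: float) -> float:
        return size_mb * 8 / link_mbps

    # Average Tomosynthesis study (1,286 MB) over a 100 Mbps link:
    print(f"{transfer_seconds(1286, 100) / 60:.1f} min")   # ~1.7 min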

Network Segments Summary

The HA Gateway requires three isolated network segments. Each gateway node connects to all three. Single gateway deployments require only VLAN B and VLAN C. 

Segment | Function | RX (Down) | TX (Up) | Latency | Jitter | QoS
VLAN A | Cluster Mgmt | 10 Gbps | 10 Gbps | < 1 ms | < 1 ms | CS7
VLAN B | Modality Ingest | 2.5 Gbps+ | 2.5 Gbps+ | < 15 ms | < 5 ms | EF
VLAN C | Cloud Egress | 1 Gbps+ | 1 Gbps+ | < 80 ms | < 20 ms | AF41

Critical: VLAN A (Cluster Management) is the single most critical network dependency for the HA Gateway. No firewall, IDS/IPS, or packet inspection of any kind is permitted on this segment. Insufficient bandwidth or added latency on VLAN A directly prevents the system from meeting performance and availability requirements. Database replication lag, consensus timeouts, and split-brain conditions are direct consequences of an underperforming or improperly isolated VLAN A. 



For complete segment requirements, interface connectivity options, VPN considerations for remote modalities, jumbo frame configuration, and risk statements, refer to the Gateway Network Requirements specification. 


Antivirus Requirements 

The use of antivirus software on the gateway servers is encouraged. However, the gateway routinely creates, moves, and deletes large quantities of files and analyzes them with subprocesses; antivirus software that aggressively scans each file and process can interrupt routing and cause images to fail to route.


The following folders and processes must be excluded from antivirus scans for safe operation of the product. 


Folder Exclusions 

  • Installation Data Directory (default: C:\ProgramData\data) 

  • C:\ProgramData\RadFlow 

  • C:\Program Files (x86)\SynthFlow 


Process Exclusions

  • dcmcjpeg.exe 

  • dcmdjpeg.exe 

  • dcmreader.exe 

  • dcmsend.exe 

  • findscu.exe 

  • getscu.exe 

  • movescu.exe 

  • SynthFlowService.exe

  • ServiceUpdater.exe 

  • Storescp.exe 

  • Synthflowscp.exe 


SSL Decryption Bypass

  • *.googleapis.com 

  • oauth2.googleapis.com 

  • healthcare.googleapis.com 

  • storage.googleapis.com 

  • www.googleapis.com  

  • accounts.google.com 


Please reference INT_209_v3_Synthesis Outbound Firewall Requirements for an exhaustive list of outbound connectivity requirements.



¹ 600-image CT study with an average size of 180 MB 

² 380-image MR study with an average size of 76 MB 

³ Mammography with Tomosynthesis, average size of 1,286 MB