
AI Data Center Site Selection: What Actually Matters

The criteria that drive site decisions for AI training and inference — ranked by actual impact on outcomes.

Power Capacity: The Binding Constraint

For AI training clusters, power is the primary site selection filter. A 10,000-GPU H100 cluster requires approximately 40–50 MW of IT load; a GB200 NVL72 cluster at equivalent scale requires 80–120 MW. Sites that cannot show a credible path to a 100+ MW consumer ATR (Aviz Tehnic de Racordare, Romania's grid connection approval) do not qualify for hyperscale AI training consideration — regardless of other attributes.
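The screening arithmetic can be sketched directly from the figures above. The per-GPU all-in budget below is simply what the article's own numbers imply (40–50 MW per 10,000 H100s ≈ 4–5 kW per GPU, including cooling and ancillary overhead); it is an illustrative derived figure, not a vendor specification.

```python
def max_gpus(site_mw: float, kw_per_gpu_all_in: float) -> int:
    """How many GPUs a site's deliverable power supports, given an
    assumed all-in per-GPU budget (compute + cooling + overhead)."""
    return int(site_mw * 1000.0 / kw_per_gpu_all_in)

# 40 MW per 10,000 H100s implies a 4.0 kW/GPU all-in budget:
budget_kw = 40_000 / 10_000          # 4.0 kW per GPU

# At that budget, a 100 MW site supports roughly 25,000 H100-class GPUs:
print(max_gpus(100, budget_kw))      # 25000
```

This is why the 100+ MW threshold acts as a hard filter: below it, a site cannot host even one hyperscale training cluster with room to grow.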

This is why the 650 MVA substation adjacent to the Resita site is its headline qualifier: it demonstrates that the physical infrastructure to support a large consumer ATR application already exists. The ATR process itself (typically 6–12 months) then determines the exact allocation.

Cooling: The Hidden Differentiator

GPU rack power density has increased more than tenfold in five years. A single NVIDIA DGX H100 system draws ~10 kW; a GB200 NVL72 rack exceeds 120 kW. Air cooling is physically inadequate at these densities. Sites without liquid cooling infrastructure — or access to the water volumes required for evaporative/adiabatic systems — cannot support next-generation AI hardware.
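"Physically inadequate" is not hyperbole — it falls out of the heat-transfer equation for air. A minimal sketch, assuming standard air properties and a typical 15 K inlet-to-outlet temperature rise (both assumptions, not figures from the article):

```python
RHO_AIR = 1.2    # kg/m^3, air density at data-hall conditions (assumed)
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def airflow_m3s(rack_kw: float, delta_t_k: float = 15.0) -> float:
    """Volumetric airflow required to carry rack_kw of heat away
    at a given inlet-to-outlet temperature rise."""
    return rack_kw * 1000.0 / (RHO_AIR * CP_AIR * delta_t_k)

print(round(airflow_m3s(10.0), 2))    # ~0.55 m^3/s -- a routine air-cooled rack
print(round(airflow_m3s(120.0), 2))   # ~6.63 m^3/s (~14,000 CFM) through ONE rack
```

Moving ~14,000 CFM through a single rack footprint is beyond practical fan and plenum design, which is why 100+ kW racks are liquid-cooled by necessity rather than preference.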

The Bârzava river adjacent to the Resita site provides 3.63 m³/s (13,068 m³/h) mean annual flow. This gives a developer the raw resource to build liquid-cooled infrastructure at scale, with a meaningful safety margin above ASHRAE TC 9.9 guidance for makeup water requirements.
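The margin claim can be checked with a back-of-envelope makeup-water estimate. The latent-heat figure and blowdown factor below are standard engineering assumptions, and the 100 MW IT load is an illustrative campus size — none of these are the article's numbers:

```python
LATENT_HEAT_KJ_KG = 2260.0  # latent heat of vaporization of water (assumed)
RIVER_M3H = 13_068.0        # Barzava mean annual flow, from the article

def makeup_m3h(it_mw: float, blowdown_factor: float = 1.3) -> float:
    """Evaporative-cooling makeup water (m^3/h) to reject it_mw of heat,
    grossed up by a blowdown/drift factor (assumed 1.3)."""
    evap_kg_s = it_mw * 1e6 / (LATENT_HEAT_KJ_KG * 1e3)
    return evap_kg_s * 3600.0 / 1000.0 * blowdown_factor

demand = makeup_m3h(100)             # ~207 m^3/h for a 100 MW IT load
print(round(RIVER_M3H / demand))     # ~63x margin over river flow
```

Even under these rough assumptions the margin lands comfortably above the ~50× the article cites, though the exact figure depends on load, cycles of concentration, and seasonal low-flow conditions.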

Latency Tolerance: AI Training vs Inference

Network latency matters — but differently for different workloads:

  • AI training is latency-tolerant. Foundation model training runs for weeks or months on isolated GPU clusters. User latency to the training cluster is irrelevant. What matters: power, cooling, cost, jurisdiction.
  • AI inference is latency-sensitive. Real-time applications (chatbots, recommendation systems, fraud detection) need <100 ms response times from the user's location. This pushes inference toward major population centres.

For AI training — the dominant use case for hyperscale GPU clusters currently being built — Romania's relative distance from Amsterdam or Frankfurt is irrelevant. Resita is well-suited for training; less suited for latency-critical inference serving Western European users directly.
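The physics backs this up. A propagation-only sketch, where the Resita–Frankfurt distance and the fiber route factor are rough assumptions (not surveyed routes):

```python
C_VACUUM_KM_S = 299_792.458
FIBER_INDEX = 1.468  # refractive index of silica fiber
KM_PER_MS = C_VACUUM_KM_S / FIBER_INDEX / 1000.0  # ~204 km per millisecond

def fiber_rtt_ms(route_km: float) -> float:
    """Idealised round-trip time over a fiber route
    (propagation only; no switching or queueing delay)."""
    return 2.0 * route_km / KM_PER_MS

# Resita-Frankfurt is roughly 1,100 km great-circle; assume a ~1,600 km
# real-world fiber route:
print(round(fiber_rtt_ms(1_600), 1))  # ~15.7 ms round trip
```

~16 ms is irrelevant to a training job that runs for weeks, and still leaves most of a 100 ms interactive budget — the inference constraint comes from processing time and routing overhead stacking on top, not raw distance alone.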

Jurisdiction: EU, EU-Adjacent, or Other

Sovereign AI projects, regulated financial institutions, and enterprise cloud contracts increasingly mandate EU data residency. Romania satisfies this with no ambiguity — unlike the UK, Switzerland, and other EU-adjacent jurisdictions. For operators targeting European enterprise and government customers, Romania's EU membership is a hard requirement, not a nice-to-have.

How Resita Scores

Criterion           Score   Evidence
Power capacity      ✓✓      650 MVA adjacent · ATR initiated
Cooling             ✓✓      Bârzava: 3.63 m³/s · 50× margin
EU jurisdiction     ✓✓      Full EU member · GDPR
Electricity OPEX    ✓✓      ~€0.14/kWh · 26% below EU avg
Land availability   ✓✓      Industrial zone · no rezoning
Network density     ~       Regional · adequate for training