FAQ · Energy Efficiency
What is PUE (Power Usage Effectiveness)?
The standard metric for data center energy efficiency — and why it matters for AI workloads at scale.
Definition
PUE = Total Facility Power ÷ IT Equipment Power
A PUE of 1.0 means 100% of energy goes to IT equipment — physically unachievable. A PUE of 2.0 means for every watt powering servers, another watt is spent on cooling, lighting, and power conversion losses.
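The ratio above can be sketched as a one-line function (a minimal illustration; the function name and example loads are hypothetical):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 150 MW in total to power a 100 MW IT load:
print(pue(150_000, 100_000))  # → 1.5 (0.5 W of overhead per watt of IT)
```

A PUE of 1.5 means the facility spends half a watt on cooling, lighting, and conversion losses for every watt delivered to servers.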
For reference, the Uptime Institute's 2023 global survey puts the industry average PUE at roughly 1.58, while best-in-class hyperscale operators (Google, Microsoft) report values near 1.1 with evaporative cooling.
Why Climate Matters
Cooling accounts for 30–40% of a typical data center's energy budget. Cooler ambient air reduces the mechanical refrigeration required to keep server inlet temperatures within ASHRAE thermal class limits (allowable inlet ranges span 15–32°C for class A1 up to 5–45°C for A4, with 18–27°C recommended).
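The cooling share quoted above maps directly onto PUE. A small sketch, assuming for illustration that non-cooling overhead (power conversion, lighting) is around 10% of total facility energy:

```python
def pue_from_overheads(cooling_frac: float, other_frac: float) -> float:
    """PUE implied by cooling and other overheads expressed as fractions
    of TOTAL facility energy; IT equipment gets the remainder."""
    it_frac = 1.0 - cooling_frac - other_frac
    return 1.0 / it_frac

# Cooling at 30-40% of total, with an assumed ~10% for conversion and lighting:
print(round(pue_from_overheads(0.30, 0.10), 2))  # → 1.67
print(round(pue_from_overheads(0.40, 0.10), 2))  # → 2.0
```

Shaving the cooling fraction from 40% to 30% of total energy moves the implied PUE from 2.0 to roughly 1.67, which is why a cooler climate improves baseline efficiency before any capital is spent.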
The Resita site records a mean annual temperature approximately 2–3°C below Amsterdam, Frankfurt, and Dublin — the dominant Western European colocation hubs. This translates directly to lower baseline PUE without additional capital expenditure on cooling infrastructure.
Water Cooling Advantage
With Bârzava river water available (mean flow 3.63 m³/s), a developer can implement evaporative or adiabatic cooling systems. At 30–80 m³/h per 100 MW IT load (ASHRAE TC 9.9 2023), the available flow margin exceeds requirements by more than 50× at mean annual conditions.
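The flow-margin arithmetic checks out comfortably; converting the river's mean flow to m³/h and dividing by the cited demand range gives margins well above the 50× stated:

```python
RIVER_FLOW_M3_S = 3.63                 # Bârzava mean annual flow, m³/s
river_m3_h = RIVER_FLOW_M3_S * 3600    # convert to m³/h

# Evaporative cooling draw per 100 MW IT load (range cited above)
demand_low, demand_high = 30.0, 80.0   # m³/h

print(f"available: {river_m3_h:.0f} m³/h")        # available: 13068 m³/h
print(f"margin: {river_m3_h / demand_high:.0f}x")  # margin: 163x (high demand)
print(f"margin: {river_m3_h / demand_low:.0f}x")   # margin: 436x (low demand)
```

Even at the high end of the demand range and with no seasonal allowance, the available flow exceeds the draw by more than 160×, so the >50× claim is conservative.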
Liquid cooling (direct-to-chip or rear-door heat exchangers) combined with river-cooled chillers enables sub-1.2 PUE for AI GPU clusters — relevant for H100/H200/B200 density configurations, where per-GPU power runs from roughly 700 W to 1,000 W.
Relevance to AI Data Centers
Modern AI training clusters (NVIDIA H100 DGX, GB200 NVL) can draw on the order of 40–120 kW per rack — ten times or more the heat of typical enterprise racks. Every 0.1-point reduction in PUE at 100 MW IT load saves approximately 87,600 MWh/year — roughly €12M annually at Romanian industrial electricity prices (~€0.14/kWh).
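The savings figure follows from the PUE definition: a 0.1-point reduction removes 0.1 W of overhead per watt of IT load. A short worked check (function name is illustrative):

```python
HOURS_PER_YEAR = 8760

def pue_savings(it_load_mw: float, pue_delta: float, price_eur_per_kwh: float):
    """Annual energy (MWh) and cost (EUR) saved by cutting PUE by pue_delta
    at a constant IT load: overhead falls by it_load * pue_delta megawatts."""
    saved_mwh = it_load_mw * pue_delta * HOURS_PER_YEAR
    saved_eur = saved_mwh * 1000 * price_eur_per_kwh  # MWh → kWh
    return saved_mwh, saved_eur

mwh, eur = pue_savings(100, 0.1, 0.14)
print(f"{mwh:,.0f} MWh/yr")  # 87,600 MWh/yr
print(f"€{eur:,.0f}/yr")     # €12,264,000/yr
```

At 100 MW of IT load, a 0.1 PUE improvement is a steady 10 MW of avoided overhead, which compounds to tens of millions of euros over a facility's lifetime.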