Data Center Facility Systems
Cooling, power distribution, racks, and building intelligence — everything inside the shell.
What this layer does
An AI rack today draws 100–130 kW; the next generation (Blackwell Ultra, Rubin) targets 250 kW+ per rack. That’s 10–25× the density of a traditional enterprise rack. Air cooling tops out around 30–40 kW per rack; everything above that requires liquid cooling, either direct-to-chip cold plates or full immersion. This is the single biggest mechanical change in data center design in 20 years, and it’s minting winners in cooling, busway, and switchgear.
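The air-cooling ceiling above can be sanity-checked with the standard sensible-heat equation, Q = P / (ρ · cp · ΔT). A minimal sketch follows; the air properties and the 12 K inlet-to-exhaust temperature rise are illustrative assumptions, not figures from the text.

```python
# Back-of-the-envelope: airflow needed to air-cool a rack.
# Assumptions (illustrative): air density 1.2 kg/m^3, specific heat
# 1005 J/(kg*K), and a 12 K temperature rise from inlet to exhaust.

RHO = 1.2        # kg/m^3, air density at typical data-center conditions
CP = 1005.0      # J/(kg*K), specific heat of air
DELTA_T = 12.0   # K, assumed inlet-to-exhaust temperature rise

def airflow_cfm(rack_kw: float) -> float:
    """Volumetric airflow (CFM) required to remove rack_kw of heat."""
    m3_per_s = (rack_kw * 1000) / (RHO * CP * DELTA_T)
    return m3_per_s * 2118.88  # convert m^3/s to cubic feet per minute

for kw in (10, 40, 130):
    print(f"{kw:>4} kW rack -> {airflow_cfm(kw):,.0f} CFM")
```

At the assumed 12 K rise, a 40 kW rack already needs on the order of 6,000 CFM, and a 130 kW rack roughly 19,000 CFM, which is why densities in that range push designs to liquid.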
This layer is where the “AI capex” thesis becomes most tangible: the dollar content per MW of an AI build is roughly 2–3× that of a traditional hyperscale data center, with most of the uplift in cooling, power distribution, and electrical gear.
Sub-categories
- Air cooling: Still ~70%+ of the installed base; a declining share of new AI builds, but a very large maintenance market.
- Direct-to-chip liquid cooling: The dominant approach for Blackwell-class racks, and the single highest-growth sub-category in this layer.
- Immersion cooling: Single-phase and two-phase immersion. Niche today, but plausible for the densest racks.
- Busway and rack power distribution: Bringing AC/DC power from the room into the rack. Higher-density designs are needed for 100 kW+ racks.
- UPS and batteries: Lithium-ion is replacing VRLA batteries, and static UPS is replacing rotary. A big refresh cycle is underway.
- Switchgear and transformers: Medium-voltage gear that takes utility power and steps it down for the data center. Transformers carry 2–3 year lead times.
- Backup generators: Natural-gas and diesel standby. Multi-year backlogs at Cummins and Caterpillar.
- Racks and enclosures: The mechanical chassis, increasingly liquid-ready and pre-integrated with coolant manifolds.
- DCIM software: The software brain of the data center, covering asset management, power monitoring, and thermal mapping.
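The "higher-density designs" point for rack power distribution can be made concrete with the three-phase current formula, I = P / (√3 · V · PF). In the sketch below, the 415 V line voltage and 0.95 power factor are illustrative assumptions, not values from the text.

```python
import math

def feed_current_amps(rack_kw: float, line_volts: float = 415.0,
                      power_factor: float = 0.95) -> float:
    """Line current (A) for a three-phase feed: I = P / (sqrt(3) * V * PF)."""
    return (rack_kw * 1000) / (math.sqrt(3) * line_volts * power_factor)

for kw in (10, 100, 250):
    print(f"{kw:>4} kW rack -> {feed_current_amps(kw):,.0f} A per three-phase feed")
```

Under these assumptions a 100 kW rack draws roughly ten times the current of a legacy ~10 kW rack, which is what drives heavier busway and in-rack power distribution hardware.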
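On the power-monitoring side of the "software brain," the standard facility-level metric such tools track is PUE (Power Usage Effectiveness): total facility power divided by IT load. A minimal sketch, with made-up megawatt figures purely for illustration:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.
    Lower is better; 1.0 would mean zero cooling/distribution overhead."""
    return total_facility_kw / it_load_kw

# Illustrative: a 1.3 MW facility draw supporting 1.0 MW of IT load
print(f"PUE = {pue(1300, 1000):.2f}")
```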