The AI Infrastructure Stack
Overview  /  Tier IV Compute Hardware
Layer 10

Server & Rack Integration

The companies that physically build the boxes hyperscalers buy.

What this layer does

Nvidia ships GPUs; someone has to build them into a server, plumb in liquid cooling, certify the rack, and roll it onto a truck. This is what server OEMs (original equipment manufacturers) and Taiwanese ODMs (original design manufacturers) do — and at GB200 NVL72 scale, “the rack” is now the actual unit of sale, weighing 1.5 tons and costing ~$3M each.

The economics here are thin (single-digit operating margins) but the volume is huge, and integration complexity is rising fast. The structural debate: do ODMs (Foxconn, Quanta, Wiwynn) keep taking share from US OEMs (Dell, HPE) as hyperscalers move toward whitebox designs? Super Micro is the chaos variable.
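To make the "thin margins, huge volume" point concrete, here is a back-of-envelope sketch of per-GPU economics using the figures above. The ~$3M rack price and 72-GPU count come from the text; the 5% operating margin is a hypothetical mid-single-digit value, and note the rack price includes Nvidia's silicon, so the integrator's actual value-add per GPU is smaller than the revenue figure suggests.

```python
# Back-of-envelope sketch of rack-scale integration economics.
# Rack price (~$3M) and GPU count (72, from "NVL72") are from the text;
# the 5% operating margin is a hypothetical mid-single-digit assumption.

RACK_PRICE_USD = 3_000_000   # ~$3M per GB200 NVL72 rack (from the text)
GPUS_PER_RACK = 72           # NVL72 = 72 GPUs per rack
OPERATING_MARGIN = 0.05      # hypothetical, per "single-digit operating margins"

revenue_per_gpu = RACK_PRICE_USD / GPUS_PER_RACK
profit_per_gpu = revenue_per_gpu * OPERATING_MARGIN

print(f"Rack revenue per GPU slot: ${revenue_per_gpu:,.0f}")   # ~$41,667
print(f"Operating profit per GPU at 5% margin: ${profit_per_gpu:,.0f}")  # ~$2,083
```

At these numbers, an integrator keeps on the order of $2K of operating profit per GPU shipped, which is why volume and share shifts between ODMs and OEMs matter so much.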

Sub-categories

Analysis coming soon — will cover: ODM vs. OEM share trajectory, SMCI accounting and customer-concentration risk, AMD’s ZT Systems acquisition as a rack-scale play, $/GPU integration value capture.