The AI Infrastructure Stack
Layer 09

CPUs, Memory & Storage

Everything in an AI server that isn’t the accelerator itself.

What this layer does

A GPU is useless without memory next to it, a host CPU to feed it, fast storage to stream training data, and on-board voltage regulation to convert 48V down to the sub-1V the silicon actually wants. The sub-categories here are all individually large markets (HBM alone is a ~$40–60B run-rate business in structural shortage), and they ride the same AI capex cycle as the GPU itself, but each with its own cyclicality and competitive structure.
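To see why that last voltage step is its own engineering problem, a back-of-envelope sketch helps (illustrative numbers, not any specific part): at fixed power, current scales inversely with voltage, so the same board power that is a modest current at 48V becomes an enormous one at a sub-1V core rail.

```python
# Back-of-envelope power-delivery math for an AI accelerator board.
# All numbers are illustrative assumptions: board power and core
# voltage vary by part and generation.

BOARD_POWER_W = 1000.0   # assumed ~1 kW-class accelerator
BUS_V = 48.0             # board/rack distribution voltage
CORE_V = 0.8             # assumed sub-1V core rail

# P = V * I, so I = P / V at each stage of the conversion.
bus_current_a = BOARD_POWER_W / BUS_V    # current drawn from the 48V input
core_current_a = BOARD_POWER_W / CORE_V  # current the VRM must deliver to the die

print(f"48V input:  {bus_current_a:.0f} A")   # ~21 A
print(f"core rail:  {core_current_a:.0f} A")  # ~1250 A
```

That roughly sixty-fold current step-up is why the regulation has to happen on the board itself, split across many phases placed as close to the die as possible: resistive loss scales with I²R, so at kiloamp currents every extra milliohm between the VRM and the silicon burns real watts.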

Sub-categories

Analysis coming soon. Will cover: HBM as the “limiting reactant,” the structural memory upcycle, NAND vs. HDD share in AI workloads, MPWR’s exposure to Nvidia VRM redesigns, and ABF substrate capacity adds.