The AI infrastructure supercycle has a cooling problem. Nvidia's H100 GPU draws 700W; a GB200 NVL72 rack draws approximately 120kW, roughly 10–15× the power density of a conventional compute rack. Air cooling, which has served data centers for four decades, cannot handle these loads at scale. The transition to direct liquid cooling (DLC) is no longer optional; it is a technical prerequisite for the next generation of AI infrastructure.
The Power Density Problem
Air cooling works through forced convection: fans push cool air across heat sinks, and warm air is exhausted. Because air has low density and low specific heat, the physics cap practical heat rejection at roughly 20–30kW per rack even in high-performance configurations. Beyond that threshold, the air volume and velocity required become impractical, creating noise, static pressure, and airflow-channeling problems that degrade cooling efficiency.
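The airflow ceiling can be made concrete with a back-of-envelope calculation. Using standard air properties (density ~1.2 kg/m³, specific heat ~1005 J/kg·K) and an assumed 12 K supply-to-return temperature rise (a typical figure, not one stated in this piece), the volumetric flow a rack needs scales linearly with its heat load:

```python
# Back-of-envelope: airflow required to remove a rack's heat load.
# Assumptions (illustrative, not from the article): air density 1.2 kg/m^3,
# specific heat 1005 J/(kg*K), 12 K supply-to-return temperature rise.

AIR_DENSITY = 1.2        # kg/m^3
AIR_CP = 1005.0          # J/(kg*K)
DELTA_T = 12.0           # K, supply-to-return rise

def airflow_m3_per_s(load_kw: float) -> float:
    """Volumetric airflow (m^3/s) needed to carry load_kw at the assumed delta-T."""
    return load_kw * 1000.0 / (AIR_DENSITY * AIR_CP * DELTA_T)

def airflow_cfm(load_kw: float) -> float:
    """Same flow in cubic feet per minute (1 m^3/s ~ 2118.88 CFM)."""
    return airflow_m3_per_s(load_kw) * 2118.88

for load in (25, 120):   # ~air-cooling ceiling vs. a GB200 NVL72 rack
    print(f"{load} kW rack: {airflow_cfm(load):,.0f} CFM")
```

A 120kW rack needs roughly five times the airflow of a 25kW one under the same delta-T, which is why the fix beyond ~30kW is changing the coolant, not adding fans.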
The Nvidia GB200 NVL72 system, 72 Blackwell GPUs in a single rack, draws 120kW continuously. H100-based DGX racks run at 60–80kW. These are not edge cases; they are the mainstream AI training configurations being deployed at scale by every major hyperscaler. The industry crossed the air-cooling threshold in 2024, and there is no going back.
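The figures above imply a steep per-slot power budget. A rough division (illustrative only; the rack total also covers CPUs, NVLink switches, NICs, and fans, so the per-GPU share overstates the GPU's own TDP) shows the generational jump:

```python
# Rough per-GPU power budget implied by the rack figures above.
# Note: the 120 kW rack total includes CPUs, NVLink switches, NICs, and
# fans, so per-GPU share > the GPU's standalone TDP.

RACK_KW = 120.0          # GB200 NVL72 rack draw
GPUS_PER_RACK = 72
H100_TDP_KW = 0.7        # H100 SXM TDP, 700 W

per_gpu_share = RACK_KW / GPUS_PER_RACK   # kW of rack power per GPU slot
growth = per_gpu_share / H100_TDP_KW      # vs. one H100's TDP

print(f"{per_gpu_share:.2f} kW per GPU slot, {growth:.1f}x an H100 TDP")
```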
Cooling Technologies Compared
| Technology | Max Heat Removal | PUE | Retrofit Complexity |
|---|---|---|---|
| Air cooling | 20–30 kW/rack | 1.5–1.6 | N/A (existing) |
| Rear-door heat exchangers | 30–50 kW/rack | 1.3–1.4 | Low |
| Direct liquid cooling (cold plate) | 50–100 kW/rack | 1.1–1.2 | Medium |
| Full immersion cooling | 100+ kW/rack | 1.02–1.05 | High (new build) |
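The PUE column translates directly into operating cost. PUE is total facility power divided by IT power, so the non-IT overhead (mostly cooling and power conversion) is IT load × (PUE − 1). A sketch comparing the table's air-cooling and cold-plate DLC midpoints for a single 120kW rack, at a hypothetical $0.08/kWh electricity price (my assumption, not a figure from this piece):

```python
# PUE = total facility power / IT power, so overhead power = IT * (PUE - 1).
# Illustrative: one 120 kW rack, assumed $0.08/kWh electricity price.

HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.08     # USD, assumed for illustration

def annual_overhead_cost(it_load_kw: float, pue: float) -> float:
    """Annual cost of non-IT (mostly cooling) energy for one rack."""
    overhead_kw = it_load_kw * (pue - 1.0)
    return overhead_kw * HOURS_PER_YEAR * PRICE_PER_KWH

air = annual_overhead_cost(120, 1.55)   # midpoint of air-cooling PUE range
dlc = annual_overhead_cost(120, 1.15)   # midpoint of cold-plate DLC range
print(f"air: ${air:,.0f}/yr  DLC: ${dlc:,.0f}/yr  saved: ${air - dlc:,.0f}")
```

Multiplied across thousands of racks, this overhead gap is a large part of why hyperscalers fund the DLC retrofit cost rather than absorb air-cooling PUE.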
Supply Chain & Key Players
The liquid cooling supply chain involves four layers: facility-level chilled water infrastructure, rack-level coolant distribution units (CDUs), component-level cold plates attached directly to GPUs/CPUs, and fluid management systems. Key players across these layers include:
- Vertiv (VRT): The most direct public market beneficiary. Vertiv manufactures CDUs, precision cooling units, and power distribution for data centers. Its liquid cooling order backlog has grown substantially with AI-driven demand.
- Eaton Corporation (ETN): Power management and PDU infrastructure — critical for the electrical side of high-density rack deployments.
- Modine Manufacturing (MOD): Thermal management components, increasingly focused on data center cooling applications.
- CoolIT Systems / Asetek (private/ASETEK.OL): Specialized cold plate and liquid cooling loop manufacturers with deep OEM relationships at Dell, HPE, and Lenovo.
Investment Angle
The liquid cooling buildout is a multi-year capital cycle with strong visibility: hyperscaler capex commitments for AI infrastructure are public and growing. Vertiv (VRT) is the highest-conviction public-market expression: it has direct order flow from major hyperscalers, benefits from pricing power while supply is constrained, and trades at a valuation that does not yet fully reflect a $20B+ TAM by 2028.
Key risk: the DLC transition timeline depends on hyperscaler new-build vs. retrofit decisions. Existing air-cooled facilities will not be replaced immediately, so near-term DLC demand is concentrated in greenfield construction, which is itself concentrated both geographically and by counterparty.
This research is for informational purposes only and does not constitute investment advice. Intermarket Universe does not hold positions in any securities mentioned unless disclosed. All estimates are the author's own analysis derived from public information.