AI Infrastructure

Liquid Cooling & Data Center Infrastructure: The Next Buildout Bottleneck

February 2026 · Issue #2 · 9 min read · Intermarket Universe

The AI infrastructure supercycle has a cooling problem. Nvidia's H100 GPU draws 700W; the GB200 NVL72 rack-scale system draws approximately 120kW, roughly 10–15× the power density of a conventional compute rack. Air cooling, which has served data centers for four decades, is physically incapable of handling these loads at scale. The transition to direct liquid cooling (DLC) is no longer optional; it is a technical prerequisite for the next generation of AI infrastructure.

  • GB200 NVL72 rack power draw: 120kW (10–15× a conventional rack)
  • Liquid cooling TAM by 2028: ~$20B (up from ~$4B in 2023)
  • Target PUE with DLC: 1.2–1.4 (vs. 1.5–1.6 for air)
  • Retrofit lead time for existing data centers: 2–3 years

The Power Density Problem

Air cooling works through convection — fans move cool air across heat sinks, and warm air is exhausted. The physics constrain maximum heat dissipation to approximately 20–30kW per rack in high-performance configurations. Beyond this threshold, the air volume and velocity required become impractical — creating noise, pressure, and airflow channeling problems that degrade cooling efficiency.
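The threshold follows from the sensible-heat relation Q = ṁ·c_p·ΔT: the more heat a rack rejects, the more air must be pushed through it. The short Python sketch below makes that scaling concrete; the air properties and the 15°C supply-to-return temperature rise are illustrative assumptions, not measurements from any specific facility.

```python
# Rough airflow sizing for air-cooled racks (illustrative assumptions only).
# Sensible heat: Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)

AIR_DENSITY = 1.2       # kg/m^3, near sea level (assumed)
AIR_CP = 1005.0         # J/(kg*K), specific heat of air (assumed)
DELTA_T = 15.0          # K, assumed supply-to-return temperature rise
M3S_TO_CFM = 2118.88    # 1 m^3/s expressed in cubic feet per minute

def required_airflow_cfm(rack_power_w: float) -> float:
    """Airflow needed to carry away rack_power_w of heat with air alone."""
    mass_flow = rack_power_w / (AIR_CP * DELTA_T)   # kg/s
    volume_flow = mass_flow / AIR_DENSITY           # m^3/s
    return volume_flow * M3S_TO_CFM

for kw in (10, 30, 80, 120):
    print(f"{kw:>4} kW rack -> ~{required_airflow_cfm(kw * 1000):,.0f} CFM")

# Approximate output: 10 kW needs ~1,200 CFM, 30 kW ~3,500 CFM,
# 80 kW ~9,400 CFM, 120 kW ~14,000 CFM. Fan power grows with roughly
# the cube of fan speed, so the upper end is impractical in a rack.
```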

The Nvidia GB200 NVL72 system — 72 Blackwell GPUs in a single rack — draws 120kW continuously. The H100 DGX pods run at 60–80kW. These are not edge cases; they are the mainstream AI training configurations being deployed at scale by every major hyperscaler. The industry crossed the air cooling threshold in 2024 and there is no going back.
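A back-of-envelope sum shows how a single rack gets to roughly 120kW. The per-component wattages in the sketch below are assumptions for illustration, not Nvidia's published specifications, but they land in the right ballpark:

```python
# Back-of-envelope: where ~120kW per GB200 NVL72 rack comes from.
# All per-component wattages below are rough assumptions for illustration.

components_w = {
    "72 Blackwell GPUs (~1.2 kW each, assumed)": 72 * 1200,
    "36 Grace CPUs (~300 W each, assumed)":      36 * 300,
    "NVLink switch trays (assumed)":             9 * 800,
    "NICs, storage, fans, misc (assumed)":       8000,
}

it_load_w = sum(components_w.values())
conversion_overhead = 0.05  # assumed in-rack power conversion/distribution loss
total_w = it_load_w * (1 + conversion_overhead)

for name, watts in components_w.items():
    print(f"{name:<45} {watts / 1000:6.1f} kW")
print(f"{'Estimated rack total (incl. ~5% overhead)':<45} {total_w / 1000:6.1f} kW")
# Prints roughly 118 kW -- consistent with the ~120kW figure quoted above.
```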

Cooling Technologies Compared

Technology                          | Max Heat Removal | PUE       | Retrofit Complexity
Air cooling                         | 20–30 kW/rack    | 1.5–1.6   | N/A (existing)
Rear-door heat exchangers           | 30–50 kW/rack    | 1.3–1.4   | Low
Direct liquid cooling (cold plate)  | 50–100 kW/rack   | 1.1–1.2   | Medium
Full immersion cooling              | 100+ kW/rack     | 1.02–1.05 | High (new build)
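The PUE column is where the economics show up: PUE is total facility power divided by IT power, so every 0.1 of PUE is another 10% of the IT load spent on overhead. The sketch below compares annual energy cost across the four approaches for an assumed 10MW IT load at an assumed $0.07/kWh; both inputs are illustrative, not operator disclosures.

```python
# What the PUE column implies economically: PUE = total facility power / IT power.
# Illustrative comparison for an assumed 10 MW IT load at an assumed $0.07/kWh.

IT_LOAD_MW = 10.0
PRICE_PER_KWH = 0.07
HOURS_PER_YEAR = 8760

def annual_cost_musd(pue: float) -> float:
    """Annual facility energy cost in $M for the assumed IT load and power price."""
    annual_mwh = IT_LOAD_MW * pue * HOURS_PER_YEAR
    return annual_mwh * 1000 * PRICE_PER_KWH / 1e6

scenarios = {
    "Air cooling":              1.55,
    "Rear-door heat exchanger": 1.35,
    "Direct liquid cooling":    1.15,
    "Full immersion":           1.03,
}

air_cost = annual_cost_musd(scenarios["Air cooling"])
for name, pue in scenarios.items():
    cost = annual_cost_musd(pue)
    print(f"{name:<26} PUE {pue:.2f}  ~${cost:.2f}M/yr"
          f"  (vs. air: -${air_cost - cost:.2f}M)")
# Moving from ~1.55 to ~1.15 saves on the order of $2-3M per year
# for every 10 MW of IT load under these assumed inputs.
```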

Supply Chain & Key Players

The liquid cooling supply chain involves four layers: facility-level chilled water infrastructure, rack-level coolant distribution units (CDUs), component-level cold plates attached directly to GPUs/CPUs, and fluid management systems. Key players across these layers include:

  • Vertiv (VRT): The most direct public market beneficiary. Vertiv manufactures CDUs, precision cooling units, and power distribution for data centers. Its liquid cooling order backlog has grown substantially with AI-driven demand.
  • Eaton Corporation (ETN): Power management and PDU infrastructure — critical for the electrical side of high-density rack deployments.
  • Modine Manufacturing (MOD): Thermal management components, increasingly focused on data center cooling applications.
  • CoolIT Systems / Asetek (private/ASETEK.OL): Specialized cold plate and liquid cooling loop manufacturers with deep OEM relationships at Dell, HPE, and Lenovo.

Investment Angle

The liquid cooling buildout is a multi-year capital cycle with strong visibility — hyperscaler capex commitments for AI infrastructure are public and growing. Vertiv (VRT) is the highest-conviction public market expression: it has direct order flow from major hyperscalers, is benefiting from pricing power as supply is constrained, and trades at a valuation that does not yet fully reflect a $20B+ TAM by 2028.
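For reference, the TAM figures cited above (~$4B in 2023 to ~$20B by 2028) imply an annual growth rate of roughly 38%:

```python
# Implied growth rate from the TAM figures cited above (author's estimates).
tam_2023, tam_2028, years = 4.0, 20.0, 5   # $B, $B, elapsed years
cagr = (tam_2028 / tam_2023) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")          # ~38.0% per year
```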

Key risk: the DLC transition timeline depends on hyperscaler new-build vs. retrofit decisions. Existing air-cooled facilities will not be replaced immediately, so near-term DLC demand is concentrated in greenfield construction, which is itself concentrated geographically and by counterparty.


This research is for informational purposes only and does not constitute investment advice. Intermarket Universe does not hold positions in any securities mentioned unless disclosed. All estimates are the author's own analysis derived from public information.