NVIDIA / OpenAI: $100B Partnership to Deploy 10GW of AI Infrastructure

by Brad Gastwirth, Global Head of Research and Market Intelligence

Executive Summary

NVIDIA and OpenAI have announced a strategic partnership to build out at least 10 gigawatts of AI infrastructure. NVIDIA will invest up to $100 billion, phased against deployment milestones, with the first gigawatt of systems expected in 2H26 on NVIDIA’s Vera Rubin platform.

This is a clear signal that OpenAI is planning for exponential model scale and that NVIDIA is stepping into a more vertically integrated role: not just supplying GPUs, but also committing capital and aligning infrastructure. While this raises execution and regulatory risks, it underscores NVIDIA’s intent to cement itself as the indispensable backbone of generative AI compute.

Strategic Context

For NVIDIA: Expands role from hardware vendor to infrastructure partner, ensuring OpenAI remains GPU-centric.

For OpenAI: Secures long-term access to cutting-edge compute at scale, reducing reliance on hyperscale clouds alone.

For Investors: Creates visibility into continued hyperscale demand for GPUs and surrounding infrastructure, but increases NVIDIA’s financial and operational exposure.

Supply Chain Implications

The scale of a 10GW build-out has far-reaching effects across the semiconductor and infrastructure supply chain:

Semiconductors / Advanced Packaging

CoWoS / Substrates: Advanced-packaging capacity (TSMC CoWoS, Samsung I-Cube) will remain a bottleneck relative to demand. Substrate suppliers such as Unimicron, Kinsus, and Ibiden are positioned to benefit.

HBM Memory: OpenAI’s training scale requires HBM3/4 at massive volumes. Beneficiaries include SK Hynix, Samsung, and Micron. Tightness likely persists through 2026, sustaining elevated pricing.

GPU Production: TSMC will be stretched to deliver wafer starts at N3/N2 nodes for NVIDIA GPUs. Wafer allocation remains a critical constraint.

Servers / ODMs / Components

ODM/OEM Builds: Expect outsized demand for Quanta, Wistron, Foxconn, Supermicro. AI servers will need advanced cooling (liquid immersion, cold plates), creating pull-through for thermal and connector vendors (Aavid, Molex, Amphenol).

Networking: 10GW implies high-speed interconnect at scale. Broadcom, Marvell, and Arista stand to benefit from optical/Ethernet upgrades; demand for InfiniBand/Ethernet switches remains elevated.

Power & Datacenters

Energy Demand: 10GW ≈ the average power draw of 8–10 million US homes. A build-out of this scale requires grid partnerships, renewable PPA contracts, and siting near cheap, abundant power. Expect utility capex tailwinds.
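The homes-equivalent figure above can be sanity-checked with a quick back-of-envelope calculation. This sketch assumes an average US household consumption of roughly 10,700 kWh per year (in line with EIA residential averages); that assumption, not the source note, drives the exact output.

```python
# Back-of-envelope check: how many average US homes does 10 GW power?
GW = 1e9                                  # watts per gigawatt
build_out_w = 10 * GW                     # announced 10 GW build-out

# Assumed average US household usage (~10,700 kWh/year, EIA-style estimate)
avg_home_kwh_per_year = 10_700
avg_home_w = avg_home_kwh_per_year * 1000 / (365 * 24)  # ~1,220 W average draw

homes = build_out_w / avg_home_w
print(f"~{homes / 1e6:.1f} million homes")  # lands in the 8-10 million range
```

Under these assumptions the result is roughly 8 million homes, consistent with the 8–10 million range cited above; a lower per-home usage estimate pushes the figure toward the top of that range.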

Data Center Builders: Firms like Equinix, Digital Realty, as well as greenfield builders in North America and Europe, will be critical partners.

Cooling / Infrastructure: Scale creates pull-through for HVAC, liquid cooling, water treatment (Veolia, Trane, Johnson Controls).

Potential Risks

Bottlenecks: HBM, CoWoS substrate, and GPU wafer allocation remain the tightest links.

Lead Times: Server/ODM supply could slip if component ecosystems can’t keep pace.

Capex & Financing: Up-front infrastructure spend may crowd out smaller buyers, further stratifying access to compute.

Company Takeaways

Bullish for NVIDIA’s ecosystem: Ensures multi-year visibility for GPUs, substrates, memory, and networking vendors.

Positive for suppliers with bottleneck tech: HBM (SK Hynix, Micron), CoWoS (Unimicron, Kinsus), advanced networking (Broadcom, Arista).

Risks: Execution slippage, regulatory hurdles (export controls, energy), and sustainability scrutiny could temper near-term upside.

What to Watch

Ramp of the first 1GW Vera Rubin system in 2H26.

Allocation of HBM supply among NVIDIA, AMD, and hyperscalers.

Expansion of CoWoS/substrate capacity at TSMC and suppliers.

Data center siting announcements (North America vs overseas).

Regulatory response around power usage, environmental impact.
