OpenAI × AMD: what it means (and what it doesn’t)

by Brad Gastwirth, Global Head of Research and Market Intelligence

What was announced (today)

Multi-year supply deal: OpenAI commits to deploy 6 GW of AMD Instinct GPUs, beginning with ~1 GW in 2H26 on the MI450 generation.
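For scale, a rough back-of-envelope on the first tranche (all parameters below are illustrative assumptions, not disclosed deal terms): at an assumed 1.5–2.0 kW of all-in facility power per accelerator slot, 1 GW implies several hundred thousand accelerators.

```python
# Back-of-envelope sizing of the ~1 GW first tranche.
# The kW-per-slot figures are illustrative assumptions (GPU plus
# amortized CPU, networking, and cooling overhead), not deal terms.

def accelerators_for(gw: float, kw_per_slot: float) -> int:
    """Estimate accelerator count for a given facility power budget."""
    return int(gw * 1_000_000 / kw_per_slot)

low = accelerators_for(1.0, 2.0)   # conservative per-slot power assumption
high = accelerators_for(1.0, 1.5)  # aggressive per-slot power assumption
print(f"~{low:,} to ~{high:,} accelerators")
```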

Financial kicker: OpenAI receives warrants for up to ~10% of AMD (≈160M shares), contingent on delivery and share-price milestones.
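As a sanity check on the warrant sizing (approximate figures): if ~160M shares represent ~10% of AMD, that implies roughly 1.6B shares outstanding, in line with AMD's reported share count.

```python
# Sanity check on the warrant sizing. Both inputs are the approximate
# figures from the announcement, not exact filing numbers.
warrant_shares = 160_000_000   # ~160M shares
stake = 0.10                   # ~10% of AMD
implied_outstanding = warrant_shares / stake
print(f"implied shares outstanding: {implied_outstanding / 1e9:.1f}B")
```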

Why this is not a “replacement” for Nvidia

Hedging + elasticity: OpenAI appears to be diversifying compute sources to de-risk supply, pricing, and roadmap timing. Public remarks and prior disclosures indicate OpenAI will keep using Nvidia while developing in-house silicon; the AMD deal seems additive, not substitutive.

Workload fit: AMD’s near-term strength looks skewed toward inference TCO and open tooling (ROCm 7), while Nvidia retains the deepest platform and installed base for scaled training. Expect a dual-vendor strategy where specific models/phases are steered to the best price/perf at any point in time.

Timing: First 1 GW lands in late 2026; Nvidia capacity remains critical before then.

Supply-chain implications

HBM demand: The deal implies substantial HBM3E → HBM4 pull-through (capacity, stacks, bandwidth). Expect continued tightness and mix-up for leading suppliers (SK hynix, Micron, Samsung) as both Nvidia Rubin-class parts and AMD MI4xx absorb HBM4.

Advanced packaging (CoWoS): AMD MI3xx uses CoWoS-S today; MI4xx is expected to lean on larger CoWoS variants. TSMC is already ramping CoWoS from ~35k wpm (’24) toward ~70k wpm in ’25 and ~90–100k wpm by ’26, but allocation remains the bottleneck across vendors.
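Those ramp figures translate to roughly a doubling in '25 and ~35% growth in '26 (using the midpoint of the 90–100k range for '26):

```python
# Year-over-year growth implied by the cited CoWoS ramp.
# The 2026 entry uses the midpoint of the ~90-100k wpm range.
ramp_wpm = {"2024": 35_000, "2025": 70_000, "2026": 95_000}

years = sorted(ramp_wpm)
for prev, cur in zip(years, years[1:]):
    growth = ramp_wpm[cur] / ramp_wpm[prev] - 1
    print(f"{prev} -> {cur}: {growth:.0%}")
```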

Racks / interconnect: Rack-scale builds and UALink/Ultra-Ethernet moves suggest non-GPU components (optics, NICs, high-current VRMs, liquid-cooling hardware, PSUs, switch silicon) see step-function demand alongside GPUs.

What this means for TSMC and other foundries

TSMC wins near-term: AMD’s MI350 is on a 3nm-class node and current Instinct parts use TSMC’s CoWoS packaging; MI450 timing (’26) likely keeps TSMC as the primary foundry/packaging partner and tightens N3 + CoWoS utilization.

Capacity mix shifts: With OpenAI’s 6-GW plan and Nvidia’s Rubin (’26) also expected on TSMC N3 + HBM4, TSMC’s advanced packaging remains the gating resource. Expect continued price/margin power for TSMC’s advanced nodes and OSAT partners tied to CoWoS.

Alt foundry chatter: There are periodic reports of AMD exploring Intel Foundry Services for select products; this could evolve into a long-term multi-foundry hedge, but nothing in today’s OpenAI deal changes the TSMC-centric reality for leading AI GPUs through 2026.

Strategic take (directional)

For OpenAI: This appears to lock in multi-gen supply, lower blended cost/TFLOP, and bargaining power versus any single vendor. Expect them to steer latency-sensitive inference and some fine-tuning toward AMD where ROCm and model kernels are most mature by ’26.

For AMD: This looks like a flagship demand anchor that could pull up broader ecosystem adoption (OEM racks, cloud SKUs), accelerate ROCm hardening, and support longer-dated HBM4 commitments.

For Nvidia: Not displaced, because aggregate AI compute TAM is expanding. But procurement managers will have credible multi-source leverage in 2026–27 as AMD MI4xx ramps alongside Nvidia Rubin systems.

Watch-items for procurement

Kernel maturity + model parity (ROCm 7 and beyond) on your target LLMs/agents by mid-2026.

HBM4 availability and CoWoS-L/S slotting vs. your delivery window.

Total rack TCO (cooling, optics, NICs, power distribution) vs. token/$ targets, not just GPU list price.
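The last point can be made concrete with a toy $/token model. Every number below is an illustrative assumption for comparison purposes, not a quoted price or benchmark: a rack with a lower sticker price can still lose on cost per token if its throughput lags.

```python
# Toy rack-level $/token model. All inputs are illustrative
# assumptions, not quoted prices or measured benchmarks.

def dollars_per_million_tokens(
    rack_capex: float,      # GPUs + cooling + optics + NICs + PDU, in $
    amort_years: float,     # straight-line amortization period
    rack_kw: float,         # sustained rack power draw, kW
    power_cost_kwh: float,  # blended $/kWh including PUE
    tokens_per_sec: float,  # sustained inference throughput per rack
) -> float:
    secs_per_year = 365 * 24 * 3600
    capex_per_sec = rack_capex / (amort_years * secs_per_year)
    power_per_sec = rack_kw * power_cost_kwh / 3600
    return (capex_per_sec + power_per_sec) / tokens_per_sec * 1e6

# Hypothetical comparison: rack B is ~20% cheaper up front but
# ~25% slower, so it ends up more expensive per token than rack A.
a = dollars_per_million_tokens(3_000_000, 4, 120, 0.10, 400_000)
b = dollars_per_million_tokens(2_400_000, 4, 130, 0.10, 300_000)
print(f"rack A: ${a:.3f}/M tok, rack B: ${b:.3f}/M tok")
```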
