Constraint Capital #4 - Apr 2026
Constraint Capital Issue #4 breaks down the AI Factory announcements from NVIDIA GTC 2026 and TERAFAB's pledge that it is 'not going to just do conventional compute.'
In the history of industrial revolutions, we didn’t just invent the engine; we built the factory. March 2026 marked that same transition for the digital age. We have moved past the era of singular, hand-crafted AI models and entered the era of the AI Factory.
Demand for intelligence has effectively outstripped our ability to house it. In this fourth edition of Constraint Capital, we set the stage with this month’s dual seismic events: NVIDIA’s GTC 2026 /YouTube/ and the joint SpaceX/Tesla TERAFAB announcement /SpaceX on X/.
As NVIDIA CEO Jensen Huang recently declared, “I love constraints... because in a world of constraint, you have no choice but to choose the best.” /Tom’s Hardware/
If 2025 was about the ‘what’ of AI, 2026 is about the ‘where’ and ‘with what.’ We are mapping a future where the bottlenecks are no longer just in the code, but in the very atoms required to stand up a factory.
Priority 1: The “Triple Wall” (Land, Power, and Shell)
Huang identifies these as the ultimate gatekeepers. When land, power, and shell are constrained, an AI firm cannot afford to experiment with arbitrary hardware.
The Flight to Quality: Because you cannot easily expand a 1-gigawatt footprint, you must populate it with the highest-performing silicon to maximize revenue-per-watt. As Huang notes, “If I choose poorly, my revenues are affected... they can’t choose poorly.”
The TERAFAB Pivot: Musk’s move to orbital compute is the ultimate admission that the terrestrial “Triple Wall” has become impenetrable. 80% of TERAFAB’s output is heading for space specifically to bypass the land and power constraints of Earth.
Priority 2: The “Atoms” (Wafers, CoWoS, and HBM)
The physical production of chips remains the most immediate bottleneck. NVIDIA has moved from a just-in-time to a “total-security” model.
Secured Supply: Huang has confirmed NVIDIA has “all the memories (HBM), all the wafers, and all the CoWoS (packaging).” By securing the entire DRAM and TSMC production lines for years in advance, NVIDIA has turned a component shortage into a competitive moat.
Money is the most obvious challenge for Elon Musk’s chip venture. To build 1 TW of AI silicon per year, Elon Musk’s TeraFab will need to process the equivalent of 22.4 million Rubin Ultra GPU wafers per year, 2.716 million Vera CPU wafers per year, and 15.824 million HBM4E wafers per year, according to estimates from premier semiconductor analysis firm Bernstein. To do so, TeraFab will need from 142 to 358 fabs, the report claims.
From “Analyzing Elon Musk’s TeraFab — A step towards Tesla and SpaceX’s partial vertical integration, or an unattainable dream?” /Tom’s Hardware/
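A quick back-of-envelope check on the Bernstein figures quoted above. The wafer counts are from the article; the only assumption added here is none at all — we simply back out what per-fab throughput the 142–358 fab range implies, to see whether it lands in the realistic range for a leading-edge fab (very roughly 100k–300k wafer starts per year).

```python
# Wafer volumes quoted in the Bernstein estimate (wafers/year)
rubin_ultra_gpu = 22.4e6
vera_cpu        = 2.716e6
hbm4e           = 15.824e6

total = rubin_ultra_gpu + vera_cpu + hbm4e
print(f"Total wafers/year: {total / 1e6:.2f}M")  # ≈ 40.94M

# Implied per-fab throughput at each end of the 142-358 fab range
for fabs in (142, 358):
    per_fab = total / fabs
    print(f"{fabs} fabs -> {per_fab / 1e3:,.0f}k wafers/fab/year "
          f"(~{per_fab / 12 / 1e3:.1f}k wafer starts/month)")
```

The implied throughput works out to roughly 114k–288k wafers per fab per year (about 9.5k–24k wafer starts per month), which is indeed the scale of a large modern fab — so the 142–358 range is internally consistent with the quoted wafer volumes.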
The 2% Problem: Musk’s TERAFAB is a response to this exact constraint. If NVIDIA has secured the atoms, Tesla and SpaceX have no choice but to build their own fabs, closing the loop on their supply chain to bypass the global supply ceiling.
Priority 3: The “Unseen” Essentials (Copper, MLCCs, and Cables)
The most overlooked constraints are the passive components that hold the factory together.
Everything from Copper to Capacitors: Huang highlighted that NVIDIA has secured everything down to multilayer ceramic capacitors (MLCCs), connectors, and cables.
The Infrastructure Burden: The transition to GTC 2026’s 800V DC power architectures and liquid-to-chip cooling requires a massive influx of specialized copper and thermal components that are already scarce, handing enormous pricing power to those who own the supply.
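The physics behind the 800V move is simple Ohm’s-law arithmetic: for a fixed rack power, bus current (and thus the copper cross-section needed to carry it) scales inversely with voltage. A minimal sketch — the 1 MW rack figure and the 54 V legacy bus are illustrative assumptions, not figures from the keynote:

```python
def bus_current(power_w: float, voltage_v: float) -> float:
    """Current drawn on a DC bus: I = P / V."""
    return power_w / voltage_v

rack_power = 1_000_000  # hypothetical 1 MW rack, for illustration

i_54v  = bus_current(rack_power, 54)   # legacy-style low-voltage bus
i_800v = bus_current(rack_power, 800)  # GTC 2026 800V DC architecture

print(f"54 V bus:  {i_54v:,.0f} A")               # ≈ 18,519 A
print(f"800 V bus: {i_800v:,.0f} A")              # 1,250 A
print(f"Current reduction: {i_54v / i_800v:.1f}x")  # ≈ 14.8x
```

Even with the ~15x current reduction that 800V delivers, a megawatt-class rack still pushes over a kiloampere through its busbars — which is why the copper and connector supply chain matters as much as the silicon.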
All content published on this newsletter is based on public information and independent research. Opinions are the authors’ own and have been sanitized through AI engines, which can make mistakes. This newsletter is not financial advice, and readers should always do their own research before investing in any security.
Among the most important talks at GTC was Jeff Dean and Bill Dally exploring “the critical intersections of hardware innovation, systems scaling, and algorithmic advancement needed to propel AI into the 2026–2030 era of agentic systems, ultra-low-latency reasoning, and energy-efficient scaling.” Rewatch below.
The TERAFAB announcement has sparked speculation about non-conventional AI compute. So what could this unconventional compute be? Let’s dive in…



