The Thesis
Oil was the commodity of the 20th century. Semiconductors are the commodity of the early 21st. AI — the capability itself, not just the hardware — is becoming the commodity that everything else depends on. And like oil before it, control of the AI supply chain is becoming a question of global security.
The AI Supply Chain
AI capability depends on a stack, and each layer has chokepoints:
Hardware
- Advanced chips: TSMC fabricates ~90% of the world's most advanced semiconductors. A single company, on a single island, in one of the most geopolitically contested regions on earth.
- Lithography: ASML is the sole manufacturer of EUV lithography machines. One Dutch company. One supply chain for the tool that makes the tools.
- Memory and interconnect: HBM (high-bandwidth memory) dominated by Samsung and SK Hynix. NVLink and custom interconnects from NVIDIA.
- Export controls: US restrictions on exporting advanced GPUs and chipmaking tools to China, paired with CHIPS Act subsidies to reshore fabrication, are already weaponizing this supply chain.
Energy
- AI training and inference are energy-intensive. Data center power demand is growing faster than grid capacity in many regions.
- Access to cheap, reliable energy becomes a competitive advantage for AI development.
- Nations with energy abundance (or nuclear buildout capacity) have a structural advantage.
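The energy bullets above can be made concrete with back-of-envelope arithmetic. Every figure below (cluster size, per-GPU draw, overhead factor, run length) is an illustrative assumption, not a number from this document:

```python
# Back-of-envelope: electricity for a hypothetical large training run.
# All inputs are illustrative assumptions.
gpus = 25_000          # accelerators in the cluster
watts_per_gpu = 700    # rough TDP of a modern datacenter GPU
pue = 1.2              # power usage effectiveness (cooling, networking overhead)
days = 100             # length of the run

hours = days * 24
megawatts = gpus * watts_per_gpu * pue / 1e6   # sustained facility draw
mwh = megawatts * hours                        # total energy over the run

print(f"Sustained draw: {megawatts:.1f} MW")
print(f"Total energy:   {mwh:,.0f} MWh")
```

A sustained draw in the tens of megawatts is the kind of load that requires dedicated grid capacity or new generation, which is why energy abundance becomes a structural advantage rather than a line item.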
Data
- Training data is a resource that can be hoarded, poisoned, or embargoed.
- Synthetic data from existing models creates a recursive dependency — your data supply chain depends on your AI supply chain.
- Data sovereignty laws (GDPR, etc.) fragment the global data market.
Talent
- The number of people who can train frontier models is small — perhaps a few thousand worldwide.
- Immigration policy becomes AI policy. Visa restrictions are talent export controls.
Models and Weights
- Trained model weights are the "refined product" of the AI supply chain.
- They can be copied at near-zero marginal cost — unlike oil or chips.
- This makes them simultaneously easier to distribute and harder to control.
- Open-weight releases are irreversible proliferation events.
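The near-zero-marginal-cost claim is literal: a weights file is just bytes, and a copy is bit-identical to the original. A minimal sketch, using random bytes as a stand-in for a real weights file:

```python
import hashlib
import os
import shutil
import tempfile

# Stand-in "weights": 1 MiB of random bytes in a temp directory.
tmp = tempfile.mkdtemp()
original = os.path.join(tmp, "model.safetensors")
with open(original, "wb") as f:
    f.write(os.urandom(1 << 20))

# Copying is an ordinary byte copy; no fab, refinery, or pipeline required.
copy = os.path.join(tmp, "model_copy.safetensors")
shutil.copy(original, copy)

def digest(path):
    """SHA-256 of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The copy is indistinguishable from the original.
print(digest(original) == digest(copy))
```

The same property that makes distribution trivial makes recall impossible: there is no way to distinguish, let alone retrieve, the millionth copy.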
Control Points and Leverage
Each chokepoint is a potential point of control — and a potential point of failure:
- Hardware chokepoints: Whoever controls TSMC and ASML controls the physical substrate of AI. This is why Taiwan's geopolitical status has become an AI safety question.
- Energy chokepoints: AI development may concentrate in regions with energy surplus, creating new geopolitical dependencies.
- Regulatory chokepoints: Export controls, data sovereignty, and compute governance can throttle AI development — but only for actors subject to the regulation.
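Compute governance, mentioned above, typically keys on estimated training compute. A sketch using the common C ≈ 6·N·D approximation (total FLOPs ≈ 6 × parameters × training tokens); the model sizes are illustrative assumptions, and the 10^26 FLOP line echoes the reporting threshold in the 2023 US executive order on AI:

```python
# Compute-governance sketch: estimate training compute with the
# 6*N*D rule of thumb, then compare against a reporting threshold.
THRESHOLD = 1e26  # FLOPs; the reporting line used in the 2023 US executive order

def training_flops(params: float, tokens: float) -> float:
    """Rough total training compute via the common 6ND approximation."""
    return 6 * params * tokens

# Hypothetical training runs (sizes are illustrative assumptions).
runs = {
    "70B params, 15T tokens": training_flops(70e9, 15e12),
    "1T params, 30T tokens": training_flops(1e12, 30e12),
}
for name, flops in runs.items():
    status = "reportable" if flops > THRESHOLD else "below threshold"
    print(f"{name}: {flops:.2e} FLOPs -> {status}")
```

Note the limit the asymmetry section describes: this kind of threshold only binds actors who report honestly and train on monitorable hardware.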
The Asymmetry
Physical supply chain control is powerful but slow. Digital supply chain control (model weights, algorithms, data) is fast but leaky. A nation can restrict GPU exports, but it can't un-release an open-weight model.
This asymmetry means:
- Hardware controls are effective for slowing development but not preventing it
- Software/algorithmic progress can route around hardware constraints (efficiency improvements, distillation, novel architectures)
- The window of hardware-based control is closing as algorithmic efficiency improves
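Distillation, named above as a route around hardware constraints, works by training a small student model to match a large teacher's temperature-softened output distribution. A minimal numpy sketch of the core loss, with made-up logits standing in for real model outputs:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened."""
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    """KL(p || q): how far the student's distribution q is from the teacher's p."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Made-up logits for illustration.
teacher_logits = np.array([4.0, 1.0, 0.5])
good_student = np.array([3.8, 1.1, 0.4])   # close to the teacher
bad_student = np.array([0.5, 4.0, 1.0])    # far from the teacher

T = 2.0  # temperature > 1 exposes the teacher's full output distribution
p = softmax(teacher_logits, T)
loss_good = kl(p, softmax(good_student, T))
loss_bad = kl(p, softmax(bad_student, T))
print(f"distillation loss (good student): {loss_good:.4f}")
print(f"distillation loss (bad student):  {loss_bad:.4f}")
```

Minimizing this loss transfers capability without re-running the teacher's original hardware-heavy training, which is exactly why hardware controls alone cannot close the window.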
Security Implications
- Single points of failure: The concentration of semiconductor manufacturing in Taiwan is a civilizational-scale single point of failure. Natural disaster, military conflict, or political disruption could cut off the supply of new frontier chips and stall global AI development for years.
- Weaponization of supply chains: Export controls are already being used as strategic tools. This will intensify.
- Proliferation: Once model weights are released or stolen, they can't be recalled. AI proliferation doesn't look like nuclear proliferation — it's faster, cheaper, and harder to detect.
- Dependency: Nations that depend on imported AI capability are strategically vulnerable in the same way that oil-importing nations were in the 20th century.
What This Means for Safety
AI safety discussions often assume a world where the main actors are a handful of US/UK AI labs. The supply chain picture is more complex:
- Safety standards set by one country may not apply to models trained elsewhere using different supply chains
- Hardware controls that slow frontier development may accelerate unsafe development on less controlled hardware
- The global distribution of AI capability determines the global distribution of AI risk