Stable Equilibria of AGI — and AI Rights as a Solution to Power

Prior reading: Game Theory for AI Safety | Decision Theory for AI Safety | Competitive Dynamics

The Question
If AGI is developed, what stable configurations could the world settle into? Not all equilibria are equally survivable.

Possible Equilibria
Many Competing Systems
Multiple AGI systems operated by different actors (nations, companies). No single dominant system. Closest to today's trajectory.
Stability: Moderate. Competition continues. Arms race dynamics persist. Risk of conflict, but also checks and balances. ...

March 1, 2026 · 3 min · Austin T. O'Quinn

Why AI Policy Is Hard Even When Everyone Agrees

Prior reading: Competitive Dynamics and Safety | P-Hacking and Benchmarks

The Premise
The competitive-dynamics post explains why motivation for AI safety is lacking. This post assumes the opposite: everyone is motivated. Every nation, every company, every researcher genuinely wants to regulate AI well. It's still incredibly hard. Here's why.

The Object You're Regulating Keeps Changing
Capability Jumps Are Unpredictable
Regulation assumes you can define what you're regulating. But AI capabilities change discontinuously. A model goes from "can't do X" to "can do X fluently" between training runs, sometimes at scale thresholds no one predicted. ...

February 18, 2026 · 11 min · Austin T. O'Quinn