The Quiet Universe: ASI and the Fermi Paradox

Prior reading: Decision Theory for AI Safety | Competitive Dynamics and Safety

A version of this argument was originally posted on LessWrong. I've reworked it here with some additional context and a less formal tone. I'm a computer scientist, not an astrophysicist; corrections welcome.

The Problem

The Fermi paradox asks a simple question: given the size and age of the universe, where is everyone? Standard answers include: life is rare (the Great Filter), civilizations destroy themselves before going interstellar, the distances are too vast, or everyone is hiding from everyone else (the Dark Forest, from Liu Cixin's Three-Body Problem). ...

March 8, 2026 · 11 min · Austin T. O'Quinn
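
To feel the force of the question, it helps to run the numbers. Here is a rough Drake-style back-of-envelope in Python; it is my own sketch, not from the post, and every parameter value is an illustrative placeholder. Even inputs that look pessimistic leave an expected population far above zero.

```python
# Drake-style back-of-envelope: expected number of detectable civilizations
# in the Milky Way. All parameter values below are illustrative placeholders,
# not estimates from the post.

stars_in_galaxy = 2e11       # rough star count for the Milky Way
frac_with_planets = 0.5      # fraction of stars hosting planets
habitable_per_star = 0.1     # habitable planets per planet-hosting star
frac_life = 1e-3             # fraction of habitable planets that develop life
frac_intelligent = 1e-3      # fraction of those that develop intelligence
frac_detectable = 0.1        # fraction detectable at interstellar range

expected = (stars_in_galaxy * frac_with_planets * habitable_per_star
            * frac_life * frac_intelligent * frac_detectable)
print(f"Expected detectable civilizations: {expected:,.0f}")
# ~1,000 even with these pessimistic-looking fractions -- hence "where is everyone?"
```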

Stable Equilibria of AGI — and AI Rights as a Solution to Power

Prior reading: Game Theory for AI Safety | Decision Theory for AI Safety | Competitive Dynamics

The Question

If AGI is developed, what stable configurations could the world settle into? Not all equilibria are equally survivable.

Possible Equilibria

Many Competing Systems

Multiple AGI systems operated by different actors (nations, companies). No single dominant system. Closest to today's trajectory.

Stability: Moderate. Competition continues. Arms race dynamics persist. Risk of conflict, but also checks and balances. ...

March 1, 2026 · 3 min · Austin T. O'Quinn
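
The "arms race dynamics persist" point has a standard game-theoretic shape. Below is a minimal sketch with hypothetical payoff numbers I chose for illustration: two rival actors each choose to race or to invest in safety, racing strictly dominates, and the unique Nash equilibrium is worse for both than mutual safety investment.

```python
# Two actors (e.g., rival labs) each choose RACE or SAFETY.
# Payoffs (row player, column player) are hypothetical, chosen so that
# racing strictly dominates -- a Prisoner's Dilemma-shaped arms race.
RACE, SAFETY = 0, 1
payoffs = {
    (SAFETY, SAFETY): (3, 3),  # both invest in safety: good for both
    (SAFETY, RACE):   (0, 4),  # unilateral safety: exploited by the racer
    (RACE,   SAFETY): (4, 0),
    (RACE,   RACE):   (1, 1),  # mutual racing: the arms-race equilibrium
}

def best_response(opponent_action, player):
    """Return the action maximizing this player's payoff against a fixed opponent."""
    def my_payoff(action):
        profile = ((action, opponent_action) if player == 0
                   else (opponent_action, action))
        return payoffs[profile][player]
    return max((RACE, SAFETY), key=my_payoff)

# Racing is a best response to everything, so (RACE, RACE) is the unique
# Nash equilibrium even though (SAFETY, SAFETY) pays more to both players.
for opp in (RACE, SAFETY):
    assert best_response(opp, player=0) == RACE
    assert best_response(opp, player=1) == RACE
print("Unique equilibrium: (RACE, RACE) with payoffs", payoffs[(RACE, RACE)])
```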

Game Theory for AI Safety

Why Game Theory Matters for AI Safety

Most AI safety problems are not single-agent optimization problems. They're multi-agent strategic interactions:

Nations deciding whether to regulate (→ see competitive-dynamics post)
Companies deciding how much to invest in safety
Models interacting with users, other models, and themselves across time
Researchers deciding what to publish

Game theory is the math of strategic interaction. If you don't know it, you can't reason precisely about any of these. ...

March 26, 2025 · 5 min · Austin T. O'Quinn
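
As a concrete taste of the "across time" item in the list above, here is a small repeated-game sketch. The payoffs and strategies are illustrative choices of mine, not from the post: in a one-shot Prisoner's Dilemma defection dominates, but when the same two agents meet repeatedly, a history-dependent strategy like tit-for-tat can sustain cooperation.

```python
# Repeated play changes the strategic picture: strategies can condition on
# history. Payoffs are hypothetical Prisoner's Dilemma values (C = cooperate,
# D = defect); the strategies are illustrative, not from the post.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Run a repeated game and return each player's total payoff."""
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        total_a, total_b = total_a + pa, total_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return total_a, total_b

print("tit-for-tat vs tit-for-tat:  ", play(tit_for_tat, tit_for_tat))
print("tit-for-tat vs always-defect:", play(tit_for_tat, always_defect))
# Mutual tit-for-tat sustains cooperation (300, 300); against always-defect
# it loses only the first round (99, 104) -- repetition makes cooperation viable.
```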