Tool Lock-In: Why AI Is Stuck Using Human Tools

Prior reading: Competitive Dynamics and Safety

Humans and AI Are Limited by Different Things

Human Bottlenecks

- Working memory: ~7 items. Can't hold a full codebase in mind.
- Speed: Slow reading, slow typing, slow context-switching.
- Attention: Serial processor. Can focus on one thing at a time.
- Consistency: Fatigue, boredom, emotion. Performance degrades over a day.
- Bandwidth: Eyes and fingers. We interact with computers through tiny I/O channels.

AI Bottlenecks

- Context window: Large but finite. No persistent memory across sessions without scaffolding.
- Grounding: No physical intuition, no lived experience, no embodied sense of consequence.
- Reliability: Confident and wrong. Hallucinates. Struggles with precise multi-step reasoning over long horizons.
- Agency: No persistent goals, no ability to autonomously decide what to work on next (without scaffolding).
- Verification: Can generate but struggles to verify its own outputs.

These are different bottlenecks. Tools optimized for one set actively conflict with the other. ...

March 22, 2026 · 5 min · Austin T. O'Quinn

AI Cracking a Research Paper: Gaming Peer Review

Prior reading: P-Hacking and Benchmarks | The Specification Problem

The Setup

AI writing assistants can now help researchers improve papers. Sounds good. But "improve" means "optimize for acceptance" — and acceptance is determined by reviewers with biases, time constraints, and heuristics.

What the AI Optimizes

- Framing: Present results in the most favorable light.
- Buzzwords: Match the vocabulary reviewers respond to.
- Structure: Follow templates that reviewers associate with quality.
- Claims: Calibrate confidence to what reviewers will accept without pushing back.
- Related work: Cite the likely reviewers' papers.

None of this is about being more true. It's about being more accepted. ...

March 15, 2026 · 2 min · Austin T. O'Quinn

The Quiet Universe: ASI and the Fermi Paradox

Prior reading: Decision Theory for AI Safety | Competitive Dynamics and Safety

A version of this argument was originally posted on LessWrong. I've reworked it here with some additional context and a less formal tone. I'm a computer scientist, not an astrophysicist — corrections welcome.

The Problem

The Fermi paradox asks a simple question: given the size and age of the universe, where is everyone? Standard answers include: life is rare (the Great Filter), civilizations destroy themselves before going interstellar, the distances are too vast, or everyone is hiding from everyone else (the Dark Forest, from Liu Cixin's Three-Body Problem). ...

March 8, 2026 · 11 min · Austin T. O'Quinn

Stable Equilibria of AGI — and AI Rights as a Solution to Power

Prior reading: Game Theory for AI Safety | Decision Theory for AI Safety | Competitive Dynamics

The Question

If AGI is developed, what stable configurations could the world settle into? Not all equilibria are equally survivable.

Possible Equilibria

Many Competing Systems

Multiple AGI systems operated by different actors (nations, companies). No single dominant system. Closest to today's trajectory.

Stability: Moderate. Competition continues. Arms race dynamics persist. Risk of conflict, but also checks and balances. ...

March 1, 2026 · 3 min · Austin T. O'Quinn

Why AI Policy Is Hard Even When Everyone Agrees

Prior reading: Competitive Dynamics and Safety | P-Hacking and Benchmarks

The Premise

The competitive-dynamics post explains why motivation for AI safety is lacking. This post assumes the opposite: everyone is motivated. Every nation, every company, every researcher genuinely wants to regulate AI well. It's still incredibly hard. Here's why.

The Object You're Regulating Keeps Changing

Capability Jumps Are Unpredictable

Regulation assumes you can define what you're regulating. But AI capabilities change discontinuously. A model goes from "can't do X" to "can do X fluently" between training runs, sometimes between scale thresholds no one predicted. ...

February 18, 2026 · 11 min · Austin T. O'Quinn

Why AI Safety Is Hard to Sell: Competitive Dynamics and the Race to the Bottom

Prior reading: Game Theory for AI Safety | The AI Threat Landscape

The Core Problem

AI safety is short-term costly and long-term valuable. Every actor faces pressure to defect.

International Competition

If Country A regulates AI development and Country B doesn't, Country B gets capabilities first. The perceived cost of falling behind in AI is existential for national security and economic competitiveness. No country wants to be the one that regulated itself out of the race. ...

February 11, 2026 · 3 min · Austin T. O'Quinn