Why AI Policy Is Hard Even When Everyone Agrees

Prior reading: Competitive Dynamics and Safety | P-Hacking and Benchmarks

The Premise
The competitive-dynamics post explains why motivation for AI safety is lacking. This post assumes the opposite: everyone is motivated. Every nation, every company, every researcher genuinely wants to regulate AI well. It's still incredibly hard. Here's why.

The Object You're Regulating Keeps Changing
Capability Jumps Are Unpredictable
Regulation assumes you can define what you're regulating. But AI capabilities change discontinuously. A model goes from "can't do X" to "can do X fluently" between training runs, sometimes at scale thresholds no one predicted. ...

February 18, 2026 · 11 min · Austin T. O'Quinn

Why AI Safety Is Hard to Sell: Competitive Dynamics and the Race to the Bottom

Prior reading: Game Theory for AI Safety | The AI Threat Landscape

The Core Problem
AI safety is short-term costly and long-term valuable. Every actor faces pressure to defect.

International Competition
If Country A regulates AI development and Country B doesn't, Country B gets capabilities first. The perceived cost of falling behind in AI is existential for national security and economic competitiveness. No country wants to be the one that regulated itself out of the race. ...
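The defection pressure described in this excerpt is the structure of a one-shot prisoner's dilemma. A minimal sketch, with invented illustrative payoffs (only their ordering matters, and the labels "regulate"/"defect" are assumptions for illustration, not from the post):

```python
# Illustrative payoff matrix for the regulation game sketched above.
# Payoffs are hypothetical; the point is the ordering: not regulating
# ("defect") pays more regardless of what the other country does.

# Payoffs to (Country A, Country B) for each (A's choice, B's choice).
PAYOFFS = {
    ("regulate", "regulate"): (3, 3),  # shared safety, shared pace
    ("regulate", "defect"):   (0, 4),  # A regulates itself out of the race
    ("defect",   "regulate"): (4, 0),  # A races ahead
    ("defect",   "defect"):   (1, 1),  # race to the bottom
}

def best_response(opponent_choice):
    """A's best reply to B's fixed choice (by symmetry, same for B)."""
    return max(["regulate", "defect"],
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Defection is a dominant strategy: it is the best reply to either choice,
# so mutual defection is the unique equilibrium even though both countries
# would prefer mutual regulation (3, 3) over (1, 1).
for other in ("regulate", "defect"):
    print(f"If the other country chooses {other!r}, best reply: {best_response(other)!r}")
```

This is why motivation alone doesn't solve the problem: the incentive to defect holds even when both players see the trap.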

February 11, 2026 · 3 min · Austin T. O'Quinn