seL4 as a Foundation for Trustworthy AI Containment

Prior reading: Transistor Timing Side Channels | What Are Formal Methods?

The Containment Problem

If you have an AI system that might be dangerous, you want to run it in a box where it can't affect the outside world except through controlled channels. Simple in concept. The problem: how do you prove the box has no leaks?

Software containment is hard because the attack surface is enormous. Operating systems have millions of lines of code, thousands of syscalls, and a long history of privilege escalation exploits. A sufficiently capable AI probing for a way out of a sandbox is pitted against software written by humans who couldn't anticipate every escape path. ...

March 26, 2026 · 7 min · Austin T. O'Quinn

Transistor Timings as a Side Channel

Side Channels: The Basics

A side channel is any unintended information leakage from a computation's physical implementation. You're not breaking the algorithm; you're watching the hardware run the algorithm and inferring secrets from physical observables. The classic side channels:

- Timing: how long a computation takes
- Power: how much energy it draws
- Electromagnetic emanation: what RF signals the chip emits
- Acoustic: what sounds the hardware makes (yes, really)
- Cache behavior: which memory lines are accessed

Transistor-Level Timing

Why Timing Varies

A transistor's switching time depends on physical conditions: ...

March 25, 2026 · 4 min · Austin T. O'Quinn

AI as the Commodity of the Future: Supply Chains, Control, and Global Security

The Thesis

Oil was the commodity of the 20th century. Semiconductors are the commodity of the early 21st. AI, the capability itself and not just the hardware, is becoming the commodity that everything else depends on. And like oil before it, control of the AI supply chain is becoming a question of global security.

The AI Supply Chain

AI capability depends on a stack, and each layer has chokepoints: ...

March 24, 2026 · 4 min · Austin T. O'Quinn

Probabilistic Security: Great Against Accidents, Useless Against Attackers

Prior reading: Jailbreaking | Reachability Analysis

The Setup

Suppose a model has a catastrophic failure mode: some input that causes it to produce a truly dangerous output. And suppose the probability that a random prompt triggers this failure is $10^{-100}$. Is this safe?

It Depends Entirely on the Threat Model

Good-Faith User (Random Inputs)

If your users are cooperative (they're trying to use the model correctly and might occasionally stumble into bad prompts by accident), then $10^{-100}$ is absurdly safe. No one will ever randomly type the one prompt in $10^{100}$ that breaks the model. The sun will burn out first. ...
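The "sun will burn out first" claim can be checked with back-of-the-envelope arithmetic. The query rate and solar lifetime below are my own illustrative assumptions, not figures from the post:

```python
# How long would a random fuzzer need to hit a 1e-100 failure?
FAILURE_PROB = 1e-100
QUERIES_PER_SEC = 1e9            # assumption: an absurdly fast random fuzzer
SECONDS_PER_YEAR = 3.15e7
SUN_LIFETIME_YEARS = 5e9         # rough remaining main-sequence lifetime

expected_queries = 1 / FAILURE_PROB                  # ~1e100 random prompts
expected_years = expected_queries / QUERIES_PER_SEC / SECONDS_PER_YEAR
print(f"{expected_years:.1e} years")                 # ~3.2e83 years
print(f"{expected_years / SUN_LIFETIME_YEARS:.0e}x the sun's remaining life")
```

The catch, as the post's title says, is that this arithmetic only applies to *random* inputs: an adversary searching the input space deliberately is not sampling uniformly, so the $10^{-100}$ figure tells you nothing about how quickly they find the failure.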

February 4, 2026 · 3 min · Austin T. O'Quinn