You found the hidden section. These are less-polished thoughts on AI security — the kind of thing that's worth writing down but maybe not worth putting in the nav bar.
seL4 as a Foundation for Trustworthy AI Containment
Prior reading: Transistor Timing Side Channels | What Are Formal Methods?

The Containment Problem

If you have an AI system that might be dangerous, you want to run it in a box where it can't affect the outside world except through controlled channels. Simple in concept. The problem: how do you prove the box has no leaks?

Software containment is hard because the attack surface is enormous. Operating systems have millions of lines of code, thousands of syscalls, and a long history of privilege-escalation exploits. A sufficiently capable AI searching for a way out of a sandbox is pitted against software written by humans who couldn't anticipate every escape path. ...