Prior reading: Competitive Dynamics and Safety
Humans and AI Are Limited by Different Things
Human Bottlenecks
- Working memory: ~7 items. Can't hold a full codebase in mind.
- Speed: Slow reading, slow typing, slow context-switching.
- Attention: Serial processor. Can focus on one thing at a time.
- Consistency: Fatigue, boredom, emotion. Performance degrades over a day.
- Bandwidth: Eyes and fingers. We interact with computers through tiny I/O channels.
AI Bottlenecks
- Context window: Large but finite. No persistent memory across sessions without scaffolding.
- Grounding: No physical intuition, no lived experience, no embodied sense of consequence.
- Reliability: Confident and wrong. Hallucinates. Struggles with precise multi-step reasoning over long horizons.
- Agency: No persistent goals, no ability to autonomously decide what to work on next (without scaffolding).
- Verification: Can generate but struggles to verify its own outputs.
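The verification bottleneck is usually worked around with scaffolding: rather than trusting the model to check itself, its output is run against an independent oracle such as a test suite. A minimal sketch of that pattern, with hypothetical function and variable names (this is an illustration of the idea, not any specific system's API):

```python
import os
import subprocess
import sys
import tempfile

def verify_generated_code(code: str, test_code: str) -> bool:
    """Run independently written tests against model-generated code.

    The model generates; a separate, deterministic process verifies.
    Both arguments are hypothetical inputs for illustration.
    """
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "candidate.py")
        with open(path, "w") as f:
            f.write(code + "\n\n" + test_code + "\n")
        # A non-zero exit code means an assertion failed or the code crashed.
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=30
        )
        return result.returncode == 0

generated = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5"
print(verify_generated_code(generated, tests))  # True
```

The point of the design is that verification lives outside the model entirely, so its correctness doesn't depend on the model's ability to judge its own work.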
These are different bottlenecks. Tools optimized for one set actively conflict with the other.
Our Tooling Is Built for Humans
Every tool in the modern stack was designed around human limitations:
- IDEs: Syntax highlighting, code folding, autocomplete — compensate for limited human working memory and slow reading.
- GUIs: Visual interfaces because humans think spatially. AI would be faster with structured APIs.
- File systems: Hierarchical directories because humans need spatial metaphors to organize information. AI could work with flat content-addressed stores or semantic graphs.
- Programming languages: Designed for human readability (variable names, indentation, comments). AI doesn't need calculateTotalPrice() to understand what a function does — it could work with optimized intermediate representations directly.
- Version control: Git's model (diffs, branches, commits) exists because humans need to track changes they can't hold in memory. AI could work with continuous state spaces or full-snapshot histories.
- Documentation: Written for humans who need to build mental models. AI could consume structured specifications directly.
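To make the file-system point concrete, here is a toy content-addressed store in the spirit of Git's object database: content is keyed by the hash of its bytes rather than by a human-chosen path. This is an illustrative sketch of the concept, not a proposal for a specific system.

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: objects are retrieved by the
    hash of their content, not by a human-readable path."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._objects[key] = data  # identical content is stored once
        return key

    def get(self, key: str) -> bytes:
        return self._objects[key]

store = ContentStore()
key = store.put(b"def add(a, b): return a + b")
assert store.get(key) == b"def add(a, b): return a + b"
# The same content always maps to the same key: no directory
# hierarchy to maintain, no naming decisions to make.
assert store.put(b"def add(a, b): return a + b") == key
```

Humans would find this store hard to browse, which is exactly the trade-off: the spatial metaphor exists for us, not for the machine.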
These tools don't just fail to leverage AI strengths — they actively impose human-shaped bottlenecks on AI. An AI using an IDE is like a human doing math with an abacus: it works, but the tool is solving someone else's problem.
The Training Data Problem
Here's where lock-in becomes self-reinforcing:
The Vast Majority of Training Data Is Human-Tool Interaction
- Code written in human-readable languages using human-designed IDEs
- Documentation written for human readers
- Stack Overflow answers explaining concepts to humans
- Git histories of human workflows
AI learns to use tools the way humans use them — because that's all the data shows.
AI-Optimal Tools Don't Exist Yet (So There's No Data)
- No training data for AI interacting with AI-native interfaces
- No examples of AI using representations optimized for its own strengths
- No corpus of "how AI would organize a project if it designed the file system"
You can't learn to use a tool that doesn't exist. And you can't justify building the tool without data showing it would be better.
Data Begets Data
- AI generates outputs using human tools → those outputs become training data → next-generation AI learns to use human tools → generates more human-tool data
- Each cycle reinforces the human-tool paradigm
- The lock-in deepens with every generation of training data
This is a feedback loop with no natural exit.
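The loop can be made quantitative with a toy model. Suppose each training corpus is a mix of human-tool and AI-native interaction data, and each model generation emits new data in the same proportions it was trained on. The mirroring assumption is the deliberate simplification here; the function below is hypothetical.

```python
def next_fraction(ai_native: float, injection: float = 0.0) -> float:
    """One generation of the data feedback loop.

    ai_native: fraction of AI-native data in the current corpus.
    injection: fraction of deliberately created AI-native data added
               per generation (0.0 models the status quo).
    Assumes models emit data in the proportions they were trained on.
    """
    return ai_native * (1 - injection) + injection

frac = 0.0  # essentially no AI-native tooling data exists today
for generation in range(10):
    frac = next_fraction(frac)
print(frac)  # 0.0 — without deliberate injection, the loop never exits
```

Under this model, zero is a fixed point: only deliberate injection of AI-native data (a nonzero `injection`) moves the corpus off it, which is the formal version of "no natural exit."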
Why This Matters
For AI Capability
AI constrained to human tools underperforms relative to its potential. We're measuring AI capability through a human-shaped bottleneck and mistaking the bottleneck for the ceiling.
For Safety
- Mismatched tools may introduce new failure modes: AI forced to work through human interfaces may fail in ways human tool designers never anticipated.
- Monitoring assumptions break: Safety monitoring built for human-tool workflows may miss AI-native failure patterns.
- Evaluation is biased: If we evaluate AI using human-designed tasks with human-designed tools, we're testing how well AI imitates humans — not how well AI performs.
For Competition
The first actor to build AI-native tooling may see a capability jump that breaks existing competitive dynamics. This is relevant to the race dynamics (→ see competitive-dynamics-and-safety post) — tool lock-in is a temporary equalizer that could collapse suddenly.
Is Lock-In Bad?
Arguments for Breaking Lock-In
- AI could be dramatically more productive with native tools
- Human-shaped bottlenecks may be causing some reliability failures (forcing AI into workflows it's not suited for)
- Better tools = better AI = faster progress on everything, including safety
Arguments for Keeping Lock-In
- Human-readable tools mean human-auditable AI work
- If AI uses opaque native tools, we lose the ability to inspect what it's doing
- Lock-in is a natural safety brake — it limits AI capability to human-comprehensible workflows
- Breaking lock-in may accelerate capabilities faster than safety
The Tension
This is a microcosm of the broader alignment problem: the thing that makes AI more capable (native tools) may also make it less controllable (opaque workflows). The thing that keeps AI controllable (human tools) also keeps it artificially limited.
Can We Escape?
Possible Paths
- Hybrid interfaces: Tools that present human-readable views of AI-native representations. Best of both worlds — if it's achievable.
- Gradual migration: AI-assisted tool design where each generation of tools is slightly more AI-native while remaining human-auditable.
- Parallel stacks: Human tools for human oversight, AI-native tools for AI execution. Alignment between the stacks becomes the hard problem.
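The hybrid-interface idea can be sketched as one representation with two projections: a compact machine-facing form and a derived human-readable view. The record format and field names below are hypothetical, chosen only to show the two-view pattern.

```python
import json

# Hypothetical AI-native record: a structured edit, not a text diff.
edit = {
    "op": "replace_node",
    "target": {"node_id": 42},
    "payload": {"kind": "function", "name": "parse_header"},
    "rationale": "unify header parsing across call sites",
}

def machine_view(record: dict) -> str:
    """Compact canonical form for machine consumption and hashing."""
    return json.dumps(record, sort_keys=True, separators=(",", ":"))

def human_view(record: dict) -> str:
    """Readable projection for audit: same information, human layout."""
    return (
        f"{record['op']} on node {record['target']['node_id']}"
        f" ({record['payload']['kind']} {record['payload']['name']}):"
        f" {record['rationale']}"
    )

print(human_view(edit))
# replace_node on node 42 (function parse_header): unify header parsing across call sites
```

The key property is that the human view is derived, never authored: auditors read a projection of exactly what the AI operates on, so oversight doesn't require the AI to work in a human-shaped format.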
What Probably Happens
Lock-in persists until someone builds an AI-native tool that produces a dramatic enough capability gain to justify the ecosystem cost of switching. Then it happens fast, unevenly, and before the safety implications are understood — the usual pattern.