The Bootstrap Paradox: Building the Thing That Builds the Thing
This week I built an AI system that writes its own modules, fought Node version hell across three machines, and discovered why cascading context beats monolithic prompts every time.
Welcome to the first issue of The Builder's Log. Every week I share what's actually happening in my lab... not theory, not hype, just the real work of building agentic AI systems.
This week was a big one. Let's get into it.
This Week in the Lab
I spent the last seven days neck-deep in three parallel projects that all share one theme: how do you build AI systems that get smarter over time without you babysitting them?
The short answer? You don't build one big thing. You build small things that build bigger things. I've been calling this the "bootstrap paradox" because every layer needs the previous layer to exist first.
Here's what that looked like in practice this week.
Deep Dive: Cascading Context Architecture
The biggest breakthrough was getting a cascading memory system working in Claude Code. Instead of one massive prompt file, I built a hierarchy:
- ~/.claude/CLAUDE.md (global preferences)
- ~/Projects/CLAUDE.md (project-level context)
- ./CLAUDE.md (repo-specific rules)
All three files merge at session start. The AI gets global preferences plus project context plus the specific rules for whatever repo it's in.
Why does this matter? Because a monolithic prompt file is a nightmare to maintain. It gets long. It gets stale. Different projects need different instructions but share common DNA.
The cascading approach means I write each rule exactly once, at the right level. My "never use em dashes" preference lives at the global level. My "use Bun instead of Node" rule lives at the project level. My "this API uses snake_case" rule lives at the repo level.
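To make that concrete, here's a sketch of what the three files might contain, using the examples above (contents are illustrative, not my actual configs):

~/.claude/CLAUDE.md
- Never use em dashes in prose.

~/Projects/CLAUDE.md
- Use Bun instead of Node for running scripts.

./CLAUDE.md
- This API uses snake_case for all field names.

Each file stays tiny, and each rule has exactly one home.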
The result? Every new Claude Code session starts with exactly the context it needs. Nothing more, nothing less.
The pattern works for any AI system. If you're building agents that operate across multiple domains, stop trying to cram everything into one context window. Layer it. Let each layer override the one above it when needed.
Tools and Techniques
Three things that saved me hours this week:
1. The nvm Shell Fix
If you use Claude Code (or any AI coding tool) with Node.js projects, you've probably hit this: the AI spawns a non-interactive shell, which skips your .bashrc or .zshrc, so the nvm shell function never loads and the shell falls back to whatever Node happens to be on the PATH. Native modules compile against the wrong version. Everything breaks.
The fix is embarrassingly simple:
source ~/.nvm/nvm.sh && nvm use && npm run dev
Put that pattern in your CLAUDE.md (nvm use reads the version pinned in the repo's .nvmrc) and your AI will never grab the wrong Node version again. I lost two hours to this before writing it down. Don't be me.
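If you want to bake it into the context system from the deep dive, a one-line rule at the repo level is enough. The wording here is mine, adapt it freely:

This repo pins Node via .nvmrc. Before any node, npm, or npx command, run: source ~/.nvm/nvm.sh && nvm use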
2. Docker Network Bridging
When you're running multiple Docker Compose projects that need to talk to each other (like a Caddy reverse proxy and your app), create an external network:
docker network create web
Then reference it as an external network in both compose files. Your proxy can reach your app by service name. No hardcoded IPs. No host networking hacks.
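A minimal sketch of the two compose files, assuming a Caddy proxy and a single app service (service names and ports are placeholders):

# proxy/docker-compose.yml
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    networks:
      - web
networks:
  web:
    external: true

# app/docker-compose.yml
services:
  app:
    build: .
    networks:
      - web
networks:
  web:
    external: true

With both stacks joined to the external web network, Caddy can proxy to http://app:3000 (or whatever port your app listens on) by service name.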
3. Screenpipe for Activity Tracking
I built a Python script that queries Screenpipe's SQLite database directly to gather my coding activity. It watches which apps I use, what technologies appear on screen, and what errors I hit. Then it generates a structured report.
This newsletter was literally written from that data. The AI reads what I actually did this week and helps me write about it. No manual note-taking required.
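For the curious, here's a minimal sketch of that query layer. The database path, table names, and column names are assumptions based on a typical Screenpipe install; inspect your own schema first with sqlite3 ~/.screenpipe/db.sqlite ".schema" before trusting any of this:

import sqlite3
from collections import Counter
from pathlib import Path

# Assumed default location; adjust if your install differs.
DB_PATH = Path.home() / ".screenpipe" / "db.sqlite"

def app_usage_last_week(db_path=DB_PATH):
    """Tally which apps showed up in OCR'd frames over the past 7 days."""
    con = sqlite3.connect(db_path)
    # Table and column names (ocr_text, frames, app_name) are assumptions;
    # verify them against your local schema before running.
    rows = con.execute(
        """
        SELECT o.app_name
        FROM ocr_text AS o
        JOIN frames AS f ON f.id = o.frame_id
        WHERE f.timestamp >= datetime('now', '-7 days')
        """
    ).fetchall()
    con.close()
    return Counter(app for (app,) in rows if app)

if __name__ == "__main__":
    for app, frames in app_usage_last_week().most_common(10):
        print(f"{app}: {frames} frames")

From a tally like that, plus the raw OCR text, it's a short hop to a structured weekly report.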
What I Learned
The biggest lesson this week: organizational patterns matter more than raw capability.
I watched the same AI model (Claude Opus 4.6) perform dramatically differently based on how I organized its context. With a flat, everything-in-one-file approach, it would lose track of project-specific rules halfway through a session. With the cascading approach, it nailed them every time.
This applies beyond AI. When I was running operations at Amazon, the teams that outperformed weren't the ones with the best individual contributors. They were the ones with the clearest organizational structure. Everyone knew their lane. Information flowed predictably.
Same principle, different domain. Your AI's architecture matters more than its model size.
Try This
Pick one project where you use AI assistance. Create a simple two-level context system:
- A global file with your universal preferences (coding style, communication tone, tools you prefer)
- A project file with specific rules for that codebase
Load both at the start of every session. Track whether the AI follows your preferences more consistently.
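If your tool doesn't layer context files natively, the lowest-tech version works fine: concatenate them and feed the result in at the start of the session. A sketch, with placeholder file names:

cat ~/.config/ai/global.md ./PROJECT.md > /tmp/session-context.md

Paste /tmp/session-context.md at the top of your first message, or point the tool at it directly. Claude Code users get this for free, since it picks up CLAUDE.md files automatically.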
I bet you'll see the difference in the first session. And once you do, you'll never go back to one-size-fits-all prompts.
That's it for this week. Next time I'll dig into the agent delegation patterns I've been building... how to decide which tasks go to specialized agents versus generalist ones, and why getting that wrong costs you more than just tokens.
Until then, keep building.
Sean Patterson
CrossGen AI