Sep 07, 2025
After thousands of AI-assisted commits, I discovered something surprising: the best AI framework is the one that doesn't feel like a framework at all.
I write markdown files that tell AI what to do. I use bash commands to make things happen. That's it.
gh issue create --title "Add user auth" --body "..."
gh pr create --title "feat: implement OAuth"
psql postgresql://postgres:postgres@127.0.0.1:54322/postgres
yarn test:e2e
No YAML orchestration platforms. No JSON schema configurations. No enterprise AI frameworks. Just the same tools I've used for a decade, organized systematically.
Last month, I spent two hours debugging a feature that worked flawlessly. The code was clean. The tests passed. There was just one problem.
The AI had perfectly implemented the wrong thing.
It had built exactly what I asked for, not what I needed. That's when I realized: AI doesn't need complex frameworks. It needs clear requirements.
Now every piece of work traces back to three files:
docs/specs/
├── requirements.md # What must be true
├── design.md # How we'll do it
└── tasks.md # Who does what when
Each requirement gets an ID (R#1, R#2...). Each design decision references requirements (D#1 implements R#3). Each task traces back (T#1 delivers D#2).
This isn't bureaucracy. It's breadcrumbs.
When I'm five layers deep debugging, I can trace back to the original requirement. When Claude is helping me code, it cites exactly which requirement it's implementing. The AI can't bullshit its way through numbered requirements. Either T#1 delivers D#2 or it doesn't.
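Because the IDs are plain text, the chain can even be linted mechanically. A minimal sketch, assuming task lines in tasks.md look like "T#1 delivers D#2 (R#3)" (the exact phrasing is an assumption, not a fixed format):

```shell
# trace_check: print any task line that fails to cite a design item.
# A sketch; assumes tasks.md uses lines like "T#1 delivers D#2 (R#3)".
trace_check() {
  grep '^T#' "$1" | grep -v 'D#[0-9]' || true
}
```

Run it in CI or a pre-commit hook: any output means a task is untraced.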
I got tired of typing the same GitHub CLI commands. So I wrapped them in markdown:
---
allowed-tools: Bash, Read, Write
description: Create a GitHub issue with proper structure
---
# Create Issue: $ARGUMENTS
Running: gh issue create --title "$1" --body "..."
Now /create-issue does all the boilerplate. /pr creates PRs with the right format. /weekly-plan generates my Monday morning todo list.
The compound effect is real. The first slash command felt silly. The tenth one felt smart. Now I have thirty, and they've saved me weeks of typing.
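Under the hood, each command is just string assembly around gh. A hedged sketch of the kind of boilerplate /create-issue wraps (the helper name and section headings are hypothetical placeholders, not my actual template):

```shell
# build_issue_body: assemble a structured issue body that a /create-issue
# command could feed to gh. Headings are hypothetical placeholders.
build_issue_body() {
  printf '## Requirement\n%s\n\n## Acceptance\n%s\n' "$1" "$2"
}

# usage (gh must be installed and authenticated):
#   gh issue create --title "Add user auth" --body "$(build_issue_body 'R#1' 'A#1')"
```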
Before git worktrees, my flow looked like this: stash changes, checkout branch, remember what I was doing, lose context, stash pop the wrong thing, cry.
Now:
~/src/myapp/ # main branch
~/src/myapp-feat-auth/ # feature branch
~/src/myapp-fix-bug/ # hotfix branch
Each directory is a full checkout. I can run tests in one while coding in another. Claude can read files from main while I'm editing a feature branch. No stashing. No context loss. No tears.
The secret: when I create a new worktree, I immediately make it ready to run:
git worktree add ../myapp-feat-auth feat/auth
cd ../myapp-feat-auth
cp ../myapp/.env.local .
cp -r ../myapp/certs .
yarn install
This setup automation seems tiny. But it removes friction at exactly the moment when friction kills momentum.
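Wrapped as a function, the whole ritual becomes one command. A sketch under my layout assumptions (sibling worktree directories next to the main checkout, .env.local and certs/ in the main checkout, yarn as the package manager; the function name is mine, not a git built-in):

```shell
# new_worktree <main-checkout> <branch>: create a sibling worktree and make
# it runnable. A sketch; assumes .env.local and certs/ live in the main
# checkout and yarn is the package manager.
new_worktree() {
  local main branch dir
  main=$(cd "$1" && pwd)           # absolute path, so the sibling lands next to it
  branch="$2"
  dir="${main}-${branch//\//-}"    # feat/auth -> myapp-feat-auth
  git -C "$main" worktree add "$dir" -b "$branch"
  cp "$main/.env.local" "$dir/" 2>/dev/null || true
  [ -d "$main/certs" ] && cp -r "$main/certs" "$dir/" || true
  (cd "$dir" && yarn install)
}
```

After `new_worktree ~/src/myapp feat/auth` finishes, cd into the new directory and start working.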
I scatter CLAUDE.md files throughout my codebase like breadcrumbs for the AI:
# CLAUDE.md — Spec Steering
Purpose: keep the model aligned with the three-file spec.
## IDs and traceability
- Requirements: R# (and N# for non-functional, A# for acceptance)
- Design items: D# that satisfy R#
- Tasks: T# that deliver R# via D#
Always cite the IDs you use.
Before CLAUDE.md files, I'd spend half my time correcting the AI's assumptions. Now it self-corrects. The files aren't magic - they're consistent context that travels with the code. It's like having a senior engineer looking over your shoulder, except the senior engineer is a markdown file that never gets tired or grumpy.
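To make sure the breadcrumbs actually cover the codebase, a quick check can list directories that lack one. A sketch; the one-level-deep layout is an assumption about my repo, not a convention:

```shell
# missing_claude_md <dir>: print immediate subdirectories without a CLAUDE.md.
# A sketch; assumes context files sit one level deep under <dir>.
missing_claude_md() {
  local d
  for d in "$1"/*/; do
    [ -e "${d}CLAUDE.md" ] || printf '%s\n' "${d%/}"
  done
}
```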
The scariest thing about AI-assisted database work?
It can generate perfectly valid SQL that does exactly the wrong thing.
Now my rule is simple: AI never writes database changes without first seeing the current state.
psql postgresql://postgres:postgres@127.0.0.1:54322/postgres
\d users # Check the actual schema
# Now AI can write the migration
It's measure twice, cut once - except the measurement is a schema query and the cut is a migration that could destroy everything.
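The rule can even be enforced mechanically: refuse to touch a migration unless the schema snapshot is newer than it. A sketch with hypothetical file names (the snapshot would come from the `\d` output above or `pg_dump --schema-only`):

```shell
# require_fresh_schema <snapshot> <migration>: fail unless the schema
# snapshot was taken after the migration was last edited.
# File names are hypothetical; pair this with a pg_dump step.
require_fresh_schema() {
  local snapshot="$1" migration="$2"
  if [ ! -f "$snapshot" ] || [ "$migration" -nt "$snapshot" ]; then
    echo "refusing: dump the current schema before applying $migration" >&2
    return 1
  fi
}
```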
After thousands of commits, these aren't rules I read in a book. They're scar tissue from real failures.
These guardrails aren't restrictions. They're what enable speed. When you know the build won't break, you move faster. When tests catch issues locally, you move faster.
Someone asked me what I do after reading about Claude Code Framework Wars. The article presents a menu: task management, guidance strategies, agent coordination, session management, tool access. Pick your scale. Choose your implementation.
I picked all of them.
But I implemented them using tools that have existed for decades. Markdown. Bash. Git.
The Framework Wars article says "AI works best when you give it structure." Absolutely right. The structure is what makes AI predictable and valuable. Without it, you're just gambling on each generation.
My implementation proves the thesis. The patterns work. The structure matters. The only choice is how you implement it.
Some developers will choose enterprise platforms. Others will build custom orchestration. I chose to use the tools I already had, systematically.
That's what I do. I'm fighting in the Framework Wars too - just with markdown files and bash scripts as my weapons of choice. And after thousands of commits, I can tell you: sometimes the best framework is the one that doesn't feel like a framework at all.