Jul 01, 2025
TL;DR: Claude Code hooks let you inject shell commands at key points in your development workflow. I built two simple hooks that enforce branch protection and run code quality checks automatically. It's like having a pair programming partner who never gets tired of reminding you about the basics.
I've pretty much switched to Claude Code exclusively for development work. The story of how I got here is a bit circuitous: a friend went to a conference where Factory AI was announced and asked me to kick the tires (I've been a Cursor enthusiast since day one). I tried Factory and liked some things about it, but mostly it sparked an idea: I'd tried Claude Code once before and it hadn't stuck. Maybe it deserved another shot?
This time, I was hooked. There's something about the contextual awareness and fluid conversation that makes the entire development experience feel more natural than traditional AI coding assistants.
So when I saw Claude Code hooks hit the front page of Hacker News today, I immediately knew I wanted to build a couple. The concept struck me as brilliant: shell commands that execute at specific points during Claude's workflow, like middleware for your AI pair programming session.
Our team has some basic development hygiene rules that are easy to forget in the flow of coding:

- Never commit or push directly to `main`; all changes go through feature branches.
- Run type checking and linting after every change, not just when CI complains.
These aren't complex problems, but they create friction. You either interrupt your flow to remember the checklist, or you forget and get reminded by CI failures or code review comments.
I built two hooks that automate exactly these pain points. The beauty of Claude Code hooks is that they run automatically at the right moments. No additional mental overhead required.
The first hook prevents the annoying scenario of accidentally committing to main:
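Here's a simplified sketch of the hook. The script path and the protected-branch list are illustrative; the stdin payload and the `decision`/`reason` output follow the hooks JSON protocol, where the tool call arrives as JSON on stdin and a `block` decision on stdout stops the action:

```python
#!/usr/bin/env python3
"""Branch-protection hook (sketch): blocks git commit/push on protected branches.

Assumes the hooks protocol: the tool call arrives as JSON on stdin, and a
{"decision": "block", "reason": ...} object printed to stdout blocks the call.
"""
import json
import re
import subprocess
import sys

PROTECTED_BRANCHES = {"main", "master"}  # adjust to your repo's conventions


def decide(command, branch):
    """Return a block decision for git commit/push on a protected branch, else None."""
    if re.match(r"^git\s+(commit|push)\b", command) and branch in PROTECTED_BRANCHES:
        return {
            "decision": "block",
            "reason": (
                f"Direct commits to '{branch}' are blocked. "
                "Create a feature branch first: git checkout -b feature/<name>"
            ),
        }
    return None  # no output means the tool call proceeds normally


if __name__ == "__main__":
    raw = sys.stdin.read()
    if raw.strip():  # hook invocations always receive a JSON payload
        payload = json.loads(raw)
        command = payload.get("tool_input", {}).get("command", "")
        branch = subprocess.run(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"],
            capture_output=True, text=True,
        ).stdout.strip()
        verdict = decide(command, branch)
        if verdict:
            print(json.dumps(verdict))
```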
This runs on `PreToolUse` for any `Bash` command. When Claude tries to run `git commit` or `git push` while on `main`, the hook blocks the action and explains the proper workflow. The key insight here is Claude Code's structured JSON output: instead of just failing with an error, the hook returns a helpful `reason` that gets displayed to both Claude and me.
The second hook ensures code quality checks happen automatically after file edits:
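A sketch of that hook follows. The check commands are stand-ins (`npx tsc --noEmit` and `npx eslint .` here); swap in whatever your project actually uses:

```python
#!/usr/bin/env python3
"""Post-edit quality hook (sketch): runs type checking and linting after edits.

The check commands below are stand-ins; swap in your project's own. Printing a
{"decision": "block", "reason": ...} object feeds the failure output back to
Claude so it can fix the errors in the same conversation.
"""
import json
import subprocess
import sys

CHECKS = [
    ["npx", "tsc", "--noEmit"],  # type-check the project without emitting files
    ["npx", "eslint", "."],      # lint the whole tree
]


def run_checks(checks=CHECKS):
    """Run each check in order; return a block decision carrying the first failure."""
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return {
                "decision": "block",
                "reason": f"`{' '.join(cmd)}` failed:\n{result.stdout}{result.stderr}",
            }
    return None  # all checks passed; the edit stands


if __name__ == "__main__":
    raw = sys.stdin.read()
    if raw.strip():  # hook invocations always receive a JSON payload
        verdict = run_checks()
        if verdict:
            print(json.dumps(verdict))
```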
This runs on `PostToolUse` after any `Write`, `Edit`, or `MultiEdit` command. It runs TypeScript checking and linting, and blocks if there are errors. This finally brings Claude Code up to parity with Cursor: the agent edits a file, gets immediate feedback, and can react. The structured JSON output means Claude gets clear feedback about what went wrong and can fix issues in the same conversation.
The hooks are configured in `.claude/settings.json` with simple matchers:
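Mine looks roughly like this (the hook script paths are whatever you named your files):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/branch-protection.py" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write|Edit|MultiEdit",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/quality-check.py" }
        ]
      }
    ]
  }
}
```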
The matcher syntax is elegant—you can target specific tools with regex patterns, and the hooks run in the order they're defined.
The real value isn't in the individual checks—you could run these manually. It's in the automatic, contextual feedback loop. When Claude edits a file and immediately gets feedback about type errors, it can fix them in the same conversation. When it tries to commit to main and gets blocked, it naturally switches to creating a feature branch.
It's like having an always-on pair programming partner who remembers the boring stuff so you can focus on the interesting problems.
What surprised me most was how seamlessly these hooks integrate into the development flow. They don't feel like interruptions—they feel like extensions of Claude's capabilities. When a hook blocks an action, Claude treats it as normal feedback and adjusts accordingly.
This got me thinking about what other workflow automation might be possible. Secret scanning before commits? Automatic test running for changed files? Smart deployment checks? The hooks system feels like it has room to grow into something really powerful.
The hooks I built solve immediate pain points, but they hint at something larger: the possibility of encoding your entire development workflow as a set of guardrails and automations that travel with your codebase. Instead of documenting processes in a README that everyone forgets to read, you can embed them as executable constraints that enforce themselves.
When your AI pair programming partner knows and enforces your team's standards automatically, it raises the floor for code quality across your entire team.
I'm curious what hooks the community will build. The system is flexible enough to support everything from simple linting to complex CI/CD integration. The real test will be whether hooks evolve into genuinely useful workflow automation or just become another configuration layer to maintain.
For now, I'm happy with these two simple hooks. They solve real problems I was experiencing and make my Claude Code sessions feel more polished and reliable.
What hooks would make sense for your workflow? I'd love to see what problems other teams are solving with this system.