Aug 11, 2025
I killed our weekly triage meeting last month. Three hours compressed to five minutes. But here's the thing—it took me six failed attempts to get there.
The breakthrough wasn't making the AI smarter. It was making the task more structured. This is what context engineering actually looks like—messy, iterative, and focused on constraints rather than capabilities.
Let me show you what it really takes to achieve a 36x productivity gain with AI. Spoiler: it's not about the AI at all.
Are you confident you're working on the right thing at this very moment? How do you know?
Most startups face a brutal choice: spend 3 hours weekly in soul-crushing triage meetings, or skip triage entirely and hope you're building the right features. Both options are expensive.
Here's my confession: I'm allergic to meetings. I literally ask, "What's the agenda? Can we cut this short?" I decline almost everything. I'm the office curmudgeon about this.
So obviously, I had to be the one to automate triage.
Let's do the real math.
Corporate thinking: 8 people × $150/hour × 3 hours = $3,600
Startup reality: 8 people × $1,000/hour of value creation × 3 hours = $24,000+
Here's where that $1,000/hour comes from: SaaS benchmarks put revenue per employee at $130k–$250k depending on scale (OpenView, SaaS Capital). Public cloud companies currently trade at ~8–9× revenue multiples (BVP Cloud Index). Back-of-the-envelope: an 8.5× multiple on $200k revenue per employee, spread over 2,000 work hours, comes to ~$850/hour of enterprise value. Top-quartile teams push past $1,200/hour.
So an 8-person, 3-hour triage meeting? That's $24k–$30k of foregone value creation. Every. Single. Week.
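If you want to kick the tires on that math, here's the whole calculation in a few lines of Python. The inputs are the benchmark figures above, nothing else:

```python
# Back-of-the-envelope: enterprise value created per employee-hour.
revenue_per_employee = 200_000   # midpoint of the $130k-$250k benchmark range
revenue_multiple = 8.5           # ~8-9x for public cloud companies
work_hours_per_year = 2_000

value_per_hour = revenue_per_employee * revenue_multiple / work_hours_per_year
meeting_cost = 8 * 3 * value_per_hour  # 8 people, 3 hours
print(f"${value_per_hour:,.0f}/hour -> ${meeting_cost:,.0f} per triage meeting")
# $850/hour -> $20,400 per triage meeting (and that's the conservative case)
```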
But here's the real question: Does your CEO even come to triage? How do you ensure alignment with strategy? The dev manager who attended? Does he remember what the CEO said on Friday after his weekend at the cottage?
Before I tell you about my six failed attempts, let me share what Maria D'Angelo taught me years ago about decision fatigue.
I'd complained to Maria once about Lever only letting me review 25 resumes at a time. "That's so inefficient," I said. "I could power through 100 if they'd let me."
Maria, with her PhD in cognitive psychology, just smiled. "Cameron, after 25 decisions, your quality drops to near zero. You think you're being efficient, but you're just making bad decisions faster."
The research on cognitive fatigue is insane. Judges give favorable rulings 65% of the time at the start of sessions. By the end? Nearly 0%. We literally get worse at decisions the more we make.
This insight would save my triage automation later. But first, I had to fail six times.
I asked Claude Code for help, pointed it at the docs, and built a simple slash command:
/triage
The AI would fetch issues and prioritize them. Except it kept missing issues. Out of 25 items, it would process maybe 15. Sometimes 18. Never all 25.
The model was being lazy. Or creative. It would skip around, summarize, occasionally hallucinate issues that weren't even in triage.
But the bigger problem? It had no context for our strategic goals. Like having an intern who'd never attended a strategy meeting making critical decisions.
I tried markdown drafts. I tried better prompting. I discovered Claude Code's "bang syntax" through a tweet from Toby at Shopify—it loads context before the AI starts processing. Each approach got slightly better, but the AI still couldn't reliably process every issue. Even with all 25 issues clearly in context, it would skip 3-4 of them.
"Oh yes, let me add that one," it would say when prompted.
Maddening.
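For the record, here's roughly what that stage looked like. Claude Code slash commands are markdown files in .claude/commands/, and the bang syntax shells out and injects the command's output into context before the model starts. This is a sketch, not our actual command file; the gh query and the allowed-tools line are illustrative:

```markdown
<!-- .claude/commands/triage.md (illustrative sketch) -->
---
allowed-tools: Bash(gh issue list:*)
---

## Context

Open triage items, loaded before the model runs:
!`gh issue list --label triage --limit 25 --json number,title`

## Task

Recommend a priority and assignee for every issue above.
```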
Here's when things clicked. What if the problem wasn't the model's capability, but the format?
CSV. A spreadsheet. The most boring possible format.
But here's the thing about CSV: you can't skip a row. Row 17 exists between row 16 and row 18. The model HAS to fill every cell or leave it obviously blank.
```csv
issue_number,title,priority_recommendation,assignee_recommendation,rationale
3847,"Fix auth timeout","","",""
3848,"Update docs","","",""
```
Empty cells screaming to be filled. The model couldn't skip them without it being obvious.
Suddenly, 100% coverage. Every. Single. Issue.
We went from ~88% coverage to 100%. Not through better prompting. Through better structure.
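Generating that skeleton is deliberately boring. Here's a minimal sketch, assuming the gh CLI and a triage label; the file name triage.csv is illustrative:

```python
import csv
import json
import subprocess

# Fetch open triage items (assumes the gh CLI is installed and authenticated).
raw = subprocess.run(
    ["gh", "issue", "list", "--label", "triage", "--limit", "25",
     "--json", "number,title"],
    capture_output=True, text=True, check=True,
).stdout
issues = json.loads(raw)

# One row per issue, recommendation cells left deliberately empty.
# A blank cell in row 17 is impossible to miss; a skipped issue is not.
with open("triage.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["issue_number", "title", "priority_recommendation",
                     "assignee_recommendation", "rationale"])
    for issue in issues:
        writer.writerow([issue["number"], issue["title"], "", "", ""])
```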
We're a pre-PMF, default-dead startup. EVERY decision about what to work on is mission critical.
I gave our strategic advisor agent (trained on CEO transcripts and strategy docs) direct access to edit the CSV:
```csv
issue_number,priority_recommendation,strategic_feedback
3471,P3,"BRUTAL TRUTH: Workspace naming consistency does NOT move users from 'would use for free' to 'take my money'."
3528,P1,"CITATION REJECTION BLOCKER: This directly impacts evidence weighting - core conviction deliverable."
```
This was like having our CEO review every triage decision. Except available 24/7 with brutal honesty about strategic alignment.
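Mechanically, the advisor's pass is just another edit to the same file. Here's a sketch of the merge step, assuming the agent writes its verdicts to a separate advisor.csv keyed by issue number; both file names are hypothetical:

```python
import csv

# Hypothetical layout: the advisor agent writes verdicts to advisor.csv;
# this folds them back into the main triage sheet, keyed by issue number.
with open("advisor.csv", newline="") as f:
    feedback = {row["issue_number"]: row for row in csv.DictReader(f)}

with open("triage.csv", newline="") as f:
    reader = csv.DictReader(f)
    fieldnames = reader.fieldnames
    rows = list(reader)

for row in rows:
    verdict = feedback.get(row["issue_number"])
    if verdict:
        # The advisor can override priority and append its rationale.
        row["priority_recommendation"] = verdict["priority_recommendation"]
        row["rationale"] = verdict["strategic_feedback"]

with open("triage.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```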
Remember Maria's insight about decision fatigue? When I hit GitHub's API limit (100 items), my instinct was to implement pagination. Then I remembered her lesson.
I set the default to 25 items:
```bash
BATCH_SIZE=${1:-25}  # Cognitive load management
```
We can now triage more frequently in smaller batches. Decision quality stays high. The 5-minute sessions are energizing, not exhausting.
The constraint became the feature.
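The batching itself is a few lines. A sketch of the idea; the oldest-first ordering is my assumption, not a requirement:

```python
# Cap each triage session at 25 decisions: quality over throughput.
BATCH_SIZE = 25

def next_batch(issues: list[dict], batch_size: int = BATCH_SIZE) -> list[dict]:
    """Take the oldest items first; everything else waits for the next run."""
    return sorted(issues, key=lambda issue: issue["number"])[:batch_size]
```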
Let's talk about what 36x really means:
Before: 8 people × 3 hours = 24 person-hours of triage every week.
After: 1 person × 5 minutes ≈ 0.083 person-hours of review every week.
Do the math: 24 person-hours ÷ 0.083 person-hours ≈ 289x reduction in person-hours. But I promised you 36x, and that's what matters: 3 hours down to 5 minutes is a 36x time compression.
The real gain? $30,000 weekly opportunity cost returned to building instead of talking. $1.5M per year.
The entire workflow is one command:
/triage
The command loads issues into CSV, runs parallel analysis, applies strategic feedback, and presents 25 items for review. I spend 5 minutes checking the AI's work, not doing the work.
When I say "execute," it updates GitHub via GraphQL, assigns developers, and documents every decision:
```python
# Simplified execution
mutation = f'''mutation {{
  updateProjectV2ItemFieldValue(
    input: {{ projectId: "{id}", itemId: "{item}", ... }}
  ) {{ projectV2Item {{ id }} }}
}}'''

comment_body = f'''## 🔍 Triage Analysis
**Priority**: {priority}
**Strategic Context**: {strategic_feedback}
_Triaged via CSV-based systematic analysis_'''
```
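For the curious, the unsimplified call might look something like this. updateProjectV2ItemFieldValue is GitHub's real ProjectV2 mutation, but treat the rest as a sketch: the IDs are placeholders you'd fetch from a fields query first, and the token is assumed to live in GITHUB_TOKEN:

```python
import os
import requests

# Placeholder IDs: in practice you query the project's fields to get these.
project_id, item_id = "PVT_xxx", "PVTI_xxx"
priority_field_id, p1_option_id = "PVTF_xxx", "opt_xxx"

mutation = """
mutation($project: ID!, $item: ID!, $field: ID!, $option: String!) {
  updateProjectV2ItemFieldValue(input: {
    projectId: $project, itemId: $item, fieldId: $field,
    value: { singleSelectOptionId: $option }
  }) { projectV2Item { id } }
}"""

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": mutation, "variables": {
        "project": project_id, "item": item_id,
        "field": priority_field_id, "option": p1_option_id,
    }},
    headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()
```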
I spent six iterations trying to make the AI more capable. Better prompting, more context, smarter agents.
The 36x improvement came from embracing constraints: a CSV format where a skipped row is glaringly obvious, a 25-item batch size that respects decision fatigue, and strategic feedback written directly into the file.
This is context engineering: not making AI smarter, but understanding the shape of the problem and working with constraints rather than against them.
/triage isn't done. It's a product that evolves. There are likely 5-6 more improvements coming. But the core insight remains: the constraint isn't the enemy. The constraint is the feature.
Think about your worst recurring meeting. That meeting probably costs more than you think—not in salaries, but in opportunity cost. What could those people be building instead?
Now imagine compressing it 36x. Not by making the AI smarter, but by understanding why your current approach fights human nature—and stopping that fight.
Thanks to Maria D'Angelo for drilling cognitive load awareness into my brain, Toby from Shopify for the bang syntax insight, and GitHub for the "bug" that became our best feature.