CAMERON WESTLAND

It's not 10x. It's 36x: this is what it looks like to kill a $30k meeting with AI

Aug 11, 2025

I killed our weekly triage meeting last month. Three hours compressed to five minutes. But here's the thing—it took me six failed attempts to get there.

The breakthrough wasn't making the AI smarter. It was making the task more structured. This is what context engineering actually looks like—messy, iterative, and focused on constraints rather than capabilities.

Let me show you what it really takes to achieve a 36x productivity gain with AI. Spoiler: it's not about the AI at all.

The Real Cost of Bad Prioritization

Are you confident you're working on the right thing at this very moment? How do you know?

Most startups face a brutal choice: spend 3 hours weekly in soul-crushing triage meetings, or skip triage entirely and hope you're building the right features. Both options are expensive.

Here's my confession: I'm allergic to meetings. I literally ask, "What's the agenda? Can we cut this short?" I decline almost everything. I'm the office curmudgeon about this.

So obviously, I had to be the one to automate triage.

Why $30,000? The Opportunity Cost Nobody Calculates

Let's do the real math.

Corporate thinking: 8 people × $150/hour × 3 hours = $3,600

Startup reality: 8 people × $1,000/hour of value creation × 3 hours = $24,000+

Here's where that $1,000/hour comes from: SaaS benchmarks put revenue per employee at $130k–$250k depending on scale (OpenView, SaaS Capital). Public cloud companies currently trade at ~8-9× revenue multiples (BVP Cloud Index). Back-of-the-envelope: an 8.5× multiple on $200k revenue per employee, spread over 2,000 work hours, comes to ~$850/hour of enterprise value. Top-quartile teams push past $1,200/hour.
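
If you want that back-of-the-envelope spelled out, here it is as a few lines of Python (nothing here beyond the midpoint assumptions above):

# Enterprise value created per employee-hour
revenue_per_employee = 200_000   # midpoint-ish of the $130k-$250k benchmark
revenue_multiple = 8.5           # within the ~8-9x cloud index range
hours_per_year = 2_000

value_per_hour = revenue_per_employee * revenue_multiple / hours_per_year
print(value_per_hour)            # 850.0 -> ~$850 of enterprise value per hour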

So an 8-person, 3-hour triage meeting? That's $24k–$30k of foregone value creation. Every. Single. Week.

But here's the real question: Does your CEO even come to triage? How do you ensure alignment with strategy? The dev manager who attended? Does he remember what the CEO said on Friday after his weekend at the cottage?

The Cognitive Load Trap I Didn't See Coming

Before I tell you about my six failed attempts, let me share what Maria D'Angelo taught me years ago about decision fatigue.

I'd complained to Maria once about Lever only letting me review 25 resumes at a time. "That's so inefficient," I said. "I could power through 100 if they'd let me."

Maria, with her PhD in cognitive psychology, just smiled. "Cameron, after 25 decisions, your quality drops to near zero. You think you're being efficient, but you're just making bad decisions faster."

The research on decision fatigue backs her up. In the well-known "hungry judge" study (Danziger et al., 2011), judges granted favorable rulings about 65% of the time at the start of a session. By the end? Nearly 0%. We literally get worse at decisions the more we make.

This insight would save my triage automation later. But first, I had to fail six times.

Six Iterations of Failure (And What I Learned)

Iteration 1: The Naive Automation

I asked Claude Code for help, pointed it at the docs, and built a simple slash command:

/triage

The AI would fetch issues and prioritize them. Except it kept missing issues. Out of 25 items, it would process maybe 15. Sometimes 18. Never all 25.

The model was being lazy. Or creative. It would skip around, summarize, occasionally hallucinate issues that weren't even in triage.

But the bigger problem? It had no context for our strategic goals. It was like letting an intern who'd never attended a strategy meeting make critical prioritization decisions.

Iterations 2-3: Still Missing Issues

I tried markdown drafts. I tried better prompting. I discovered Claude Code's "bang syntax" through a tweet from Toby at Shopify—it runs shell commands and loads their output into context before the model starts processing. Each approach got slightly better, but the AI still couldn't reliably process every issue. Even with all 25 issues clearly in context, it would skip 3-4 of them.

"Oh yes, let me add that one," it would say when prompted.

Maddening.

Iteration 4: The CSV Revelation

Here's when things clicked. What if the problem wasn't the model's capability, but the format?

CSV. A spreadsheet. The most boring possible format.

But here's the thing about CSV: you can't skip a row. Row 17 exists between row 16 and row 18. The model HAS to fill every cell or leave it obviously blank.

issue_number,title,priority_recommendation,assignee_recommendation,rationale
3847,"Fix auth timeout","","",""
3848,"Update docs","","",""

Empty cells screaming to be filled. The model couldn't skip them without it being obvious.

Suddenly, 100% coverage. Every. Single. Issue.

We went from ~88% coverage to 100%. Not through better prompting. Through better structure.
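
For flavor, here's a minimal sketch of the scaffold step, assuming issues have already been fetched into dicts with number and title keys (the helper and its shape are my stand-ins, not the actual implementation):

import csv

COLUMNS = ["issue_number", "title", "priority_recommendation",
           "assignee_recommendation", "rationale"]

def write_scaffold(issues, path="triage.csv"):
    """One row per issue, recommendation cells left deliberately blank."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        for issue in issues:
            # Empty cells are the contract: a skipped issue shows up as
            # an untouched row, not a silent omission.
            writer.writerow([issue["number"], issue["title"], "", "", ""])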

Iteration 5: Strategic Context That Actually Works

We're a pre-PMF, default-dead startup. EVERY decision about what to work on is mission critical.

I gave our strategic advisor agent (trained on CEO transcripts and strategy docs) direct access to edit the CSV:

issue_number,priority_recommendation,strategic_feedback
3471,P3,"BRUTAL TRUTH: Workspace naming consistency does NOT move users from 'would use for free' to 'take my money'."
3528,P1,"CITATION REJECTION BLOCKER: This directly impacts evidence weighting - core conviction deliverable."

This was like having our CEO review every triage decision. Except available 24/7 with brutal honesty about strategic alignment.
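
Structure polices this pass too: any row the agent leaves untouched is easy to catch. A sketch of what I mean (column names match the CSV above; the function itself is illustrative):

import csv

def unreviewed_rows(path="triage.csv"):
    """Issue numbers whose strategic_feedback cell is still empty."""
    with open(path, newline="") as f:
        return [row["issue_number"]
                for row in csv.DictReader(f)
                if not row["strategic_feedback"].strip()]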

Iteration 6: The 25-Item Enhancement

Remember Maria's insight about decision fatigue? When I hit GitHub's API limit (100 items), my instinct was to implement pagination. Then I remembered her lesson.

I set the default to 25 items:

BATCH_SIZE=${1:-25}  # Cognitive load management

We can now triage more frequently in smaller batches. Decision quality stays high. The 5-minute sessions are energizing, not exhausting.
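
In code, honoring that limit is almost nothing (a sketch; assume items is the fetched issue list):

BATCH_SIZE = 25  # Maria's ceiling: decision quality collapses past ~25

def batches(items, size=BATCH_SIZE):
    """Yield triage batches small enough to keep judgment sharp."""
    for start in range(0, len(items), size):
        yield items[start:start + size]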

The constraint became the feature.

The Numbers That Matter

Let's talk about what 36x really means:

Before:

  • 3 hours × 8 people = 24 person-hours
  • ~88% issue coverage (some always missed)
  • Strategic context from whoever showed up, filtered through their weekend

After:

  • 5 minutes × 1 person = 0.083 person-hours
  • 100% issue coverage (CSV forces completeness)
  • Strategic context from all documentation, always current

Do the math: 24 person-hours ÷ 0.083 person-hours ≈ 289x fewer person-hours. But I promised you 36x, and that's what matters—3 hours down to 5 minutes is a 36x time compression.

The real gain? $30,000 weekly opportunity cost returned to building instead of talking. $1.5M per year.

What This Actually Looks Like NOW

The entire workflow is one command:

/triage

The command loads issues into CSV, runs parallel analysis, applies strategic feedback, and presents 25 items for review. I spend 5 minutes checking the AI's work, not doing the work.

When I say "execute," it updates GitHub via GraphQL, assigns developers, and documents every decision:

# Simplified execution (variables come from the reviewed CSV row)
# Move the project item to its new priority via GitHub's GraphQL API
mutation = f'''mutation {{
  updateProjectV2ItemFieldValue(
    input: {{ projectId: "{id}", itemId: "{item}", ... }}
  ) {{ projectV2Item {{ id }} }}
}}'''

# Post the analysis back to the issue as a permanent audit trail
comment_body = f'''## 🔍 Triage Analysis
**Priority**: {priority}
**Strategic Context**: {strategic_feedback}
_Triaged via CSV-based systematic analysis_'''

The Real Lesson: Constraints Are Features

I spent six iterations trying to make the AI more capable. Better prompting, more context, smarter agents.

The 36x improvement came from embracing constraints:

  • The AI skips issues? Force structure with CSV.
  • Humans have cognitive limits? Batch by 25.
  • Models lack strategic context? Give them direct document access.

This is context engineering: not making AI smarter, but understanding the shape of the problem and working with constraints rather than against them.

/triage isn't done. It's a product that evolves. There are likely 5-6 more improvements coming. But the core insight remains: the constraint isn't the enemy. The constraint is the feature.

Think about your worst recurring meeting. That meeting probably costs more than you think—not in salaries, but in opportunity cost. What could those people be building instead?

Now imagine compressing it 36x. Not by making the AI smarter, but by understanding why your current approach fights human nature—and stopping that fight.

Thanks to Maria D'Angelo for drilling cognitive load awareness into my brain, Toby from Shopify for the bang syntax insight, and GitHub for the "bug" that became our best feature.