So How Many Developers Can I Fire?

The tech world is buzzing with dramatic predictions about AI and coding:

  • "In 12 months, we may be in a world where AI is writing essentially all of the code." - Dario Amodei, Anthropic CEO
  • "I think software engineering by the end of 2025 looks very different than software engineering at the beginning of 2025." - Sam Altman, OpenAI CEO
  • "We at Meta are going to have an AI that can effectively be a sort of mid-level engineer that you have at your company that can write code." - Mark Zuckerberg

Not everyone agrees on the magnitude of the coming change. Some tech leaders offer more measured assessments:

  • "Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster." - Sundar Pichai
  • "I think the number is going to be more like 20-30% of the code could get written by AI—not 90%." - Arvind Krishna, IBM CEO

As someone who's been building and using these systems since ChatGPT's release (yes, because I'm lazy), I wanted to break down what these predictions actually mean for teams, headcount, and productivity.

What They're Saying vs. What It Actually Means

There's an important distinction to make when we hear claims that "90% of coding will be automated." Are tech leaders talking about the coding task itself, or a developer's entire job? Their language isn't always precise on this point.

When Dario Amodei says "AI is writing essentially all of the code" or Mark Zuckerberg talks about AI that "can write code," they don't explicitly clarify what portion of a developer's job they're referring to.

This distinction matters significantly. The Stripe Developer Coefficient report (2018) found that developers spend only about 25% of their work time actually writing new code (creating features from scratch). The rest goes to maintenance, addressing technical debt, debugging, meetings, design, coordination, and all the other activities that make software development work.

AI can help with many of these other tasks too - writing meeting notes, extracting decisions, architecting solutions - but the impact on headcount and team structure isn't straightforward.

If I were running my own startup, I'd be thinking: "How does my runway/use of funds change as a result of these predictions?" This is the real question many are asking when they hear these ambitious claims, even if that's not exactly what the AI labs are saying.

How Autonomous Are Today's Best AI Agents?

In my hands-on experience with cutting-edge AI coding assistants, I've found a real gap between perception and reality. Despite headlines suggesting near-autonomous coding, I intervene roughly every 10 minutes on average, and about 50% of the agent's attempts need some correction.

These interventions vary widely in scope - from quick five-minute redirections to multi-hour debugging sessions where I've ultimately had to solve complex problems myself. Yet despite this intervention frequency, I still generate about 90% of my code with AI assistance.

I've tried the latest dedicated AI coding agents too - I was a trial user of Devin but couldn't get it to produce good code no matter how hard I tried. Current products in this space are still immature and unstable. In my experience, they might be useful for some codebases, but they're simply not good enough for production work yet.

What I've discovered is that AI gives me more "headroom" to think strategically while handling implementation details. This means I'm more productive and can confidently tackle less familiar domains by leveraging my critical thinking skills alongside AI assistance. But I'm still only working on one thing at a time - I'm not managing an army of autonomous agents.

The truth is that skilled iteration is required to get good outputs from today's models. While AI coding tools are genuinely transformative, they're not yet the autonomous agents that headlines might suggest.

Modeling the Future: What a 10x Breakthrough Might Actually Mean

But what if they are sitting on the next big breakthrough? Can we bet on that?

Let's assume a massive technological breakthrough happens - reducing the error rate from 50% to 5%. I ran some back-of-napkin math on this hypothetical future:

With a 5% error rate and variable intervention times (averaging 31 minutes, ranging from 5 minutes to 2+ hours for complex issues), one developer could potentially manage around 5 AI agents simultaneously.

This would mean working on different projects throughout the day - perhaps two in the morning, three in the afternoon, plus the inevitable bug fixes and overhead. That's an exciting productivity multiplier that could lead to an 80% reduction in development headcount (a 5:1 ratio). This is actually quite close to the 90% claims - the difference is whether you need 10% or 20% of your current developers. That's double the headcount, which matters significantly for planning, but the core message remains: AI could dramatically transform team sizes and structures.
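To make that back-of-napkin math inspectable, here's a minimal sketch of the model I'm using. The 5% error rate and 31-minute average intervention come from the hypothetical above; the checkpoint cadence and overhead fraction are my own assumptions, so treat the output as a rough bound rather than a forecast:

```python
# Back-of-napkin capacity model: how many AI agents could one developer
# supervise? Error rate and intervention time are the hypothetical numbers
# from this post; checkpoint cadence and overhead are assumed for illustration.

CHECKPOINTS_PER_HOUR = 6      # assume an agent surfaces work every ~10 min
ERROR_RATE = 0.05             # hypothetical post-breakthrough failure rate
AVG_INTERVENTION_MIN = 31     # average fix-up time (ranges 5 min to 2+ hrs)
OVERHEAD_FRACTION = 0.25      # assumed slack for meetings, reviews, context switching

# Developer minutes consumed per agent for each hour of agent runtime.
attention_per_agent = CHECKPOINTS_PER_HOUR * ERROR_RATE * AVG_INTERVENTION_MIN

# Supervision minutes a developer can realistically offer per hour.
available = 60 * (1 - OVERHEAD_FRACTION)

print(f"Attention per agent-hour: {attention_per_agent:.1f} min")
print(f"Max agents per developer: ~{available / attention_per_agent:.1f}")
# -> 9.3 min per agent-hour, ~4.8 agents: roughly the 5:1 ratio above
```

Note how sensitive this is to intervention time: push the average from 31 minutes to an hour and the ratio drops below three agents per developer.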

The Economics Don't Scale (Yet)

Now let's talk costs in the context of this technical capacity model. OpenAI reportedly plans to charge up to $20,000 a month for specialized AI 'agents' - according to reporting from The Information, cited by TechCrunch - pricing that lines up with the kind of agent capabilities we're discussing.

Just running inference on an H100 GPU (80GB) costs $2.49/GPU/hr at current market rates. For an agent running 24/7, that's $21,800/year in infrastructure costs alone. And you can't just have the GPU for 8 hours a day — there's fierce competition for these resources, meaning you're likely paying for them around the clock to ensure availability. Add the provider markup, and we're looking at a conservative estimate of $50,000/year per agent.

At max capacity (5 agents per developer), the yearly bill (sanity-checked in the sketch after this list) comes to:

  • $250,000/year in agent costs
  • $150,000/year for the developer
  • Total: $400,000/year per developer-agent team
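Here's that arithmetic as a quick sanity check, using the H100 rate and the $50,000 all-in estimate from above (the 5-agent ratio comes from the capacity sketch earlier):

```python
# Sanity check of the cost figures above. The H100 rate is the market rate
# quoted earlier; the $50k all-in agent cost folds in provider markup.

H100_RATE = 2.49                 # $/GPU/hr for one H100 (80GB)
HOURS_PER_YEAR = 24 * 365        # paying around the clock for availability

raw_inference = H100_RATE * HOURS_PER_YEAR    # raw compute per agent
agent_all_in = 50_000            # conservative estimate with provider markup
AGENTS_PER_DEV = 5
DEV_SALARY = 150_000

team_total = AGENTS_PER_DEV * agent_all_in + DEV_SALARY
print(f"Raw inference per agent: ${raw_inference:,.0f}/yr")   # ~$21,800/yr
print(f"Developer-agent team:    ${team_total:,}/yr")         # $400,000/yr
```

Even shaving the all-in agent cost in half only brings the team to $275,000/year - still a big line item for most budgets.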

Even if the technology exists, would your company invest an extra $250,000 per developer? The capacity might be there, but organizational budgets will resist - even if the theoretical productivity gains sound impressive on paper. What's more concerning is the concentration of risk.

When something goes wrong with a traditional team, responsibility and knowledge are distributed across multiple people. In this model, you have just "one throat to choke" - a single developer plus their AI agents. If that developer leaves, gets sick, or makes a critical error, the fallout could be far more severe than in traditional team structures.

It may take 5+ years before companies are confident enough in both the ROI and risk management approaches to widely invest in this level of AI augmentation.

The Most Ambitious Technological Bet

I think the most realistic explanation isn't that AI labs are sitting on some secret breakthrough. When they say "90% of coding will be automated," they're observing how their own developers (who are naturally AI-enthusiastic) are already using these tools.

They're essentially saying: "We need to bring the current state to the world." Even with today's 50% error rate, the productivity gains are substantial.

I don't believe we're approaching a world where one developer manages five completely different projects simultaneously. What's more likely is that we're approaching a world where what previously required a team can be done by a small number of individuals.

High-Agency Teams: Already Living in the Future

But here's the dirty secret: that was already happening in the most efficient organizations. Look at Y Combinator startups - they've always had high-agency individuals building fully vertically integrated stacks. YC's rigorous selection process (with an acceptance rate lower than Harvard's) has already given us a model of what high-agency developers look like. Their famous mantra "we fund founders, not ideas" acknowledges this reality - they bet on exceptional people who can execute, adapt, and deliver regardless of their initial concept.

Will Larson's "An Elegant Puzzle" prescribes that managers should support six to eight engineers and managers-of-managers should support four to six managers - creating the standard team structure in traditional organizations. YC startups buck that trend completely. The 2-3 founders are EACH effectively handling what would require multiple managers and engineering teams elsewhere - and that's the model that AI is now helping to democratize. (The irony isn't lost on me that Will worked at YC startups while writing this book.)

AI is accelerating and democratizing that capability, not creating an entirely new level of productivity. These YC startups aren't going to reach some new level with AI - they're already there, already living in the future. What we're seeing is AI helping everyone else catch up.

There's a massive distortion between how the best teams operate today (where individuals function as teams of one) and how the rest of the industry works. This gap helps explain why the messaging from AI labs resonates so differently depending on who's listening.

Enterprise Opportunity: Different Scale, Same Principles

Sam Altman and Dario Amodei aren't actually speaking to these high-efficiency, scrappy startups when they make these pronouncements. They already have those markets. They are speaking to enterprises and governments. Why? Take one guess. These larger organizations represent massive revenue opportunities, with deep pockets and long procurement cycles - and they're the ones who will pay $20,000 a month for specialized AI agents.

This is still exciting and potentially disruptive. There are enterprises that assign entire teams to work that a single engineer with state-of-the-art AI assistance could handle. That represents a real shift in how we think about team structure and capability. The economic contrast is stark: an enterprise might spend $500,000+ yearly on a full development team, while an individual with AI assistance might cost them $150,000 plus perhaps $50,000 in AI tools - a significant savings even with today's immature technology.
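For what it's worth, here's that comparison in code form - all ballpark figures from this post, not benchmarks:

```python
# Enterprise cost comparison -- ballpark figures from this post, not benchmarks.
full_team = 500_000                 # yearly cost of a traditional enterprise dev team
ai_augmented = 150_000 + 50_000     # one developer plus today's AI tooling

print(f"AI-augmented cost: ${ai_augmented:,}/yr")
print(f"Savings vs. team:  ~{1 - ai_augmented / full_team:.0%}")   # -> ~60%
```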

The "Impossible" Bet I'd Actually Take

The obvious counterpoint here is that "big organizations can't function like YC startups - the contexts are too different." Enterprise leaders might argue that their scale, complexity, and compliance requirements necessitate larger teams with specialized roles.

But this is a bet I would actually take. The gap between what one high-agency developer with AI assistance can accomplish versus traditional team structures is growing exponentially. I've personally seen how many enterprise "teams of eight" are working on projects that could be handled by one or two developers with proper tooling and autonomy.

This isn't just theoretical. Take this observation from Malte Ubl, CTO of Vercel:

"Not sure exactly what vibe coding is, but @max_leiter of @v0 fame has been shipping more than 7 PRs per work day for the last 6 months."

That's at Vercel - a company with over 600 employees, not a scrappy three-person startup. Wouldn't you like every developer in your organization to ship 7 PRs per workday? I know I would.

The real impediment isn't technical capability but organizational inertia. Enterprises have built processes, hierarchies, and approval workflows that assume traditional team structures. Shifting to an AI-augmented "high-agency individuals" model requires rethinking governance and trust, not just deploying new tools.

The organizations that figure this out will have an enormous competitive advantage - able to ship features in weeks that would take their competitors quarters, with dramatically lower costs.

Practical Reality Check

I recognize this perspective might be disappointing to some YC startup CEOs who were hoping to get 10x more efficient in the next 12 months. But even a 2x productivity gain is revolutionary if you can achieve it consistently and at scale.

Of course, cutting-edge research in areas like self-healing code, advanced prompt engineering, and agent orchestration continues to push boundaries. There are labs making genuine breakthroughs every month. But there's a vast difference between research advances and reliable, production-ready tools that consistently deliver value. The gap between "it works in our lab" and "it works in your complex production environment" remains substantial - and that's where the 50% error rate reality comes in.

In the end, I suspect the "90% automated" claim isn't about a coming quantum leap in agent capability, but rather the widespread adoption of tools that are already changing how developers like me work today - tools that, despite their limitations, are already powerful enough to transform our industry. It's the scrappy, high-agency style that AI will democratize - and the enterprise players who adopt that mindset will reap the biggest rewards.