CAMERON WESTLAND

The Translation Layer

Dec 17, 2025

Why some AI feels like translation—and some feels like replacement

TL;DR — AI gets adopted fastest when it translates human judgment into a different format (notes → worksheet, jargon → plain English, transcript → summary). It gets scary when it replaces the thing you were paid to produce.

I’m at Nathan Phillips Square watching my kid’s class skate. A handful of parents, a couple of teachers, and a bunch of kids lapping the rink like it’s their job.

We took the subway over together, so we’re doing the usual small talk. Inevitably someone asks what I do. I say I’m a product engineer working in AI.

And then—every time—the conversation opens up.

I didn’t have to pull any of this out of people. They wanted to talk about it. Here’s what I heard in about an hour.

Five stories from one rink

1) “Should they go back to school?”

One parent told me their spouse is a PHP developer. They were laid off about a year ago and haven’t found work since.

The question quickly became: do they go back to school—maybe a graduate program at U of T—and pivot into AI?

Toronto actually has a real advantage here. The ecosystem is tight: U of T, the Vector Institute, startups, and big companies all overlap in ways you don’t always see in other cities.

But there’s also a harder truth: there’s never been more free material online if you’re self‑motivated (Andrew Ng’s foundational Machine Learning Specialization is still a great place to start). At the same time, companies aren’t “handing out jobs for people to learn” the way they did during the zero‑interest‑rate era. That chapter feels closed.

2) “I just got 60,000 lines of AI code dumped on me.”

Another parent’s spouse works in Python at a large company. Their anecdote: 60,000 lines of AI‑generated code landed in their lap… to review.

As a developer, that’s the nightmare scenario: someone else got the speed, and you inherited the cognitive load.

I’ve written about this dynamic before—most directly in The Externality of AI Velocity and again in From Generic Agents to Human‑First Sub‑Agents. The theme is the same: if your “productivity” creates work for your teammates, it’s not productivity—it’s an externality.

The part that stuck with me was how casual this story was. Spouse‑to‑spouse, rink‑side. That tells you it’s happening often enough to become dinner table conversation.

3) “I’m an illustrator.”

That same parent is also an illustrator.

And the new image models are… genuinely remarkable. If you’ve tried the recent crop, you know we’ve crossed a threshold. Google’s Nano Banana Pro announcement and OpenAI’s new ChatGPT Images release are just the latest public signals of how fast the floor is rising.

Illustration and its neighbors (graphic design, concept art, marketing assets) are industries hungry for speed. The economics are brutal. The tooling is starting to line up with those incentives.

I told them the honest thing: this is real.

We love to say “people reskill and new jobs emerge.” That’s directionally true. But if you dig into any real technological transition, there are also victims—real people who don’t recover. I might have been talking to one of them.

4) “Our pharma company is all‑in.”

The parent in pharma had a completely different vibe: their company is aggressively adopting.

Everyone has a private “work” instance of ChatGPT with internal data (the basic enterprise pattern behind something like ChatGPT Enterprise). They use Copilot for meeting notes. And the most interesting part: there’s a strategic initiative where every employee builds their own agent, submits it for review, and the company maintains an internal marketplace.

That’s not a toy. That’s a serious bet on compounding internal leverage.

5) The teachers: “It’s complicated… but it’s helping.”

The teachers are under siege in one sense—kids with laptops and browsers can go to ChatGPT, and policing that is messy.

But what surprised me was the positive sentiment.

One teacher described how they used to have unstructured notes for a lesson. Turning that into a clean worksheet meant either:

  • investing unpaid hours after school, or
  • hunting for something online that was close enough.

Now they feed the same notes into an AI, get a structured first draft, and iterate. The output is better, and the time cost drops drastically. There’s even survey data suggesting teachers who use AI weekly report meaningful time savings (Gallup write‑up).

Another example I heard: report card comments. There’s pedagogical language that teachers understand, but parents often don’t. Teachers can write what they mean in their own words, then use AI to translate it into something clearer and more accessible.

This is the part that stuck with me.

Translation vs. replacement

Nobody freaks out about language translation anymore. English → French is normal. It’s boring. It’s infrastructure.

But AI feels existential for some jobs and not others. Why?

I think the difference is whether AI is acting as a translation layer or as a replacement.

Translation layer

A translation layer helps you re-express human judgment in a different format:

  • teacher notes → a worksheet
  • “teacher language” → “parent language”
  • a dense transcript → a crisp summary
  • a rough idea → a clearer articulation

The teacher isn’t being replaced. They still know their students. They still have taste. They still own the judgment. The AI is mostly doing restructuring and rewriting.

(That framing is basically the same idea I wrote about in Why LLMs Might Make Better Editors Than Creators.)

Replacement

Replacement is when the model outputs the deliverable directly.

For the illustrator, the thing they’ve historically been paid to produce—assets—can now be generated (and iterated) by someone else, cheaply, instantly, and “good enough” more often than people want to admit.

That’s not “help me translate my vision faster.” That’s “the asset exists without you.”

The failure mode: treating replacement tools like translation tools

The 60,000 lines of AI‑generated code story is what happens when someone treats a replacement tool like a translation tool.

They “translated intent into code,” but they didn’t do the work of understanding it. The cognitive burden got externalized onto a colleague. Speed for one person, mess for another.

Where I sit in this

My own work mostly lives in the translation layer.

At Tilt, we build tools for wealth managers and advisors. Financial professionals already use ChatGPT—not for stock predictions, but for understanding:

  • drop in an earnings call transcript
  • summarize it
  • ask questions
  • extract risks, themes, open questions

That’s translation: taking dense information and making it usable in a different context.
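That workflow can be sketched in a few lines. This is a minimal sketch of the pattern, not Tilt's actual code: the function name and prompt framing are mine, and the commented-out model call assumes the OpenAI Python SDK, though any chat-completion API would do.

```python
# Minimal sketch of the "translation layer" pattern: take a dense
# earnings-call transcript and ask the model to re-express it,
# leaving the judgment (what matters, what to ask next) with the reader.

def build_translation_prompt(transcript: str) -> str:
    """Frame the task as translation, not analysis: restructure the
    source material into a summary, risks, and open questions."""
    return (
        "You are a translation layer, not an analyst.\n"
        "Re-express the transcript below as:\n"
        "1. A three-sentence summary\n"
        "2. Key risks mentioned by management\n"
        "3. Open questions an advisor should follow up on\n\n"
        f"Transcript:\n{transcript}"
    )

# The model call itself would look roughly like this (untested sketch,
# assuming the OpenAI Python SDK):
#
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user",
#                "content": build_translation_prompt(transcript_text)}],
# )
# print(resp.choices[0].message.content)
```

The instruction at the top is doing the real work: it tells the model to reshape what is already there rather than invent a take of its own.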

That pharma company’s internal agent marketplace is the same bet, scaled up: translation layers everywhere, compounding into something significant.

The uncomfortable part

I don’t know where the PHP developer lands on this spectrum.

The optimistic read is that AI fluency becomes the new literacy—learn the tools, and you become more valuable.

The pessimistic read is that the line between “learn to use the tool” and “the tool does your job” keeps moving. And you don’t always know which side you’re standing on until it’s too late.

What I do know is that I heard all of these stories in one afternoon, unprompted, at a skating rink. People are hungry to have this conversation. They’re trying to figure out which side of the line they’re on.