Sam Altman posted about the future of AI recently—his "Three Observations"—and I found myself thinking about it this morning while lying in bed. His core message was that by 2035, anyone should be able to harness the skills of anyone from 2025. Think about that: someone who can barely read having access to the capabilities of a Harvard lawyer. That's a massive shift.
I've been playing with OpenAI's latest tools, and it got me thinking about where we're actually heading. Here are three things that have been rattling around in my head.
I got access to Deep Research through my company. What's interesting isn't that I used it for work—that was the point—but how quickly I started using it for everything else in my life.
Almost overnight, I was throwing all kinds of personal questions at it. Need to find summer camps for my kids in Toronto? Deep Research. Negotiating rent for a one-bedroom apartment? I had it run a market analysis of rental distributions in my neighborhood. Weekly meal planning? It didn't just plan the meals—it priced everything out at No Frills and found price matches at other stores.
I even used it to research coffee because I thought I was paying too much. It analyzed everything: bulk options, Costco deals, pre-ground versus whole bean, even instant coffee with different flavor profiles. Who would ever hire a researcher for that? Nobody—it would be ridiculous. But at $200 a month? I'm going to use it for everything.
What's wild is how fast your behavior changes when you have these tools. If we had hired a full-time researcher at my company, I would never have used them to figure out what snow salt to buy or how to price my scooter on Facebook Marketplace. But because Deep Research is there and relatively cheap, I use it for all this stuff.
Think about what happens when my mom starts using Deep Research. When everyone has this level of analysis for every decision they make. The professions that rely on deep research as their bread and butter—law, medicine, academia—are going to see some interesting changes.
It's like having that one friend who knows everything about phones: you go to them because they've actually done the research, right? They're constantly thinking about that topic and following new developments. Now imagine having that level of expertise about everything, all the time.
I use Cursor all day—what people are calling "vibe coding" now. And look, I have some serious concerns about Sam's vision of having a thousand AI agents working for you.
Here's the thing: I have to watch Cursor like a hawk. Not because it might do anything nefarious, but because it sucks sometimes. It makes a lot of mistakes. The only reason it works at all is that we can quickly test and adjust—run the code, check the types, fix the bugs. But man, it tries some really bad things sometimes. It gets forgetful. It has all these quirks that are supposedly going to be solved problems, but we're not there yet.
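To make that "test and adjust" loop concrete, here's roughly the kind of gate I lean on after each batch of AI edits. This is just a minimal sketch of my own habit, assuming a Python project with mypy and pytest on the PATH; nothing here is a Cursor feature, and the file name and commands are my own setup.

```python
# check.py - a quick "did the AI break anything?" gate to run after a batch of edits.
# Assumes a Python project with mypy and pytest installed; swap in your own stack's checks.
import subprocess
import sys

CHECKS = [
    ["mypy", "."],      # catch type errors the agent introduced
    ["pytest", "-q"],   # make sure the existing tests still pass
]

def main() -> int:
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # stop at the first failure so it can go straight back to the agent
            print(f"FAILED: {' '.join(cmd)}")
            return result.returncode
    print("all checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

It's not sophisticated, and that's the point: the loop only works because each pass through it is cheap and I'm still the one reading the diff.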
If I turned up the speed—which is effectively what you're doing when you go parallel with multiple agents—I wouldn't have enough time in the day to make sure it did the right thing. And at a thousand agents? Who's going to read all that code? Other AIs? And who's going to trust those AIs?
It's like having a speed limit. We can build cars that go way faster, sure. You could get from Toronto to New York at 500 kilometers per hour. But we have speed limits because it's not safe to go faster. I wonder if we're going to need something similar for AI-generated code—some kind of practical limit because the cost of verification doesn't scale down as fast as the cost of generation.
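Here's a rough back-of-envelope version of that worry. Every number below is made up purely to show the shape of the problem, not a measurement of anything:

```python
# Back-of-envelope: generation gets cheaper and faster, but my review time per line doesn't.
# All numbers are invented for illustration.

lines_per_agent_per_day = 1_500   # assume each agent produces this much code
review_minutes_per_line = 0.25    # assume ~15 seconds per line to actually read and judge it
my_review_hours_per_day = 6       # hours I can realistically spend reading code

for agents in [1, 10, 100, 1_000]:
    generated = agents * lines_per_agent_per_day
    review_hours_needed = generated * review_minutes_per_line / 60
    coverage = min(1.0, my_review_hours_per_day / review_hours_needed)
    print(f"{agents:>5} agents -> {generated:>9,} lines/day, "
          f"{review_hours_needed:>7,.0f} review hours needed, "
          f"I can actually look at {coverage:.1%} of it")
```

With one agent I can more or less keep up. At ten I'm already skimming, and at a thousand I'm reviewing a fraction of a percent of what gets written. The generation side scales with money; the verification side scales with me.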
The tools are changing how we work and make decisions right now. That's clear. But scaling everything up isn't as simple as just adding more AI agents. The bugs will get more subtle. The verification problems will get harder.
Sam's vision of 2035 is compelling, but getting there safely might be trickier than we think. It's not just about making AI faster and cheaper—it's about figuring out how to manage it when it is.