Matt Shumer Is Right. Now What?

Matt Shumer's essay "Something Big Is Happening" went viral this week: 80+ million views, Fortune syndication, a CNBC appearance. If you haven't read it, the short version: a founder six years into building AI startups wrote the most honest thing he could to his friends and family about what AI can already do and where it's headed. He compared this moment to February 2020. The polite cocktail-party version of the AI conversation isn't cutting it anymore.

I shared it with my team when it dropped and told them I pretty much 100% agree with him. Good for him for writing it. We are going down the rabbit hole.

But here's what's been rattling around in my head since: Shumer nailed the diagnosis. What I can't stop thinking about is the prescription.

The Diagnosis Is Correct

The core facts aren't controversial among people actually building with AI every day. The top models are better than most people realize, and they're improving faster than even we AI builders expect. The capability curve is steep and shows no signs of flattening. The gap between what these systems can do and what most people think they can do is enormous, and it's growing.

I know this because I live in it. I've spent the last 18 months building nothing but AI products. I use these models every day, all day. The February 5th model releases were just another step on this inevitable journey.

Shumer is right that this is not a drill.

But "Adapt or Die" Is the Wrong Frame

Here's where I part ways with the conversation, not with Shumer specifically, but with the entire discourse that's exploded around his piece.

The responses have fallen into predictable camps. Skeptics are debating timelines and pointing out that "capability ≠ adoption" (fair, but missing the forest for the trees). Optimists are saying "new jobs will emerge" (historically true, but the speed question is genuinely different this time). And Shumer's own advice, while solid, is individual: use AI an hour a day, get your finances in order, rethink what you tell your kids.

All useful. None sufficient.

Because the question that actually matters isn't "can I personally keep up?" It's: can we build systems where the gains from AI are broadly shared — or do they inevitably concentrate among the people who already have the most leverage?

That's not a rhetorical question. It's a design question. And the answer isn't determined by the models themselves. It's determined by the infrastructure we build around them and the economic logic we embed into that infrastructure.

The Narrow Path

I've been asking myself whether there's a version of this that becomes an unmitigated good. Not just for founders, not just for people in tech, but for the independent business owner, the employee, the economy at large. I think of it as the narrow path — the sequence of choices that threads between the dystopias.

I don't have the full map. Nobody does. But I've been thinking a lot about what has to be true for us to find it, and a few things keep coming up.

The economics have to flow back to the people creating the value. When AI makes a business more productive, who captures that gain? If the answer is primarily the AI platform and its shareholders, we've just built a more sophisticated version of the same extractive pattern. The narrow path means building systems where the people using AI — running the businesses, doing the work, contributing the knowledge — participate meaningfully in the value it creates.

Human agency has to be a design principle, not a compliance requirement. There's a version of AI adoption that's "automate everything, remove the humans, cut costs." It's the obvious version. It's also the one that leads to the worst outcomes for the most people. The narrow path runs through a less obvious insight: that for most real-world business applications, humans and AI working together outperform AI alone. Not because we're sentimental about employment — because judgment, context, accountability, and trust still matter, and they're still human.

Who the AI works for has to be unambiguous. This is the one that sounds simple and isn't. Your AI's behavior is shaped by the business model of whoever provides it. If the AI is monetized through your attention, it will optimize for engagement. If it's monetized through your data, it will optimize for extraction. If you pay for it directly as a tool that serves your interests, it optimizes for your outcomes. The narrow path requires AI where the answer to "who does this serve?" is obvious, and it's you.

None of these are new ideas individually. But I don't see anyone stitching them together into a coherent system. Most of the AI conversation treats these as separate policy debates or philosophical talking points. I think they're engineering problems — and I think they're solvable.

I'm building what I believe that system should look like, incorporating these ideas and several others. This work matters, and it's what keeps me going. I'm realistic about the odds, but I haven't gotten this far by giving up.

What I'd Tell the People Shumer Is Writing For

If you're one of the "friends and family" Shumer is addressing — someone outside of tech who's trying to make sense of this — I'd add two things to his advice.

First: don't just learn to use AI. Pay attention to who is building it and why. Not all AI is built to serve your interests. Some of it is built to serve the interests of the platform that provides it. That distinction is going to matter more and more as these tools become central to how you work, make decisions, and run your business.

Second: the outcome isn't predetermined. The technology is coming whether we're ready or not — Shumer is right about that. But there's a difference between a world where AI is something that happens to you and a world where it's something that works for you. Which one we get depends on the systems we choose to build on.

We're at the fork. The narrow path exists. But the window to choose it is closing fast.


*I'm building Force Multiplier AI and documenting the journey here. More on the narrow path soon.*